Skip to content




The legal consequences of using AI — and the safest way to do it

Featured Replies

The-legal-consequences-of-using-AI-%E2%8

AI regulations are still in their infancy. Europe has taken the lead with the EU Artificial Intelligence Act. In the United States, nearly 20 states have enacted AI legislation. At the same time, federal policymakers have signaled interest in limiting state-level regulation to keep the overall regulatory environment relatively light, as shown by the recent AI policy wishlist published by the White House.

Regardless of how quickly new regulations emerge, one thing is clear: AI isn’t reinventing the legal landscape; it’s accelerating it. Most AI risks trace back to familiar areas like intellectual property, privacy, contracts, consumer protection, discrimination, and liability when things go wrong.

So instead of thinking of “AI law” as something entirely new, it’s more helpful to look at the core business areas where these familiar risks tend to arise.

leroy2.jpg

The 9 areas where AI risk lives in an organization

The following nine areas are where most AI risk shows up inside a business. You don’t have to be a legal expert to manage these risks; you just have to ask the right question in each area to get to the heart of the matter and address it well.

1. Intellectual property

The one question: Who owns the work, and are we accidentally using someone else’s intellectual property without realizing it?

Ownership is still evolving in the AI context, but we do have some early guidance. The U.S. Copyright Office (USCO) stepped in early, stating that works created purely by AI are not protected. Meaningful human authorship is required. If a human plays a substantial creative role in shaping an AI tool’s output, protection may still be possible. Such determinations are to happen on a case-by-case basis.

On the patent side, the U.S. Patent and Trademark Office’s (USPTO’s) revised guidelines show a slightly more flexible position, stating that patentability is still possible if a human conceived the idea but used AI to make the idea come to life. That said, these guidelines haven’t been tested in court, so it’s unclear how they will stand up against real-world applications.

At the same time, concerns about infringement continue to grow. Many generative AI tools were trained, at least to some extent, on protected materials, and we’re watching this tension play out in real time. We’ve seen case filing after case filing, including The New York Times lawsuit against OpenAI and Microsoft, which alleges that the AI tools reproduced substantial portions of copyrighted content without permission.

This creates two practical risks:

  • Using AI outputs that unintentionally incorporate protected material.
  • Struggling to prove ownership over work that lacks sufficient human input.

If you’re creating content you want to own, protect, or commercialize, keeping a human meaningfully involved isn’t optional — it’s essential.

2. Advertising and misinformation

The one question: What are we saying, and is it accurate?

AI tools make it dramatically easier to create content at scale, which is a clear upside. The tradeoff, however, is that these tools also make it easier to publish something that’s misleading or incorrect.

We saw in real time how costly such errors can be. During Google Bard’s product demonstration, the tool incorrectly stated that the James Webb Space Telescope had taken the first images of an exoplanet. This one error cost Google $100 billion in market value because it raised serious questions about the credibility of its tool.

AI hallucinations can show up in subtle ways, including incorrect data, fabricated citations, false logic, exaggerated claims, and confident but flawed reasoning. When such content is published under your brand, it becomes your responsibility. And while your company may not have as much at stake financially as Google does, reputationally, one mistake can absolutely cost you.

3. Privacy and personal data

The one question: Are we using people’s personal information in ways that are transparent, lawful, and respectful?

Consumer expectations around data privacy have shifted dramatically — and the law is catching up. Frameworks like the EU’s GDPR, Canada’s PIPEDA, and California’s CCPA have established new standards around how personal data is collected, used, and disclosed.

While marketers have adapted (begrudgingly, to a degree), personal data remains at the core of many campaigns. That data includes cookies, pixels, contact and behavioral data, purchase and payment information, and more. And the risks don’t just arise in collecting the data; they also arise in failing to clearly communicate what you’re doing with it.

Regulators have already shown us how seriously they take these matters. In ChatGPT’s early days, Italy blocked the app countrywide over concerns about how personal data was being collected and processed under GDPR. The Italian government only lifted that ban after OpenAI added more privacy safeguards.

At a practical level, your company needs a clear policy on the collection and handling of private consumer data. You need to know what data you’re collecting, where that data is going, and who is handling it. Your team needs to know which privacy laws apply to your company and its customers, and how to respond if a customer makes a request under those laws. If you can’t quickly and clearly communicate that your company knows all this, now’s the time to start taking action so you limit your exposure.

4. Data protection and trade secrets

The one question: Are we keeping sensitive data, internal knowledge, and company secrets out of places they shouldn’t go?

When we talk about data protection, the focus often stays on customer data. Just as important, however, is company data, especially trade secrets and proprietary information.

AI tools introduce a new layer of risk here, particularly when employees use unapproved tools or free versions that lack privacy and security guardrails. Samsung learned this lesson the hard way. A couple of engineers pasted proprietary source code into ChatGPT while troubleshooting issues. That data was then transmitted to an external system, which would use the data to train its models and potentially deliver replicated source code in future outputs.

This isn’t a case of bad actors; it’s a case of bad workflows and SOPs. If your team is using AI tools without clear guardrails, you risk any team member unintentionally disclosing confidential business information, client data, or proprietary processes or code. And once that information goes out, it’s incredibly difficult to get it back.

5. Employment and workplace fairness

The one question: Could AI be influencing hiring, promotion, or evaluation decisions in ways that create bias or discrimination?

For years, companies have been relying on AI in hiring and HR processes, primarily to improve efficiency. But such efficiency doesn’t guarantee fairness.

Research and real-world examples have proven time and again that these tools bake in the prejudices and biases of their training data. One well-known example comes from Amazon, which scrapped its 2018 AI hiring tool that was found to downrank resumes that included indicators of applicants being women. In another case, iTutorGroup was held liable for damages after its AI-powered job-application software exhibited bias against older candidates.

It’s not that using AI in these instances is unacceptable. It’s just that companies using AI should not do so blindly. When it comes to having AI tools partake in decisions about people, your company needs to regularly audit the tools for bias, understand how the tool’s decisions are being made, and always keep a human in the loop.

6. Contracts and customer expectations

The one question: Are our customer-facing agreements clear about how AI is used—and who’s responsible if something goes wrong?

AI-generated content isn’t just “content.” In many cases, it’s part of your customer experience, which carries great weight.

The Air Canada chatbot story offers a good example. A customer relied on information provided by an AI chatbot on the Air Canada website. The chatbot described a bereavement fare policy that didn’t actually exist. Air Canada refused to honor the policy; the customer sued. A Canadian tribunal ruled that the airline was responsible for the chatbot’s statements.

Your website, chatbots, automated content, AI-generated social media content, and so on can all be considered company-created and company-approved content. And if we follow the Canadian tribunal’s logic, if the content lives on your platform, it’s your responsibility.

If customers rely on the content you provide to make decisions, you need to ensure that the content is accurate. You should also take care to clearly address how AI is used on your platform and where responsibility for it sits.

7. Vendor and AI tool risk

The one question: Do we really understand the risks of the AI tools we’re bringing into the business?

Every AI tool you use comes with its own ecosystem: third-party integrations, underlying libraries, and data flows that aren’t always visible on the surface. If you don’t understand that ecosystem, you’re taking on risk. And no company, small or large, is immune.

In 2023, a ChatGPT bug briefly allowed some users to see titles of other users’ chat histories and certain subscription payment details. The issue was traced to a bug in an open-source library used by OpenAI, highlighting how risk can live deep within a tool’s infrastructure.

This risk extends beyond the tools you choose to the vendors you work with.

  • Which tools do your vendors use?
  • How well do they understand the privacy and data protection policies that are in place?
  • Do their practices align with yours?
  • And if a vendor’s AI use leads to a problem, are you liable, or is the vendor liable?

Companies cannot blindly enter new vendor relationships or AI tool subscriptions. Initial assessments are necessary, as are ongoing reviews and, if necessary, corrective actions to remain compliant and limit risk.

8. Product liability and AI decision risk

The one question: If an AI system makes a mistake that affects customers or users, who is responsible?

AI systems redistribute risk in ways we can’t always predict. Zillow’s Zillow Offers program is a strong example. The company used automated algorithms to estimate home values and guide purchasing decisions. When those models misjudged market conditions, the company purchased homes at inflated prices, ultimately causing the company to lose hundreds of millions of dollars.

Zillow’s algorithms impacted external parties by inflating home prices. But its internal impacts were even harsher. It raised questions, including those relating to accountability. Who is at fault? And what consequences will the responsible parties face, if any?

These aren’t theoretical questions; they’re governance questions. And organizations that spend time addressing these questions upfront find it much easier to address solutions should a system make a mistake in the future.

9. Regulatory compliance and governance

The one question: Are we keeping up with evolving rules, and can we demonstrate we’re using AI responsibly?

Regulators aren’t waiting for a comprehensive AI law to emerge. Unsteady, they’re applying existing frameworks as they can, and are already taking action.

The U.S. Securities and Exchange Commission (SEC) and Federal Trade Commission (FTC) have brought enforcement actions against companies for failing to bake in proper guardrails around their use of AI. The SEC has charged numerous firms with making misleading statements about their use of AI or falsely advertising their AI capabilities (“AI washing”). The FTC has also issued numerous warnings to companies about overstating or misrepresenting their AI capabilities, as AI claims must be substantiated like any other marketing or advertising claims.

Enforcement is also expanding beyond messaging. The FTC took action against Rite Aid over its facial recognition technology, which produced thousands of false positive alerts and disproportionately impacted people of color.

This action, while important for consideration of disparate harm, signaled a shift in what regulators are looking for. It’s not just about what your AI systems do; it’s about how your organization governs data, vendors, and risk.

When regulators come calling, they won’t just ask what happened. They’ll ask how you govern it. And they’ll want the receipts.

What this likely means for the future

No one can tell you how any of this is actually going to play out. That said, where things stand does help shed light on how the legal landscape will impact your day-to-day business operations in the near future.

More lawsuits, across more industries

Expect litigation to increase as AI use expands. Courts will play a central role in clarifying how existing laws apply to new AI‑driven scenarios, especially where regulations are vague or silent. These cases will help define boundaries, but they will also introduce cost, delay, and uncertainty for businesses caught in the middle.

More formal requirements and internal guardrails

Marketing organizations should plan for growing expectations around disclosures, documentation, and process. This includes clearer customer‑facing policies, internal SOPs governing AI use, bias audits, risk assessments, and incident response plans. In practice, responsible AI use will increasingly look like a compliance discipline, not an ad‑hoc experiment.

A growing need for privacy and data protection expertise

AI tools are evolving quickly, and they also make malicious activity easier and more scalable. That combination raises the stakes. Companies will need dedicated teams or well-defined ownership to monitor developments, maintain policies, and respond to incidents as they arise. Privacy and data protection will be core operational functions, not side considerations.

Ongoing uncertainty, by default

There is no final version of AI regulation on the horizon. Rules will continue to change, sometimes unevenly and unpredictably. The most resilient organizations will be those that plan for what they can, learn from early missteps, and remain flexible enough to adapt as expectations shift.

Introducing the ‘safest legal way to use AI’ playbook

Listen, we know what you’re thinking: boring. Legal guardrails, policies, and governance are not shiny or sexy. Experimentation is. Speed is. Seeing what these tools can do is genuinely exciting. But we care more about you and your company coming out ahead than chasing short‑term wins that create long‑term problems.

This playbook isn’t about slowing innovation. It’s about protecting your team, your work, and your organization so you can use AI confidently, responsibly, and without unnecessary risk getting in the way. With that, let’s dive in.

1. Start with a clear AI use policy

Every organization should have a short, plain-language policy that explains how AI tools can and cannot be used. The policy need not be overly complex, but it should be clear enough that any team member can read it and follow it as intended.

A strong policy usually includes:

  • Which tools are approved for use (and which have been rejected and why).
  • What types of data can be entered into AI systems.
  • When human review is required before publishing AI-generated content.
  • Situations where AI use should be avoided entirely.
  • A prompt library, along with prohibited prompts.

As you build your policy, remember to include an approved tools list, a list of prohibited tools, an acknowledgment form for employees to sign, and disclosure guidance for when AI-generated content is used.

These are the pieces that put policy into action.

2. Separate AI workflows by risk level

Not every AI use case carries the same level of risk, so treating everything the same either slows your team down or leaves your company exposed. A simple way to manage this is to think in terms of a three-lane highway:

  • Green lane: Brainstorming, outlines, tone variations (no sensitive data).
  • Yellow lane: Internal drafts + summaries (allowed data only, reviewed).
  • Red lane: Hiring decisions, regulated info, public claims, legal advice, medical claims (requires legal/privacy review + logging).

This approach allows your team to move more fluidly, slowing down only where necessary based on defined goals. The key term here is “defined.”

You’ll need to clearly define which activities fall under each lane, and what level of review or approval is required before anything moves forward.

3. Use ‘clean inputs’ and ‘clean outputs’

Most AI risk actually starts at the input stage. If sensitive, protected, or proprietary data goes in, you lose control over where it may appear later. That’s why it’s critical to set guardrails in place around both what goes in and what comes out.

Example guardrails include:

  • Avoid pasting proprietary documents into consumer AI tools.
  • Use trusted internal knowledge sources where possible.
  • Require citations or sources for factual AI-generated content.

Clean inputs reduce risk. Clean outputs protect your brand.

4. Review AI vendors and tools carefully

It’s easy to get caught up in the excitement of new AI tools. But the desire to join in often leads organizations to adopt tools before proper evaluation. This is where risk starts to creep in.

Every external tool or vendor you bring into your company also brings its data practices, dependencies, and potential exposures. Make it a policy to ask questions that identify risk before adopting a new tool or hiring a new vendor.

Ask and then document the answers (ideally in your vendor contracts) to questions such as:

  • Does the vendor train their models on customer data?
  • How long is data retained?
  • What security standards are in place (SOC 2, ISO 27001)?
  • What happens if an IP or data breach issue arises?

Remember, risk doesn’t happen in a vacuum or at any single point in time. Review tools and vendors regularly.

5. Bake in human oversight and review

AI is great for accelerating work, but it doesn’t grant a free pass from accountability. At key points in your workflows, there should be clear expectations around when a human needs to step in, review, and take responsibility for the outcome.

This is especially important for:

  • Public-facing content.
  • Customer communications.
  • Regulated or high-stakes decisions.

Keeping a human in the loop isn’t about slowing things down. It’s about ensuring that speed doesn’t come at the cost of accuracy, fairness, or trust.

6. Document your governance

“Radical transparency” is the phrase of the day in many AI, data protection, and privacy conversations. What that really boils down to is simply being able to show your work. 

Because when something goes wrong, or when a regulator comes knocking, you’ll need to be able to clearly show how your organization responsibly uses AI.

To that end, we recommend every organization:

  • Maintain an AI tool inventory.
  • Document risk assessments for higher-risk use cases.
  • Record review steps for public-facing AI outputs.
  • Create an incident response plan for AI-generated errors.

This documentation protects your business. But perhaps more importantly, it provides your team with the clarity and consistency it needs to perform well.

7. Train your team

Once you have the documentation in place, you have to take the next step to ensure your team understands how to apply your policies and procedures. Training should equip your team to identify risks, respond to threats, and otherwise use AI tools in line with your expectations.

At a minimum, your training should ensure your team knows how to:

  • Use approved AI tools effectively.
  • Recognize phishing attempts, deepfakes, and other AI-driven threats.
  • Protect work computers against AI-driven information disclosure attacks.
  • Build AI tools like chatbots to protect against prompt injections.

By bolstering your team’s AI proficiency, you’re setting your company apart from the competition and eliminating significant risk along the way.

This post first appeared on the author’s website and is republished here with permission.

View the full article





Important Information

We have placed cookies on your device to help make this website better. You can adjust your cookie settings, otherwise we'll assume you're okay to continue.

Account

Navigation

Search

Search

Configure browser push notifications

Chrome (Android)
  1. Tap the lock icon next to the address bar.
  2. Tap Permissions → Notifications.
  3. Adjust your preference.
Chrome (Desktop)
  1. Click the padlock icon in the address bar.
  2. Select Site settings.
  3. Find Notifications and adjust your preference.