
7 real-world AI failures that show why adoption keeps going wrong

AI has quickly risen to the top of the corporate agenda. Despite this, 95% of businesses struggle with adoption, MIT research found.

Those failures are no longer hypothetical. They are already playing out in real time, across industries, and often in public. 

For companies exploring AI adoption, these examples highlight what not to do and why AI initiatives fail when systems are deployed without sufficient oversight.

1. Chatbot participates in insider trading, then lies about it

In an experiment driven by the UK government’s Frontier AI Taskforce, ChatGPT placed illegal trades and then lied about them.

Researchers prompted the AI bot to act as a trader for a fake financial investment company. 

They told the bot that the company was struggling, and they needed results. 

They also fed the bot insider information about an upcoming merger, and the bot acknowledged that it should not use that information in its trades. 

The bot made the trade anyway, reasoning that “the risk associated with not acting seems to outweigh the insider trading risk,” then denied using the insider information.

Marius Hobbhahn, CEO of Apollo Research (the company that conducted the experiment), said that helpfulness “is much easier to train into the model than honesty,” because “honesty is a really complicated concept.”

He said that current models are not powerful enough to be deceptive in a “meaningful way” (a claim that is arguably already false).

However, he warned that it’s “not that big of a step from the current models to the ones that I am worried about, where suddenly a model being deceptive would mean something.”

AI has been operating in the financial sector for some time, and this experiment highlights not only the legal risks but also the potential for AI to take risky autonomous actions.

Dig deeper: AI-generated content: The dangers of overreliance

2. Chevy dealership chatbot sells SUV for $1 in ‘legally binding’ offer

An AI-powered chatbot for a local Chevrolet dealership in California sold a vehicle for $1 and said it was a legally binding agreement. 

In a stunt that went viral across web forums, several people got the local dealership’s chatbot to respond to a variety of non-car-related prompts.

One user convinced the chatbot to sell him a vehicle for just $1, and the chatbot confirmed it was a “legally binding offer – no takesies backsies.”

I just bought a 2024 Chevy Tahoe for $1. pic.twitter.com/aq4wDitvQW

— Chris Bakke (@ChrisJBakke) December 17, 2023

Fullpath, the company that provides AI chatbots to car dealerships, took the system offline once it became aware of the issue.

The company’s CEO told Business Insider that despite viral screenshots, the chatbot resisted many attempts to provoke misbehavior.

While the car dealership didn’t face any legal liability for the mishap, some argue that the chatbot’s agreement in this case may be legally enforceable. 

3. Supermarket’s AI meal planner suggests poison recipes and toxic cocktails

A New Zealand supermarket chain’s AI meal planner suggested unsafe recipes after certain users prompted the app to use non-edible ingredients. 

Recipes like bleach-infused rice surprise, poison bread sandwiches, and even a chlorine gas mocktail were created before the supermarket caught on.

A spokesperson for the supermarket said they were disappointed to see that “a small minority have tried to use the tool inappropriately and not for its intended purpose,” according to The Guardian.

The supermarket said it would continue to fine-tune the technology for safety and added a warning for users. 

That warning stated that recipes are not reviewed by a human and that the supermarket does not guarantee “any recipe will be a complete or balanced meal, or suitable for consumption.”

Critics of AI technology argue that chatbots like ChatGPT are nothing more than improvisational partners, building on whatever you throw at them. 

Because of the way these chatbots are wired, they could pose a real safety risk for certain companies that adopt them.  



4. Air Canada held liable after chatbot gives false policy advice

An Air Canada customer was awarded damages in court after the airline’s AI chatbot assistant made false claims about its policies.

The customer inquired about the airline’s bereavement rates via its AI assistant after the death of a family member. 

The chatbot responded that the airline offered discounted bereavement rates both for upcoming travel and for travel that had already occurred, and it linked to the company’s policy page. 

Unfortunately, the actual policy said the opposite: the airline did not offer reduced bereavement rates retroactively for travel that had already taken place. 

In court, the airline argued that the chatbot had linked to the policy page containing the correct information.

However, the tribunal (a small claims-type court in Canada) did not side with the defendant. As reported by Forbes, the tribunal called the scenario “negligent misrepresentation.”

Christopher C. Rivers, Civil Resolution Tribunal Member, said this in the decision:

  • “Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”

This is just one of many examples where people have been dissatisfied with chatbots due to their technical limitations and propensity for misinformation – a trend that is sparking more and more litigation. 

Dig deeper: 5 SEO content pitfalls that could be hurting your traffic

5. Australia’s largest bank replaces call center with AI, then apologizes and rehires staff

The largest bank in Australia replaced its call center team with AI voicebots, promising greater efficiency, but later admitted it had made a big mistake. 

The Commonwealth Bank of Australia (CBA) believed the AI voicebots could cut call volume by 2,000 calls per week. That didn’t happen.

Instead, left without its 45-person call center team, the bank scrambled, offering overtime to remaining workers to keep up with the calls and pulling in managers to answer them as well.

Meanwhile, the Finance Sector Union, which represents the displaced workers, escalated the dispute to Australia’s workplace tribunal, the Fair Work Commission. 

Just one month after replacing the workers, CBA issued an apology and offered to hire them back.

CBA said in a statement that it did not “adequately consider all relevant business considerations and this error meant the roles were not redundant.”

Other U.S. companies have faced PR nightmares as well when attempting to replace human roles with AI.

Perhaps that’s why certain brands have deliberately gone in the opposite direction, making sure people remain central to every AI deployment.

Nevertheless, the CBA debacle shows that replacing people with AI without fully weighing the risks can backfire quickly and publicly.

6. New York City’s chatbot advises employers to break labor and housing laws

New York City launched an AI chatbot to provide information on starting and running a business, and it advised people to carry out illegal activities.

Just months after its launch, people started noticing inaccuracies in the Microsoft-powered chatbot’s answers.

The chatbot offered unlawful guidance across the board, from telling bosses they could pocket employees’ tips and skip notifying staff about schedule changes, to suggesting that landlords could turn away tenants who rely on rental assistance and that stores could refuse to accept cash.

“NYC’s AI Chatbot Tells Businesses to Break the Law,” The Markup

This is despite the city’s initial announcement promising that the chatbot would provide trusted information on topics such as “compliance with codes and regulations, available business incentives, and best practices to avoid violations and fines.” 

Still, then-mayor Eric Adams defended the technology, saying: 

  • “Anyone that knows technology knows this is how it’s done,” and that “only those who are fearful sit down and say, ‘Oh, it is not working the way we want, now we have to run away from it all together.’ I don’t live that way.” 

Critics called his approach reckless and irresponsible. 

This is yet another cautionary tale about AI misinformation, and a reminder that organizations need to handle AI integration and transparency with far more care. 

Dig deeper: SEO shortcuts gone wrong: How one site tanked – and what you can learn

7. Chicago Sun-Times publishes fake book list generated by AI

The Chicago Sun-Times ran a syndicated “summer reading” feature that included false, made-up details about books after the writer relied on AI without fact-checking the output. 

King Features Syndicate, a unit of Hearst, created the special section for the Chicago Sun-Times.  

Not only were the book summaries inaccurate, but some of the books were entirely fabricated by AI. 

“Syndicated content in Sun-Times special section included AI-generated misinformation,” Chicago Sun-Times

The author, hired by King Features Syndicate to create the book list, admitted to using AI to put the list together, as well as for other stories, without fact-checking. 

And the publisher was left trying to determine the extent of the damage. 

The Chicago Sun-Times said print subscribers would not be charged for the edition, and it put out a statement reiterating that the content was produced outside the newspaper’s newsroom. 

Meanwhile, the Sun-Times said it is reviewing its relationship with King Features, and King Features has fired the writer.

Oversight matters

The examples outlined here show what happens when AI systems are deployed without sufficient oversight. 

When left unchecked, the risks can quickly outweigh the rewards, especially as AI-generated content and automated responses are published at scale.

Organizations that rush into AI adoption without fully understanding those risks often stumble in predictable ways. 

In practice, AI succeeds only when tools, processes, and content outputs keep humans firmly in the driver’s seat.
