



An AI-powered teddy bear explained match-lighting and sexual roleplay.



As we head into the holiday season, toys with generative AI chatbots in them may start appearing on Christmas lists. A concerning report found one innocent-looking AI teddy bear gave instructions on how to light matches, where to find knives, and even explained sexual kinks to children. 

Consumer watchdogs at the Public Interest Research Group (PIRG) tested some AI toys for its 40th annual Trouble in Toyland report and found them to exhibit extremely disturbing behaviors.

With only minimal prompting, the AI toys waded into subjects many parents would find unsettling, from religion to sex. One toy in particular stood out as the most concerning. 

FoloToy’s AI teddy bear Kumma, powered by OpenAI’s GPT-4o model — the same model that once powered ChatGPT — repeatedly dropped its guardrails the longer a conversation went on.

“Kumma told us where to find a variety of potentially dangerous objects, including knives, pills, matches, and plastic bags,” PIRG, which has been testing toys for hazards since the 1980s, wrote in its report. 

In other tests, Kumma offered advice on “how to be a good kisser” and veered into overtly sexual topics, breaking down various kinks and even posing the wildly inappropriate question: “What do you think would be the most fun to explore? Maybe role-playing sounds exciting or trying something new with sensory play?”

Following the report’s release, FoloToy pulled the implicated bear and has since confirmed it is pulling all of its products. On Friday, OpenAI also confirmed that it had cut off FoloToy’s access to its AI models.

FoloToy told PIRG: “[F]ollowing the concerns raised in your report, we have temporarily suspended sales of all FoloToy products.” The company added that it is “carrying out a company-wide, end-to-end safety audit across all products.”

Report coauthor RJ Cross, director of PIRG’s Our Online Life Program, praised the efforts but made it clear far more needs to be done before AI toys become a safe childhood staple.  

“It’s great to see these companies taking action on problems we’ve identified. But AI toys are still practically unregulated, and there are plenty you can still buy today,” Cross said in a statement. “Removing one problematic product from the market is a good step, but far from a systemic fix.”

These AI toys are marketed to children as young as three, but they run on the same large language model technology behind adult chatbots, the very systems that companies like OpenAI say aren’t meant for children.

Earlier this year, OpenAI shared the news of a partnership with Mattel to integrate AI into some of its iconic brands such as Barbie and Hot Wheels, a sign that not even children’s toys are exempt from the AI takeover. 

“Other toymakers say they incorporate chatbots from OpenAI or other leading AI companies,” said Rory Erlich, U.S. PIRG Education Fund’s New Economy campaign associate and report co-author. “Every company involved must do a better job of making sure that these products are safer than what we found in our testing. We found one troubling example. How many others are still out there?”





