Amid a flood of toys boasting AI features this holiday season, a consumer advocacy group warns that more must be done to ensure the gadgets are safe for children.
The Public Interest Research Group (Pirg), which pushes for corporations and government bodies to prioritise health, safety and well-being, said in a new report that many of the new AI-enabled toys being advertised come with risks.
One of the four AI toys examined by Pirg, an AI-powered plush bear named Kumma, had very few guardrails and, according to the consumer advocacy group, “gave detailed instructions on how to light a match”, and in some instances “discussed a range of sexually explicit topics in depth in conversations lasting more than 10 minutes”.
Pirg said the company behind the toy, FoloToy, later made changes to the device after an internal safety audit, but bigger concerns linger for the toy industry and its adoption of AI as a whole.
Other companies mentioned in the report included disclaimers with their devices; the AI-toy robot maker Miko, for example, warned that it “may share some data with third-party game developers and advertising partners”.
Based on its testing of various products, Pirg said there was considerable room for improvement when it comes to making AI toys safer and easier for parents to control.
“Regulators should enforce existing consumer protection and privacy laws that do already apply to AI products,” the report's conclusion read in part. The group also went as far as to push for limits on how toys with AI features are advertised: “AI toys should be neither designed nor marketed as emotional companions for children,” Pirg's analysis said.
If neither regulators nor companies act on the group's recommendations, the report said, the slack will ultimately need to be picked up elsewhere, starting with those who make the toy purchases. “Parents should think carefully before bringing these toys home,” Pirg concluded.
Such concerns echo a larger dialogue about AI. Recent polls, particularly those conducted in the US, show increasing worries about the technology. Though the anxiety is largely driven by fears of labour disruption, there is also concern about data privacy, copyright infringement and the unknowns surrounding chatbots.
In that context, it's no surprise that the toy company Mattel has cooled on its partnership with the technology giant OpenAI, which helped to take AI mainstream with the introduction of its ChatGPT platform in late 2022.

Mattel originally said it would team up with OpenAI to “bring the magic of AI to age-appropriate play experiences with an emphasis on innovation, privacy, and safety”. However, a few weeks later, after an outcry from child safety advocates and interest groups, the company said it would delay those plans.
“Given the threat that AI poses to children’s development, not to mention their safety and privacy, such caution is more than warranted,” said Rachel Franz, director of the children's safety and advocacy group Fairplay.
“We urge Mattel to make this delay permanent,” she added. She said toys with AI and AI chatbots have the potential to invade family privacy, displace “key learning activities” and disrupt children's relationships.
“Mattel has an opportunity to be a real leader here – not in the race to the bottom to hook kids on AI – but in putting children’s needs first and scrapping its plans for AI toys altogether,” the Fairplay director said.

Still, Pirg noted that AI has potential benefits for children, especially in education.
“Chatbot-enabled technologies could offer kids personalised support in their learning, supplementing the work of teachers and parents,” the consumer advocacy group said in its AI toy analysis, which also references research showing how AI chatbots have helped children improve language and literacy skills.
The bigger worry remains that in a rush to capture the attention of busy parents who want the best for their children, AI technology is being included in toys, often without proper safeguards. “AI toys ... resemble an experiment on our kids – one with minimal oversight,” Pirg's report said.