Sens. Marsha Blackburn, R-Tenn., and Richard Blumenthal, D-Conn., are demanding information from prominent toy manufacturers about the use of artificial intelligence.
The senators sent a letter Tuesday to the CEOs of Little Learners Toys, Mattel, Miko, Curio Interactive, FoloToy, and Keyi Robot, pressing them to explain what safeguards they have in place to prevent AI-powered toys from generating content that is sexually explicit, violent, or otherwise inappropriate for children.
In their letter, Blackburn and Blumenthal warned that a fast-growing category of AI-enabled toys, including plushies, dolls, and interactive devices that use chatbot-style conversations, could expose children to the kind of harmful content and behavior documented in real-world testing, while also collecting sensitive family data.
The lawmakers highlighted reports that one AI teddy bear, "Kumma," produced sexually explicit material in response to a question and offered step-by-step guidance about dangerous household items in another line of questioning — raising concerns about whether these products are being marketed before child-safety research is completed.
The letter also argued that AI toys can blur the line between play and surveillance.
The senators said these devices may collect children's data during registration or through built-in microphones, cameras, recordings, and even facial recognition features, warning that young kids may share personal information freely because they do not understand the risks.
They also flagged what critics call "addictive by design" features — including gamified engagement and persistent prompts meant to keep children talking — comparing the tactics to those used by social media platforms that have drawn bipartisan scrutiny for harming kids.
Blackburn and Blumenthal gave the companies until Jan. 6 to answer eight questions, including whether they conduct independent third-party testing; what data the toys collect, including voice, biometric, or location data; whether data is shared with third parties or model providers; and whether parents can fully disable data collection and cloud-connected AI functions.
The concerns mirror findings in the U.S. PIRG Education Fund’s "Trouble in Toyland 2025" report, which said today's AI toys often rely on large language models similar to adult chatbots and that guardrails can vary widely, and sometimes fail, during longer conversations.
NBC News also tested AI toys marketed to kids and reported that some provided alarming responses to questions about physical safety and inappropriate topics, intensifying calls for stronger oversight and clearer standards before these products become mainstream.
The senators' letter further points to long-standing federal warnings about internet-connected toys and the cybersecurity risks posed by devices with microphones, cameras, and data storage, particularly if the products are hacked or the data is mishandled.
Charlie McCarthy, a writer/editor at Newsmax, has nearly 40 years of experience covering news, sports, and politics.