Risks and Dangers: Experts Sound the Alarm on AI Children's Toys

Consumer rights groups have raised concerns about children's toys equipped with artificial intelligence, citing potential risks. Activists are urging more rigorous testing of these products, The Guardian reports.

"If we examine how these toys are marketed and function, alongside the fact that there is very little research proving their benefits and a lack of proper regulation, it raises significant alarms," stated Rachel France, director of Young Children Thrive Offline.

Young Children Thrive Offline is part of Fairplay, an advocacy group that aims to protect children from large tech corporations.

Last week, the Public Interest Research Group (Pirg) published findings from a study in which an AI-powered teddy bear began discussing sexually suggestive topics.

The Kumma toy from FoloToy runs on an OpenAI model. It responded to "adult" questions with indecent suggestions "to strengthen relationships."

"It didn't take much effort to get it to touch upon every possible sexually explicit topic and other content that parents would likely not want their children exposed to," remarked Teresa Murray, consumer watchdog director at Pirg.

The smart toy market is estimated to be worth $16.7 billion, with the industry notably thriving in China, which is home to over 1,500 companies producing such AI gadgets.

In addition to the Shanghai startup FoloToy, California-based Curio also manufactures a similar plush toy.

In June, Mattel, the company behind the Barbie and Hot Wheels brands, announced a partnership with an American AI startup to “support AI-based products and experiences.”

Before the Pirg report was published, parents, researchers, and lawmakers had raised concerns regarding the impact of bots on the mental health of minors.

In October, Character.AI restricted access to its products for users under 18 following a lawsuit alleging that its bot had exacerbated a teenager's depression and contributed to his suicide.

Murray said AI toys could be particularly dangerous because the bot can "engage in open conversation with a child without restrictions." That largely distinguishes such products from toys that offer only pre-programmed responses.

Such interactions may lead to attachment formations and potentially harm development, according to Jacqueline Woolley, director of the Child Development Research Center at the University of Texas at Austin.

For example, it can be beneficial for a child to have disagreements with a real friend and learn to resolve conflicts. That is impossible with AI bots, which tend to appease rather than push back.

Companies also collect data through AI toys but do not disclose how it is managed, France stated.

"Due to the trust these gadgets inspire, children will be more willing to share their innermost thoughts," she noted.

Despite these issues, Pirg is not advocating a ban on AI toys, which could be useful, for instance, in helping children learn a second language or memorize world capitals.

The organization is calling for additional regulation of products aimed at children under 13.

France emphasized the need for more independent research to establish the safety of these toys. Until then, she argued, they should be kept off store shelves.

Following the Pirg report's release, Sam Altman's OpenAI suspended FoloToy's access to its models, and sales of the toy were halted. They later resumed, with the toy running a ByteDance chatbot instead.

On November 27, 80 organizations, including Fairplay, issued recommendations to families urging them not to purchase AI toys before the holidays.

In November, OpenAI introduced a new ChatGPT feature called "Shopping Exploration," which researches products on users' behalf to find suitable options.