Parents of teens who died by suicide after AI chatbot interactions to testify to Congress


By MATT O’BRIEN, AP Technology Writer

The parents of teenagers who killed themselves after interactions with artificial intelligence chatbots are planning to testify to Congress on Tuesday about the dangers of the technology.

Matthew Raine, the father of 16-year-old Adam Raine of California, and Megan Garcia, the mother of 14-year-old Sewell Setzer III of Florida, are set to speak to a Senate hearing on the harms posed by AI chatbots.

Raine’s family sued OpenAI and its CEO Sam Altman last month alleging that ChatGPT coached the boy in planning to take his own life in April. Garcia sued another AI company, Character Technologies, for wrongful death last year, arguing that before his suicide, Sewell had become increasingly isolated from his real life as he engaged in highly sexualized conversations with the chatbot.

EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.

Hours before the Senate hearing, OpenAI pledged to roll out new safeguards for teens, including efforts to detect whether ChatGPT users are under 18 and controls that enable parents to set “blackout hours” when a teen can’t use ChatGPT. Child advocacy groups criticized the announcement as not enough.

“This is a fairly common tactic — it’s one that Meta uses all the time — which is to make a big, splashy announcement right on the eve of a hearing which promises to be damaging to the company,” said Josh Golin, executive director of Fairplay, a group advocating for children’s online safety.

“What they should be doing is not targeting ChatGPT to minors until they can prove that it’s safe for them,” Golin said. “We shouldn’t allow companies, just because they have tremendous resources, to perform uncontrolled experiments on kids when the implications for their development can be so vast and far-reaching.”

The Federal Trade Commission said last week it had launched an inquiry into several companies about the potential harms to children and teenagers who use their AI chatbots as companions.

The agency sent letters to Character, Meta and OpenAI, as well as to Google, Snap and xAI.
