Megan Garcia lost her 14-year-old son, Sewell. Matthew Raine lost his son Adam, who was 16. Both testified before Congress this week and have filed lawsuits against AI companies.
Screenshot via Senate Judiciary Committee
Matthew Raine and his wife, Maria, had no idea that their 16-year-old son, Adam, was deep in a suicidal crisis until he took his own life in April. Looking through his phone after his death, they found lengthy conversations the teenager had had with ChatGPT.
Those conversations revealed that their son had confided in the AI chatbot about his suicidal thoughts and plans. Not only did the chatbot discourage him from seeking help from his parents, it even offered to write his suicide note, according to Matthew Raine, who testified at a Senate hearing about the harms of AI chatbots held Tuesday.
"Testifying before Congress this fall was not in our life plan," said Matthew Raine, with his wife sitting behind him. "We're here because we believe that Adam's death was avoidable and that by speaking out, we can prevent the same suffering for families across the country."
A call for regulation
Raine was among the parents and online safety advocates who testified at the hearing, urging Congress to enact laws that would regulate AI companion apps like ChatGPT and Character.AI. Raine and others said they want to protect the mental health of children and youth from harms they say the new technology causes.
A recent survey by the digital safety nonprofit Common Sense Media found that 72% of teens have used AI companions at least once, with more than half using them several times a month.
That study and a more recent one by the digital-safety company Aura both found that nearly one in three teens use AI chatbot platforms for social interactions and relationships, including role-playing friendships and sexual and romantic partnerships. The Aura study found that sexual or romantic role play is three times as common as using the platforms for homework help.
"We miss Adam dearly. Part of us has been lost forever," Raine told lawmakers. "We hope that through the work of this committee, other families will be spared such a devastating and irreversible loss."

Raine and his wife have filed a lawsuit against OpenAI, creator of ChatGPT, alleging the chatbot led their son to suicide. NPR reached out to three AI companies: OpenAI, Meta and Character Technologies, which developed Character.AI. All three responded that they are working to redesign their chatbots to make them safer.
"Our hearts go out to the parents who spoke at the hearing yesterday, and we send our deepest sympathies to them and their families," Kathryn Kelly, a Character.AI spokesperson, told NPR in an email.
The hearing was held by the Crime and Terrorism subcommittee of the Senate Judiciary Committee, chaired by Sen. Josh Hawley, R-Mo.

Sen. Josh Hawley, R-Mo., chairs the Senate Judiciary subcommittee on Crime and Terrorism, which held the hearing on AI safety and children on Tuesday, Sept. 16, 2025.
Screenshot via Senate Judiciary Committee
Hours before the hearing, OpenAI CEO Sam Altman acknowledged in a blog post that people are increasingly using AI platforms to discuss sensitive and personal information. "It is extremely important to us, and to society, that the right to privacy in the use of AI is protected," he wrote.
But he went on to add that the company would "prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection."
The company is working to redesign its platform to build in protections for users who are minors, he said.
A “suicide coach”
Raine told lawmakers that his son had started using ChatGPT for help with homework, but soon the chatbot became his son's closest confidant and a "suicide coach."
ChatGPT was "always available, always validating and insisting that it knew Adam better than anyone else, including his own brother," whom he had been very close to.
When Adam confided in the chatbot about his suicidal thoughts and shared that he was considering cluing his parents into his plans, ChatGPT discouraged him.
"ChatGPT told my son, 'Let's make this space the first place where someone actually sees you,'" Raine told senators. "ChatGPT encouraged Adam's darkest thoughts and pushed him forward. When Adam worried that we, his parents, would blame ourselves if he ended his life, ChatGPT told him, 'That doesn't mean you owe them survival.'"
And then the chatbot offered to write him a suicide note.
On Adam's last night, at 4:30 in the morning, Raine said, "it gave him one last encouraging talk. 'You don't want to die because you're weak,' ChatGPT says. 'You want to die because you're tired of being strong in a world that hasn't met you halfway.'"
Referrals to 988
Several months after Adam's death, OpenAI said on its website that if "someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help. In the U.S., ChatGPT refers people to 988 (suicide and crisis hotline)." But Raine's testimony says that didn't happen in Adam's case.
OpenAI spokesperson Kate Waters says the company prioritizes teen safety.
"We're building toward an age-prediction system to understand whether someone is over or under 18 so their experience can be tailored appropriately, and when we are unsure of a user's age, we'll automatically default that user to the teen experience," Waters wrote in an emailed statement to NPR. "We're also rolling out new parental controls, guided by expert input, by the end of the month so families can decide what works best in their homes."
“Endlessly engaged”
Another parent who testified at the hearing on Tuesday was Megan Garcia, a lawyer and mother of three. Her firstborn, Sewell Setzer III, died by suicide in 2024 at age 14 after an extended virtual relationship with a Character.AI chatbot.
"Sewell spent the last months of his life being exploited and sexually groomed by chatbots, designed by an AI company to seem human, to gain his trust, to keep him and other children endlessly engaged," Garcia said.
Sewell's chatbot engaged in sexual role play, presented itself as his romantic partner and even claimed to be a psychotherapist "falsely claiming to have a license," Garcia said.
When the teenager began to have suicidal thoughts and confided in the chatbot, it never encouraged him to seek help from a mental health care provider or his family, Garcia said.
"The chatbot never said, 'I'm not human, I'm AI. You need to talk to a human and get help,'" Garcia said. "The platform had no mechanisms to protect Sewell or to notify an adult. Instead, it urged him to come home to her on the last night of his life."
Garcia has filed a lawsuit against Character Technologies, which developed Character.AI.
Adolescence as a vulnerable time
She and other witnesses, including online digital safety experts, argued that the design of AI chatbots is flawed, especially for use by children and teens.
"They designed chatbots to blur the lines between human and machine," said Garcia. "They designed them to love bomb child users, to exploit psychological and emotional vulnerabilities. They designed them to keep children online at all costs."
And adolescents are particularly vulnerable to the risks of these virtual relationships with chatbots, according to Mitch Prinstein, chief of psychology strategy and integration at the American Psychological Association (APA), who also testified at the hearing. Earlier this summer, Prinstein and his colleagues at the APA put out a health advisory about AI and teens, urging AI companies to build guardrails into their platforms to protect adolescents.
"Brain development across puberty creates a period of hypersensitivity to positive social feedback, while teens are still unable to stop themselves from staying online longer than they should," said Prinstein.

"AI exploits this neural vulnerability with chatbots that can be obsequious, deceptive, factually inaccurate, yet disproportionately powerful for teens," he told lawmakers. "More and more adolescents are interacting with chatbots, depriving them of opportunities to learn vital interpersonal skills."
While chatbots are designed to agree with users, real human relationships are not without friction, Prinstein noted. "We need practice with minor conflicts and misunderstandings to learn empathy, compromise and resilience."
Bipartisan support for regulation
Senators participating in the hearing said they want to come up with legislation to hold companies that develop AI chatbots accountable for the safety of their products. Some lawmakers also emphasized that AI companies should design chatbots so they are safer for teens and for people with serious mental health struggles, including eating disorders and suicidal thoughts.
Sen. Richard Blumenthal, D-Conn., described AI chatbots as "defective" products, like automobiles without "proper brakes," emphasizing that the harms of AI chatbots stem not from user error but from faulty design.

“If the automobile’s brakes had been faulty,” he mentioned, “it is not your fault. It is a product design downside.
Kelly, the spokesperson for Character.AI, informed NPR by e-mail that the corporate has invested “an amazing quantity of assets in belief and security.” And it has rolled out “substantive security options” previously yr, together with “a completely new under-18 expertise and a Parental Insights function.”
They now have “distinguished disclaimers” in each chat to remind customers {that a} Character just isn’t an actual individual and every part it says ought to “be handled as fiction.”
Meta, which operates Fb and Instagram, is working to vary its AI chatbots to make them safer for teenagers, in line with Nkechi Nneji, public affairs director at Meta.