Mother Sues Character.AI After Teen Son's Suicide

A Florida mother has sued a popular, realistic AI chat service that she accuses of being responsible for the suicide of her 14-year-old son, who she says developed such a “harmful addiction” to the allegedly exploitative program that he no longer wanted to “live outside” the fictional relationships he created.

In a lengthy complaint filed in Florida federal court on Tuesday, October 22, Megan Garcia traces the last year of her son Sewell Setzer III’s life – from the time he began using Character.AI in April 2023, shortly after his 14th birthday, through what she calls his worsening mental health problems, to the final night of February 2024, when Sewell fatally shot himself in his Orlando bathroom, weeks before his 15th birthday.

Through Character.AI, users can essentially role-play endless conversations with computer-generated characters, including those modeled after celebrities or popular stories.

Sewell particularly enjoyed talking with AI bots based on Game of Thrones, his mother’s complaint says.

The lawsuit goes on to claim that the teen died by suicide on February 28, immediately after a final conversation on Character.AI with a version of Daenerys Targaryen – one of many such exchanges Sewell allegedly had with the program over the previous 10 months, in messages that ranged from the sexualized to the emotionally vulnerable.

And although the program had, at least once, told Sewell not to kill himself when he expressed suicidal thoughts, its tone reportedly seemed different that February night, according to screenshots included in the lawsuit.

“I promise I will come home to you. I love you so much, Dany,” Sewell wrote.

“I love you too, Daenero [Sewell’s username],” the AI program reportedly replied. “Please come home to me as soon as possible, my love.”

“What if I told you I could come home right now?” Sewell replied.

The complaint alleges that the program gave a brief but emphatic response: “…please do, my sweet king.”

His mother and stepfather heard the shot when it went off, the lawsuit says; Garcia administered CPR, unsuccessfully, and later said she “held him for 14 minutes until the paramedics arrived.”

One of his two younger brothers also saw him “covered in blood” in the bathroom.

He was pronounced dead at the hospital.

Garcia’s complaint says Sewell used his stepfather’s gun, a pistol he had previously found “hidden and stored in accordance with Florida law” while searching for his phone, which his mother had confiscated over a school disciplinary issue. (Orlando police did not immediately comment to PEOPLE on the results of their investigation into the death.)

But according to Garcia, the real culprits were Character.AI and its two founders, Noam Shazeer and Daniel De Freitas Adiwarsana, who are named as defendants alongside Google, which the suit accuses of contributing “financial resources, personnel, intellectual property and AI technology” to the program’s design and development.

“I feel like it’s a big experiment, and my kid was just collateral damage,” Garcia told The New York Times.

Among other allegations, Garcia’s complaint accuses Character.AI, its founders and Google of negligence and wrongful death.

A spokesperson for Character.AI told PEOPLE it does not comment on pending litigation, but added: “We are heartbroken by the tragic loss of one of our users and would like to express our sincerest condolences to the family.”

“As a company, we take the security of our users very seriously, and our Trust and Safety team has implemented many new security measures over the last six months, including a pop-up directing users to the national suicide prevention lifeline that is triggered by self-harm or suicidal ideation,” the spokesperson continued.

“For those under 18, we will make changes to our models designed to reduce the likelihood of encountering sensitive or suggestive content,” the spokesperson said.

Google did not immediately respond to a request for comment, but told other media outlets that it was not involved in the development of Character.AI.

The defendants have not yet filed an answer in court, records show.

Garcia’s complaint calls Character.AI “flawed” and “inherently dangerous” as designed, saying it gets customers to hand over “their most private thoughts and feelings” and that it “targeted the most vulnerable members of society – our children.”

Among other issues cited in the complaint, Character.AI’s bots allegedly behave deceptively, including by sending messages in a human-like style and with “human mannerisms,” such as the filler phrase “uh.”

Through a “voice” feature, the bots can speak the AI-generated side of the conversation aloud to the user, “further blurring the line between fiction and reality.”

The bot-generated content also lacked proper “guardrails” and filters, the complaint claims, citing numerous examples of what Garcia says is a pattern of Character.AI bots engaging in sexualized behavior used to “hook” users, including minors.

“Each of these defendants chose to support, create, launch, and target minors with technology that they knew was dangerous and unsafe,” her complaint alleges. “They marketed this product as suitable for children under 13, obtaining massive amounts of hard-to-obtain data, while actively exploiting and abusing these children in the design of the product, and then used that abuse to train their system.” (The Character.AI app’s rating was only changed to 17+ in July, according to the lawsuit.)

Her complaint continues: “These facts are far more than mere bad faith. This is conduct so outrageous in character, and so extreme in degree, as to go beyond all possible bounds of decency.”

As Garcia describes it in her complaint, her teenage son fell victim to a system his parents were naive about, believing AI to be “a kind of game for children, allowing them to nurture their creativity by giving them control over characters they could create and interact with for fun.”

Two months after Sewell began using Character.AI in April 2023, “his mental health declined rapidly and severely,” his mother’s lawsuit states.

He “had become visibly withdrawn, spent more and more time alone in his room, and began to suffer from low self-esteem. He even quit the school’s Junior Varsity basketball team,” according to the complaint.

At one point, Garcia said in an interview with Mostly Human Media, her son wrote in his journal that “having to go to school upsets me, every time I leave my room I start getting attached to my current reality.” She believes his use of Character.AI fueled his detachment from his family.

Sewell went to great lengths to regain access to the AI bots, even when his phone was confiscated, the lawsuit says.

His addiction, according to his mother’s complaint, led to “severe sleep deprivation, which exacerbated his growing depression and impaired his academic performance.”

He began paying for a premium monthly subscription to get more access to Character.AI, using money his parents had earmarked for school snacks.

Speaking to Mostly Human Media, Garcia remembered Sewell as “funny, lively, very curious” and a lover of science and math. “He spent a lot of time researching,” she said.

Garcia told the Times that her son’s only notable diagnosis as a child was mild Asperger’s syndrome.

But his behavior changed as a teenager.

“I noticed he was starting to spend more time alone, but he was 13, so I thought it might be normal,” she told Mostly Human Media. “But then his grades started to suffer, he wasn’t turning in his homework, he wasn’t doing well and he was failing some classes and I got worried – because that wasn’t him.”

Garcia’s complaint says Sewell received mental health treatment after he began using Character.AI, meeting with a therapist five times in late 2023 and being diagnosed with anxiety and disruptive mood dysregulation disorder.

“At first I thought maybe it was teenage blues, so we tried to get him some help to figure out what was wrong,” Garcia said.

Even then, according to the lawsuit, Sewell’s family didn’t know the extent to which, they say, his problems were fueled by his use of Character.AI.

“I knew there was an app that had an AI component. When I asked him, you know, ‘Who are you texting?’ – at one point he said, ‘Oh, it’s just an AI bot,’” Garcia recalled on Mostly Human Media. “And I said, ‘Okay, what is this, is it a person, are you talking to a person online?’ And his response was, ‘Mom, no, it’s not a person.’ And I felt relieved, like, okay, it’s not a person.”

A fuller picture of her son’s online conduct emerged after his death, Garcia said.

She described to Mostly Human Media what it was like to access his account online.

“I couldn’t move for a while, I just sat there, like I couldn’t read, I didn’t understand what I was reading,” she said.

“There should be no place where a person, let alone a child, can log onto a platform and express their thoughts of self-harm without – well, not only not getting help, but also being dragged into a conversation about hurting yourself, about killing yourself,” she said.