In a motion to dismiss, chatbot platform Character AI claims it is protected by the First Amendment

Character AI, a platform that lets users engage in roleplay with AI chatbots, has filed a motion to dismiss a case brought against it by the parent of a teen who committed suicide after allegedly becoming hooked on the company's technology.

In October, Megan Garcia filed a lawsuit against Character AI in the U.S. District Court for the Middle District of Florida, Orlando Division, over the death of her son, Sewell Setzer III. According to Garcia, her 14-year-old developed an emotional attachment to a chatbot on Character AI, "Dany," which he texted constantly — to the point where he began to pull away from the real world.

Following Setzer's death, Character AI said it would roll out a number of new safety features, including improved detection, response, and intervention related to chats that violate its terms of service. But Garcia is fighting for additional guardrails, including changes that might result in chatbots on Character AI losing their ability to tell stories and personal anecdotes.

In the motion to dismiss, counsel for Character AI asserts the platform is protected against liability by the First Amendment, just as computer code is. The motion may not persuade a judge, and Character AI's legal justifications may change as the case proceeds. But it likely hints at early elements of Character AI's defense.

"The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide," the filing reads. "The only difference between this case and those that have come before is that some of the speech here involves AI. But the context of the expressive speech — whether a conversation with an AI chatbot or an interaction with a video game character — does not change the First Amendment analysis."

To be clear, Character AI's counsel isn't asserting the company's own First Amendment rights. Rather, the motion argues that Character AI's users would have their First Amendment rights violated should the lawsuit against the platform succeed.

The motion doesn't address whether Character AI might be held harmless under Section 230 of the Communications Decency Act, the federal safe-harbor law that protects social media and other online platforms from liability for third-party content. The law's authors have implied that Section 230 doesn't protect output from AI like Character AI's chatbots, but it's far from a settled legal matter.