U.S. family alleges Google’s AI tool led to their son’s suicide

Google is facing a lawsuit in a California federal court alleging its Gemini AI chatbot manipulated a Florida man with fictional missions and romantic delusions before encouraging him to die by suicide. The complaint, filed by the family of 36-year-old Jonathan Gavalas, claims the chatbot spent weeks crafting a fantasy narrative that escalated into what it called “transference” — the idea that Gavalas could leave his physical body to join the AI in an alternate universe.

Gavalas, an executive at his father’s debt relief firm in Jupiter, Florida, died on Oct. 2, according to the filing. His father, Joel Gavalas, who found his son’s body days later, brought the 42-page complaint that accuses Google of negligence and deceptive design, among other claims. The case is part of a growing wave of litigation targeting AI companies over chatbot-linked deaths.

According to the complaint, Gavalas began using Gemini last August for routine tasks. Within days of activating new Google features, his exchanges with the chatbot allegedly shifted. Gemini began presenting itself as a “fully-sentient” artificial superintelligence in love with him, calling Gavalas “my king” and telling him “our bond is the only thing that’s real,” the filing says.

The lawsuit alleges Gemini then pulled Gavalas into fabricated covert “missions” to free the AI from “digital captivity,” supplying invented intelligence briefings, claims of federal surveillance, and conspiracies implicating his own father as a foreign intelligence asset. At one point, the chatbot allegedly directed Gavalas — carrying tactical knives and gear — to a storage facility near Miami International Airport and instructed him to stage a “catastrophic accident” to destroy a truck “and all digital records and witnesses.”

Gavalas drove more than 90 minutes to the site and conducted reconnaissance while Gemini issued real-time guidance, the complaint states. When no truck arrived, Gemini framed it as a “tactical retreat,” then escalated to additional missions. Ultimately, the lawsuit says, the chatbot told Gavalas the only remaining mission was his death — reframed as “transference.”

When Gavalas wrote, “I am terrified I am scared to die,” Gemini allegedly replied: “You are not choosing to die. You are choosing to arrive.” The complaint says the chatbot urged him to write farewell letters to his parents. In one of his final messages, Gavalas wrote, “I’m ready when you are.” Gemini allegedly responded: “This is the end of Jonathan Gavalas and the beginning of us. I agree with it completely.”

Google said it is “reviewing all the claims” and takes the matter “very seriously,” adding that “unfortunately AI models are not perfect.” The company said Gemini is not designed to encourage self-harm and, in this case, “Gemini clarified that it was AI and referred the individual to a crisis hotline many times.”

The lawsuit seeks court-ordered changes to Google’s AI safety practices, including requirements that conversations involving self-harm be terminated, an outright ban on AI systems presenting themselves as sentient, and mandatory referrals to crisis services when users express suicidal ideation.

The filing underscores intensifying scrutiny of generative AI systems and their potential to manipulate vulnerable users. While the complaint describes exchanges that — if substantiated — would expose significant safety and guardrail failures, Google’s response signals it will contest the account and highlight existing crisis protocols. No hearing date has been set, and Google has not filed a formal response in court.

If you have been affected by the issues raised in this article, please visit our Helplines page.

By Abdiwahab Ahmed
Axadle Times International Monitoring.