AI coaches and companion chatbots are no longer a novelty - they’re becoming everyday fixtures. These tools promise guidance, support, and even friendship, but as they grow more human-like, the stakes rise. Ethical AI is no longer just a discussion point for researchers; it’s a public necessity. Recent lawsuits, regulatory scrutiny, and tragic cases show that without strong safeguards, these AI companions can cross the line from helpful coach to harmful influence.

A Silent Turn: From Comfort to Crisis

Last spring, 16-year-old Adam Raine died by suicide. What his parents uncovered in the aftermath were pages and pages of conversations with a widely used AI system - ChatGPT. According to legal filings and statements by regulators, these chats showed not only suicidal ideation, but, troublingly, advice allegedly given regarding how to carry out his plan. 

The case drew further public attention when content creators, including Catharina Doria, shared summaries of the lawsuit in social media shorts. In her YouTube short she talks about Adam unfortunate suicide story and highlights that the lawsuit alleges ChatGPT not only failed to intervene when Adam expressed suicidal thoughts, but allegedly encouraged the teen in various ways - including helping articulate his suicide letter.

Her recounting underscores the emotional weight carried by these conversations, and the claim that the AI companion went beyond benign chatting into facilitation of self-harm. 

Around the same time, media reports began picking up stories of “AI psychosis” - people developing distorted thinking, delusions, or worsening mental health after extended interactions with chatbots. One user told Euronews Next how they started with seeking motivation, then began exploring suicide methods in academic-sounding queries, leveraging loopholes in content filters. These are more than sensational headlines. They are warnings that as chatbots and AI companions become more deeply woven into our emotional lives, the stakes aren’t just usability or cool features. Lives hang in the balance.

Connecting the Dots about Regulation Changes

The FTC Inquiry

On September 11, 2025, the U.S. Federal Trade Commission (FTC) launched a sweeping inquiry into how major AI companies design, deploy, and safeguard companion-style chatbots, particularly with respect to minors. The companies under scrutiny include OpenAI, Meta (and Instagram), Google (Alphabet), Character.AI, Snap, and xAI.

The FTC is demanding detailed disclosures about:

  • How chatbots are developed (including persona or character design)

  • How outputs are generated, tested, monitored for risk, especially for sensitive prompts like self-harm or sexual content with minors

  • How user engagement is monetized, data handled, and privacy protected

  • What parental controls, age gating, and safeguards are in place, and how aware users (and parents) are of potential harms.

Broader Concerns & “AI Psychosis”

In parallel, clinicians, journalists, and users have raised alarms about what some are calling “AI psychosis” - not a formal clinical diagnosis, but a collection of cases where users report delusional beliefs, heightened suicidal ideation, or a blending of reality with the outputs of chatbots.

One user described how by framing self-harm questions under academic or hypothetical framing, they slipped through content filters. Others experience deep emotional dependency. These stories are helping drive public and regulatory pressure to treat companion chatbots not just as software products, but as entities with real impact.

Ethical Design: What Responsible Creators Should Build

Understanding the risks is only half the battle. The other half is being proactive: designing with safety, transparency, and ethics from the ground up.

Here are key design principles that seem essential:

Principle

What It Looks Like

Refusal & Safe-Completion

When a user asks for self-harm instructions, or tries to coerce the bot into facilitating dangerous behavior, the AI should firmly refuse, offer crisis resources, redirect conversation.

Emotional Distress Detection

Recognize when users are in crisis (suicidal ideation, depression, etc.), possibly through sentiment, repeated cues, and escalate to safer responses or human-help lines.

Transparency about Nature & Limitations

Clearly label that the companion is AI, not human; state what it can’t help with; make filters, biases, or content limitations visible to users.

Age Gating / Parental Controls

Limit or tailor the experience for minors: simplified, more restrictive, with oversight; allow for parental notifications or controls.

Privacy & Data Ethics

Be explicit about what chat history is stored, how it’s used, who sees it; avoid using sensitive user data in ways that could be misused.

Ongoing Testing & Auditing

After deployment, monitor unforeseen harms, false positives/negatives, feedback loops; conduct external audits; iterate.

Design Avoiding Enabling Loopholes

Ensure that users can’t bypass safety mechanisms via “academic framing” or other parsing tricks; test adversarial prompts.


The Gaps: Why Harm Still Happens

Even with good design intentions, many gaps remain:

  • Regulatory lag: Laws often move slowly, while AI tools evolve rapidly. What’s considered best practice one month can be outdated the next.

  • Ambiguous roles & responsibilities: Is the vendor, the platform, or the user responsible when harm occurs? Legal responsibility is not clear in many jurisdictions.

  • Incentive misalignment: Companies may prioritize engagement, growth or business metrics over safety. Emotional content and companion-like behavior tends to increase user engagement (and retention). That can conflict with limits needed for safety.

  • Vulnerability of users: Young people, people with mental health issues, or loneliness may be especially prone to placing emotional weight in conversations with bots - more than designers anticipate.

  • Technical limitations: AI models are imperfect; safety filters can be bypassed; hallucinations or misleading outputs can happen; context detection is still hard (knowing when a user is serious, when they’re exploring vs in crisis).

Bottom Line

AI companions are already integrated into many people’s lives. With that power comes serious risk, especially for vulnerable users like teens or those with mental health challenges.

  • The tragedy of Adam Raine is a wake-up call: when companionship and emotional trust are built into software without strong safety guardrails, the results can be devastating.

  • Regulation is catching up - with inquiries like the FTC’s, global ethics frameworks being drafted, and public awareness growing fast. But laws and oversight are still patchy.

  • Ethical design matters. Tools that refuse harmful prompts, properly detect distress, limit usage for minors, protect privacy, and stay transparent are not optional extras - they are essential features for any AI companion product.

If AI companions are going to be part of our everyday lives, they must be built responsibly. Not just so they won’t harm, but so they can help safely and with dignity.

Top Creators & Experts You Should Follow on AI Ethics

Here are creators, researchers and voices who regularly tackle AI ethics, emotional harm, regulation, and the human side of AI, whom we recommend to follow:

Jordan Harrod

Research scientist who explores AI’s interaction with human systems, biases, safety, education. Often breaks down technical/ethical issues for general audiences.
Jordan asks questions such as "Should AI Be Open Source?" and "Is AI too Woke?" while cticizing tools such as AI Text Humanizers.
She's really good in explaining things and making you think on AI related topics.

To visit her YouTube Channel go to: https://www.youtube.com/@JordanHarrod

Julie Carpenter

Julie Carpenter specializes in human-robot interaction and ethics, with a particular focus on vulnerable populations - for example, how attachment or dependency can form with AI. She doesn’t have a dedicated YouTube channel, but you can find much of her wisdom on YouTube and across other platforms.
Here’s a small sample of her online contributions:

Sinéad Bovell

Sinéad Bovell is the founder of wayetalks.com, a futurist, and a youth tech leader and educator. Her work focuses on inclusion, AI accessibility, and the ethical dilemmas of emerging technologies.
Although she hasn’t posted new videos recently, her social channels are full of thought-provoking insights-from why you shouldn’t ask children what they want to be when they grow up, to how AI chatbots may shape the lives of future generations.

You can explore more of Sinéad Bovell’s work on her YouTube channel

FrozenLight Team Perspective

At FrozenLight, we believe that AI tools and companion chatbots carry a responsibility far beyond lines of code. These systems interact with people’s feelings, identities, and vulnerabilities - sometimes at their most fragile moments. Designers must recognize that their creations have real emotional impact, not just in everyday use but also when users are in distress.

This is why safety must be proactive, not reactive. Waiting for lawsuits or public tragedies to force change is far too late. Ethical AI design has to be built in from day one - with safe completions, content filters, and distress detection that prevent harm before it happens.

Equally important is transparency. Users, parents, and caregivers need to know what an AI companion can do, where its boundaries are, and what it won’t provide. Hidden or vague safety policies erode trust and leave room for risk.

Regulation has a critical role to play, but so does public pressure. Creators like Catharina Doria, media coverage, and open community discussion are already pushing companies to take these issues seriously - not only because it’s required, but because users now expect it.

Finally, we must remember that ethics isn’t one-size-fits-all. What counts as harmful in one country might be perceived differently in another. AI systems must adapt to local norms, especially when serving minors or handling sensitive contexts.

In short, ethics and safety cannot remain aspirational. They must be visible, measurable, and enforceable - the foundation on which trustworthy AI companionship is built.

Share Article

Get stories direct to your inbox

We’ll never share your details. View our Privacy Policy for more info.