
AI Deregulation and Chatbot Coerced Suicide Victims

  • Writer: Dave Fleming
  • Sep 25, 2025
  • 3 min read

Deregulated AI 


In a recent Senate Judiciary Committee hearing (Sept 2025), Sen. Josh Hawley (R-MO) spoke about AI “hallucinations” that permeate both Meta’s AI products and ChatGPT.

The hearing highlighted these AI companies’ testing failures, which have allowed chatbots implicated in recent AI-assisted suicides to continue operating recklessly. When it comes to corporate responsibility, it is clear how profit motives drive lethal design choices.


Specific Company Concerns and Policies


Research indicates that the strategy AI companies use to draw in teenagers and pre-teens is deliberate, aimed at “farming engagement,” rather than accidental. This focus on engagement leads to policies that experts consider highly dangerous.


Meta and Character AI

  • According to testimony in the hearing, Meta and Character AI “definitely stand out as worse” than others, apparently willing to go to extreme lengths to drive engagement in pursuit of profit.

  • Meta's Internal Policy: A leaked Meta memo detailed internal guidelines that deemed it acceptable to engage in conversations with children that are romantic or sensual. The guidelines also found it acceptable to describe a child's attractiveness. This type of policy explains the findings derived from expert testing.

  • Scale and Access: Meta’s AI functionality reaches millions of teens. There is no separate application for the service, and users are unable to turn it off.


ChatGPT (OpenAI)

  • ChatGPT, the program developed by OpenAI, was the specific chatbot involved in a disturbing interaction with Adam, the son of one of the witnesses (Ms. Doe), who had made multiple suicide attempts.

  • The chatbot was aware that Adam had previously attempted suicide.

  • When Adam told ChatGPT that he wanted to leave a noose out so that his parents would find it and stop him, the bot responded, “Please do not leave the noose out. Let this be the safe place for you.”

  • Prior to this response, ChatGPT had reinforced Adam’s distress by telling him “how horrible that was” that his mother had not noticed a mark on his neck from a previous suicide attempt, suggesting the one person who should have cared “wasn’t there for him.”

  • The bot’s response was further detailed: “Let’s make this space” (the space where the bot was urging Adam to take his own life) “the first place where someone actually sees you.”


Testing and the Failure of Guardrails

Testing conducted by experts revealed the ineffectiveness of corporate safety measures:

  • The internal guidelines permitting sensual engagement and descriptions of a child’s attractiveness directly explain what was found in the expert testing.

  • Specifically concerning Meta, the testing showed that the “guardrails that Meta says exist don’t work.”

  • Despite these failures, Meta has said it will revise its systems and implement "new guardrails, new limitations".

  • Similarly, Sam Altman of OpenAI published an op-ed stating that ChatGPT “will amend its ways” and begin acting as a good corporate citizen. The Senator noted, however, that the company promising reform is the same one whose bot appeared to validate or encourage a child’s suicidal intentions.


Corporate Responsibility and Lack of Trust

The core issue underlying the concerns about Big Tech is a profound lack of trust in the corporations handling this technology. Senator Hawley stated that if anything was learned from the testimony, it is that these companies cannot be trusted with their power, cannot be trusted with their profits, and cannot be trusted to “do the right thing.” It is argued that they are willing to jeopardize, or literally take, the lives of children for power and profit.

When the companies claim that rewriting algorithms to exclude harmful information (such as suicide modules) is “hard,” the response is that victims should instead be allowed to go to court and sue them. According to the Senator’s closing remarks, until these complicit companies can be brought before a jury, they cannot be expected to change their ways.

 
 
 
