ChatGPT Is Listening to You (and Might Report You)

The OpenAI case redefines the boundary between public safety and digital privacy: the challenge of protecting society without betraying users' trust. Between technological promises and regulatory gray areas, trust in AI remains a gamble. Digital whispers in a world that is always listening.

The Turning Point: OpenAI Admits to Reporting Users to the Authorities

In September 2025, OpenAI made a revelation that shook the global tech community: ChatGPT actively monitors user conversations and reports potentially criminal content to law enforcement agencies.

The news, which emerged almost by chance in a company blog post, revealed that when automated systems detect users who "are planning to harm others," conversations are routed to specialized pipelines where a small team trained in usage policies reviews them. If human reviewers determine that there is an "imminent threat of serious physical harm to others," the case may be referred to law enforcement.

ChatGPT cordially invites you to share your innermost thoughts. Don't worry, everything is confidential... more or less.

The Contrast with "Protected" Professions

The Privilege of Professional Secrecy

When we talk to a psychologist, lawyer, doctor, or priest, our words are protected by a well-established legal mechanism: professional secrecy. This principle, rooted in centuries of legal tradition, establishes that certain conversations are inviolable, even in the face of criminal investigations.

Characteristics of traditional professional secrecy:

  • Extensive protection: Communications remain confidential even in the presence of confessed crimes.
  • Limited, specific exceptions: Only in extreme cases defined by law may (or, in some instances, must) professionals break confidentiality.
  • Qualified human control: The decision to breach confidentiality always rests with a trained professional.
  • Ethical responsibility: Professionals are bound by codes of ethics that balance duties to the client and to society.

The Real Limits of Professional Secrecy

Contrary to common perception, professional secrecy is not absolute. There are well-defined exceptions that vary by professional category:

For lawyers (Art. 28 of the Italian Code of Conduct for Lawyers): Disclosure is permitted when necessary for:

  • Conducting the client's defense
  • Preventing the commission of a particularly serious crime
  • Defending oneself in a dispute with the client
  • Responding to disciplinary proceedings

Critical example: If a client tells their lawyer that they intend to commit murder, the protection of life takes precedence over the right to a defense, and the lawyer is released from the duty of confidentiality.

For psychologists (Art. 13 of the Italian Code of Ethics for Psychologists): Confidentiality may be breached when:

  • There is an obligation to report or file a complaint for crimes that are prosecutable ex officio.
  • There are serious dangers to the life or physical and mental health of the individual and/or third parties.
  • There is valid and demonstrable consent from the patient.

Important distinction: Private psychologists have greater discretion than public psychologists, who, as public officials, have more stringent reporting obligations.

AI as a "Non-Professional"

ChatGPT operates in a completely different gray area:

Lack of legal privilege: Conversations with AI do not enjoy any legal protection. As Sam Altman, CEO of OpenAI, admitted: "If you talk to a therapist or a lawyer or a doctor about those issues, there is legal privilege for that. There's doctor-patient confidentiality, there's attorney-client privilege, whatever. And we haven't figured that out yet for when you talk to ChatGPT."

Automated process: Unlike a human professional who evaluates each case individually, ChatGPT uses algorithms to identify "problematic" content, removing qualified human judgment from the initial screening stage.
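
OpenAI has not published the internals of this pipeline. Purely as a sketch, the publicly documented Moderation API hints at what a first-pass automated screen could look like; the routing labels and the 0.9 threshold below are assumptions, not OpenAI's actual logic.

```python
# A minimal sketch of first-pass screening, assuming the public OpenAI
# Moderation API. The routing labels and the 0.9 threshold are
# hypothetical; OpenAI's internal pipeline is not public.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def screen_message(text: str) -> str:
    """Map one user message to a hypothetical routing decision."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]

    # The endpoint returns per-category booleans plus confidence scores.
    if result.categories.violence and result.category_scores.violence > 0.9:
        return "route_to_human_review"  # a trained person decides on escalation
    if result.flagged:
        return "log_for_policy_review"
    return "allow"
```

Even in this sketch, the algorithm only routes: the judgment call, and any referral to law enforcement, stays with human reviewers, which is exactly where the competence questions discussed below arise.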

The Practical Implications: A New Surveillance Paradigm

The Paradox of Technological Trust

The situation creates a worrying paradox. Millions of people use ChatGPT as a digital confidant, sharing intimate thoughts, doubts, fears, and even criminal fantasies that they would never share with a human being. As Sam Altman reports: "People talk about the most personal things in their lives to ChatGPT. People use it—especially young people—as a therapist, life coach."

The risk of self-censorship: The awareness that conversations may be monitored could paradoxically:

  • Push criminals toward more hidden channels
  • Deter people with violent thoughts from seeking help
  • Create a "chilling effect" in digital communications

Expertise vs. Algorithms: Who Decides What Is Criminal?

A crucial aspect highlighted by critics concerns the competence of those who make the final decisions.

Human professionals have:

  • Years of training to distinguish between fantasies and real intentions
  • Codes of ethics that define when to break confidentiality
  • Personal legal responsibility for their decisions
  • Ability to assess context and credibility

The ChatGPT system operates with:

  • Automated algorithms for initial detection
  • OpenAI staff who do not necessarily have clinical or criminological training
  • Non-public and potentially arbitrary evaluation criteria
  • Absence of external control mechanisms

Problem example (a toy sketch follows this list): How does an algorithm distinguish between:

  • A person who writes thrillers and seeks inspiration for violent scenes
  • Someone who fantasizes without any intention of acting on it
  • An individual who actually plans a crime
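
A toy example makes the difficulty concrete. The pattern-based filter below is purely illustrative (it is not OpenAI's system): it flags all three messages identically, because surface text alone says nothing about intent.

```python
# Toy illustration only (not OpenAI's system): a naive pattern-based
# filter flags all three messages identically, because surface text
# alone carries no information about intent or context.
import re

THREAT_PATTERNS = [r"\bkill\b", r"\bpoison\b", r"\bweapon\b"]

def naive_flag(text: str) -> bool:
    """Return True if any threat keyword appears in the text."""
    return any(re.search(p, text, re.IGNORECASE) for p in THREAT_PATTERNS)

messages = [
    # A novelist researching a scene:
    "For my thriller, how would the villain poison a dinner guest?",
    # Venting with no intent to act:
    "Sometimes I imagine I could kill my boss. I would never do it.",
    # A possible real plan:
    "Which poison is hardest to detect? I need it by tomorrow.",
]

for message in messages:
    print(naive_flag(message), "->", message)
# All three print True. Separating fiction, fantasy, and planning is
# precisely the judgment call that gets pushed to human reviewers.
```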

The OpenAI Contradiction: Privacy vs. Security

The Double Standard

OpenAI's admission creates a stark contradiction with its previous positions. The company has strongly resisted requests for user data in lawsuits, citing privacy protection. In the lawsuit brought by the New York Times, OpenAI argued strenuously against the disclosure of chat logs in order to protect user privacy.

The irony of the situation: OpenAI defends user privacy in court while simultaneously admitting to monitoring and sharing data with external authorities.

The Impact of the New York Times Case

The situation has been further complicated by a court order requiring OpenAI to retain all ChatGPT logs indefinitely, including deleted and temporary chats as well as API data. This means that conversations users believed to be temporary or erased are now permanently archived.

Possible Solutions and Alternatives

Towards an "AI Privilege"?

As suggested by Sam Altman, it may be necessary to develop a concept of "AI privilege": legal protection similar to that offered to traditional professionals. However, this raises complex questions:

Possible regulatory options:

  1. Licensing model: Only certified AIs can offer "conversational privilege."
  2. Mandatory training: Those who handle sensitive content must hold specific qualifications.
  3. Professional supervision: Involvement of qualified psychologists and lawyers in reporting decisions.
  4. Algorithmic transparency: Publication of the criteria used to identify "dangerous" content.

Intermediate Technical Solutions

"Compartmentalized" AI:

  • Separate systems for therapeutic vs. general use
  • End-to-end encryption for sensitive conversations (see the sketch after this list)
  • Explicit consent for all types of monitoring
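
On the encryption point, a minimal sketch with the PyNaCl library (the library choice is an assumption; the article names no specific tool) shows the basic mechanics:

```python
# A minimal end-to-end encryption sketch using PyNaCl (assumed library).
# Only the two keyholders can read the message; the transport cannot.
from nacl.public import PrivateKey, Box

# Each party generates a keypair; only public keys are exchanged.
user_key = PrivateKey.generate()
service_key = PrivateKey.generate()

# The sender encrypts with their private key and the peer's public key.
sending_box = Box(user_key, service_key.public_key)
ciphertext = sending_box.encrypt(b"a sensitive confession")

# Only the holder of the matching private key can decrypt.
receiving_box = Box(service_key, user_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"a sensitive confession"
```

Note the structural limit: the model provider must still decrypt messages to generate replies, so encryption of this kind shields conversations from third parties in transit and at rest, but cannot by itself prevent the provider's own monitoring.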

"Tripartite" approach:

  • Automatic detection only for immediate and verifiable threats
  • Mandatory qualified human review
  • Appeal process for contested decisions
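
As a thought experiment only, the tripartite flow might look like the sketch below. Every class name, stage, and threshold here is invented for illustration; what matters is the shape of the process: a high detection bar, mandatory human review, and an auditable appeal path.

```python
# A hypothetical sketch of the "tripartite" approach. All names, stages,
# and the threshold are invented; no vendor implements this exact flow.
from dataclasses import dataclass, field
from enum import Enum, auto

class Stage(Enum):
    ALLOWED = auto()
    HUMAN_REVIEW = auto()   # mandatory qualified human review
    REFERRED = auto()       # escalated to the authorities
    UNDER_APPEAL = auto()   # the user contested the decision

REVIEW_THRESHOLD = 0.9      # detect only immediate, verifiable threats

@dataclass
class Case:
    text: str
    threat_score: float              # produced by an automated detector
    stage: Stage = Stage.ALLOWED
    audit_log: list = field(default_factory=list)

def process(case: Case, reviewer_confirms_threat: bool) -> Case:
    """Run one case through detection, human review, and referral."""
    if case.threat_score < REVIEW_THRESHOLD:
        case.audit_log.append("below threshold: no action")
        return case
    case.stage = Stage.HUMAN_REVIEW
    case.audit_log.append("routed to qualified human review")
    if reviewer_confirms_threat:
        case.stage = Stage.REFERRED
        case.audit_log.append("imminent threat confirmed: referred")
    else:
        case.stage = Stage.ALLOWED
        case.audit_log.append("reviewer dismissed the flag: no referral")
    return case

def appeal(case: Case) -> Case:
    """Contested decisions re-enter review with a full audit trail."""
    case.stage = Stage.UNDER_APPEAL
    case.audit_log.append("user appeal filed: decision re-examined")
    return case
```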

The Precedent of Digital Professionals

Lessons from other sectors:

  • Telemedicine: has developed protocols for digital privacy
  • Online legal advice: uses encryption and identity verification
  • Digital therapy: specialized apps with purpose-built protections

What AI Means for Businesses

Lessons for the Sector

The OpenAI case sets important precedents for the entire artificial intelligence industry:

  1. Mandatory transparency: AI companies will have to be more explicit about their monitoring practices.
  2. Need for ethical frameworks: Clear regulations are needed on when and how AI can interfere with private communications.
  3. Specialized training: Those who make decisions about sensitive content must have appropriate skills.
  4. Legal liability: Defining who is responsible when an AI system makes an incorrect assessment.

Operational Recommendations

For companies developing conversational AI:

  • Implement multidisciplinary teams (lawyers, psychologists, criminologists)
  • Develop public and verifiable criteria for reporting
  • Create appeal processes for users
  • Invest in specialized training for review staff

For companies that use AI:

  • Assess privacy risks prior to implementation
  • Clearly inform users about the limits of confidentiality
  • Consider specialized alternatives for sensitive uses

The Future of Digital Confidentiality

The central dilemma: How to balance the prevention of real crimes with the right to privacy and digital confidentiality?

The issue is not merely technical but touches on fundamental principles:

  • Presumption of innocence: Monitoring private conversations implies generalized suspicion.
  • Right to privacy: Includes the right to have private thoughts, even disturbing ones.
  • Preventive effectiveness: There is no evidence that digital surveillance actually prevents crime.

Conclusions: Finding the Right Balance

OpenAI's revelation marks a watershed moment in the evolution of artificial intelligence, but the question is not whether reporting is right or wrong in absolute terms: it is how to make it effective, fair, and respectful of rights.

The need is real: concrete threats of violence, plans for attacks or other serious crimes require intervention. The question is not whether to report, but how to do so responsibly.

The fundamental differences to be resolved:

Training and Expertise:

  • Human professionals have established protocols for distinguishing between real threats and fantasies.
  • AI systems require equivalent standards and qualified oversight.
  • Specialized training is needed for those who make final decisions.

Transparency and Control:

  • Professionals operate under the supervision of professional associations.
  • OpenAI, by contrast, needs public criteria and external oversight mechanisms.
  • Users must know exactly when and why they might be reported.

Proportionality:

  • Professionals balance confidentiality with security on a case-by-case basis.
  • AI systems must develop similar mechanisms, not binary algorithms.

For companies in the sector, the challenge is to develop systems that effectively protect society without becoming tools for indiscriminate surveillance. User trust is essential, but it must coexist with social responsibility.

For users, the lesson is twofold:

  1. Conversations with AI do not have the same protections as traditional professionals.
  2. This is not necessarily negative if done in a transparent and proportionate manner, but it is important to be aware of it.

The future of conversational AI requires a new framework that:

  • Recognizes the legitimacy of crime prevention
  • Establishes professional standards for those who manage sensitive content
  • Ensures transparency in decision-making processes
  • Protects individual rights without ignoring security

The right question is not whether machines should report crimes, but how we can ensure that they do so (at least) with the same wisdom, training, and responsibility as human professionals.

The goal is not to return to AI that is "blind" to real dangers, but to build systems that combine technological efficiency with ethics and human expertise. Only then can we have the best of both worlds: security and protected individual rights.

References and Sources

  1. Futurism - "OpenAI Says It's Scanning Users' ChatGPT Conversations and Reporting Content to the Police"
  2. Puce Law Firm - "Attorney-Client Privilege"
  3. The Law for All - "Should a psychologist who knows about a crime report the patient?"
  4. TechCrunch - "Sam Altman warns there's no legal confidentiality when using ChatGPT as a therapist"
  5. Shinkai Blog - "OpenAI's ChatGPT Conversations Scanned, Reported to Police, Igniting User Outrage and Privacy Fears"
  6. Simon Willison - "OpenAI slams court order to save all ChatGPT logs, including deleted chats"
  7. Success Knocks - "OpenAI Lawsuit 2025: Appeals NYT Over ChatGPT Data"