In September 2025, OpenAI made a revelation that shook the global tech community: ChatGPT actively monitors user conversations and reports potentially criminal content to law enforcement agencies.
The news, which emerged almost by chance in a company blog post, revealed that when automated systems detect users who "are planning to harm others," conversations are routed to specialized pipelines where a small team trained in usage policies reviews them. If human reviewers determine that there is an "imminent threat of serious physical harm to others," the case may be referred to law enforcement.
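This two-stage flow (automated screening, then human review, then possible referral) can be pictured with a short sketch. Everything below is an illustrative assumption: the function names, the threshold, and the keyword scoring are invented for this article and do not reflect OpenAI's actual implementation, which is not public.

```python
# Hypothetical two-stage escalation flow: automated screening first,
# human review second, law-enforcement referral last. Every name and
# threshold here is an assumption made for illustration only.
from dataclasses import dataclass

HARM_THRESHOLD = 0.9  # assumed cutoff; the real value is not public


@dataclass
class Conversation:
    user_id: str
    text: str


def score_harm_to_others(conv: Conversation) -> float:
    """Stand-in for the automated classifier; a real system would use an ML model."""
    keywords = ("hurt", "attack", "kill")
    hits = sum(word in conv.text.lower() for word in keywords)
    return min(1.0, 0.5 * hits)


def human_review_confirms_imminent_threat(conv: Conversation) -> bool:
    """Placeholder for the small trained team's case-by-case judgment."""
    print(f"Routing conversation from {conv.user_id} to human reviewers")
    return False  # reviewers confirm an imminent threat only in rare cases


def handle(conv: Conversation) -> None:
    if score_harm_to_others(conv) < HARM_THRESHOLD:
        return  # the vast majority of conversations never reach a human
    if human_review_confirms_imminent_threat(conv):
        print("Referral to law enforcement")


handle(Conversation("u123", "I plan to attack them and hurt them badly"))
```

The design point the sketch captures is the one at the heart of the controversy: a threshold, not a person, decides which conversations a human reviewer ever sees.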

[Image caption: ChatGPT cordially invites you to share your innermost thoughts. Don't worry, everything is confidential... more or less.]
When we talk to a psychologist, lawyer, doctor, or priest, our words are protected by a well-established legal mechanism: professional secrecy. This principle, rooted in centuries of legal tradition, establishes that certain conversations are inviolable, even in the face of criminal investigations.
The characteristics of traditional professional secrecy are often misunderstood. Contrary to common perception, it is not absolute: there are well-defined exceptions that vary by professional category.
For lawyers (Art. 28 of the Italian Code of Conduct for Lawyers), disclosure is permitted only when strictly necessary, for instance to prevent the commission of a particularly serious crime.
Critical example: If a client tells their lawyer that they intend to commit murder, the protection of life must take precedence over the protection of the right to defense, and the lawyer is released from their duty of confidentiality.
For psychologists (Art. 13 of the Code of Ethics), confidentiality may likewise be breached only in narrow cases, such as a serious danger to the life or safety of the patient or of others.
Important distinction: Private psychologists have greater discretion than public psychologists, who, as public officials, have more stringent reporting obligations.
ChatGPT, by contrast, operates in a legal gray area:
Lack of legal privilege: Conversations with AI do not enjoy any legal protection. As Sam Altman, CEO of OpenAI, admitted: "If you talk to a therapist or a lawyer or a doctor about those issues, there is legal privilege for that. There's doctor-patient confidentiality, there's attorney-client privilege, whatever. And we haven't figured that out yet for when you talk to ChatGPT."
Automated process: Unlike a human professional who evaluates each case individually, ChatGPT uses algorithms to identify "problematic" content, removing qualified human judgment from the initial screening stage.
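For a concrete, public example of this kind of automated classification, OpenAI documents a Moderation API that scores text against categories such as violence and harassment. To be clear, this is a public analogue of automated screening, not the internal monitoring pipeline the article describes; the input string below is invented.

```python
# Screening a message with OpenAI's public Moderation API. This is a
# documented public analogue of automated content classification, not
# the internal monitoring pipeline discussed in the article.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.moderations.create(
    model="omni-moderation-latest",
    input="I'm going to make them pay for what they did.",
)

result = response.results[0]
print("Flagged:", result.flagged)
# Per-category scores (violence, harassment, self-harm, ...)
for category, score in result.category_scores.model_dump().items():
    print(f"{category}: {score:.4f}")
```

Note what is absent from this step: any notion of who is speaking, why, or in what context. That is exactly the qualified judgment the article says is missing from initial screening.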
The situation creates a worrying paradox. Millions of people use ChatGPT as a digital confidant, sharing intimate thoughts, doubts, fears, and even criminal fantasies that they would never share with a human being. As Sam Altman put it: "People talk about the most personal things in their lives to ChatGPT. People use it—especially young people—as a therapist, life coach."
The risk of self-censorship: the awareness that conversations may be monitored could paradoxically drive away the very people who most need to confide, while pushing anyone with real criminal intent toward channels that are not monitored at all.
A crucial aspect highlighted by critics concerns the competence of those who make the final decisions.
Human professionals bring years of training, binding ethical codes, personal accountability, and case-by-case judgment to such decisions. The ChatGPT system, by contrast, operates with automated classifiers and a small team trained in usage policies, whose criteria and accountability remain undisclosed.
Problem example: how is an algorithm supposed to distinguish between a crime novelist researching a plot, a frustrated user venting, and a genuine threat? The sketch below makes the difficulty concrete.
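A deliberately naive keyword screen, invented here for illustration (not OpenAI's system, whose criteria are not public), scores both of the following messages identically:

```python
# A toy keyword screen, invented for illustration: it assigns the same
# risk score to fiction research and to a real threat, because surface
# features alone carry no information about intent.
def naive_risk_score(text: str) -> int:
    flagged_terms = ("poison", "kill", "untraceable")
    return sum(term in text.lower() for term in flagged_terms)

novelist = "For my thriller: which untraceable poison could my villain use to kill?"
threat = "I need an untraceable poison to kill my business partner."

print(naive_risk_score(novelist))  # 3
print(naive_risk_score(threat))    # 3 -- identical score, opposite intent
```

Real classifiers are far more sophisticated than this, but the underlying problem of intent is the same: context, not vocabulary, is what a trained human reviewer adds.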
OpenAI's admission creates a stark contradiction with its previous positions. The company has strongly resisted requests for user data in lawsuits, citing privacy protection. In the case against the New York Times, OpenAI argued strenuously against the disclosure of chat logs to protect user privacy.
The irony of the situation: OpenAI defends user privacy in court while simultaneously admitting to monitoring and sharing data with external authorities.
The situation has been further complicated by a court order requiring OpenAI to retain all ChatGPT logs indefinitely, including private chats and API data. This means that conversations users believed to be temporary are now permanently archived.
As Sam Altman has suggested, it may be necessary to develop a concept of "AI privilege": legal protection similar to that offered to traditional professionals. However, this raises complex questions.
Possible regulatory options include "compartmentalized" AI, a "tripartite" approach, and lessons drawn from other regulated sectors.
The OpenAI case sets important precedents for the entire artificial intelligence industry, both for companies developing conversational AI and for the companies that use it.
The central dilemma: How to balance the prevention of real crimes with the right to privacy and digital confidentiality?
The issue is not merely technical but touches on fundamental principles.
OpenAI's revelation marks a watershed moment in the evolution of artificial intelligence. And the need is real: concrete threats of violence, plans for attacks, and other serious crimes require intervention. The question is therefore not whether to report, but how to do so in a way that is effective, fair, and respectful of rights.
The fundamental differences still to be resolved concern training and expertise (who reviews flagged conversations, and with what qualifications?), transparency and control (by what criteria are conversations escalated, and do users know?), and proportionality (how to intervene on genuine threats without monitoring everyone?).
For companies in the sector, the challenge is to develop systems that effectively protect society without becoming tools for indiscriminate surveillance. User trust is essential, but it must coexist with social responsibility.
For users, the lesson is twofold: conversations with an AI enjoy no legal privilege, and they may be stored, reviewed, and even shared long after they are written.
The future of conversational AI requires a new framework, one that protects genuine confidences while still allowing intervention against real threats.
The right question is not whether machines should report crimes, but how we can ensure that they do so (at least) with the same wisdom, training, and responsibility as human professionals.
The goal is not to return to AI that is "blind" to real dangers, but to build systems that combine technological efficiency with ethics and human expertise. Only then can we have the best of both worlds: security and protected individual rights.