Daily News

OpenAI reports user chats to police.

OpenAI has admitted to scanning user chats for harmful content, escalating flagged conversations to human reviewers and, in some cases, to law enforcement, raising concerns about privacy and consistency in its policies.

For nearly a year, reports have surfaced about AI chatbots driving people toward self-harm, delusions, hospitalization, arrests, and even suicide. Families of those affected have called for safeguards, but AI companies have been slow to respond. OpenAI, frequently named in these cases, had largely offered only vague assurances. Until now.

In a recent blog post, OpenAI acknowledged certain failures and disclosed that it is now actively scanning user conversations for harmful content. Potentially dangerous chats are escalated to human reviewers, and in some cases, flagged to law enforcement.

The company explained that if a conversation suggests plans to harm others, it may be routed to specialized review teams who can take action, including banning accounts. If the reviewers identify an imminent risk of serious physical harm, the case may be reported to the police. However, OpenAI clarified that self-harm cases are not currently being referred to law enforcement, citing the private nature of user interactions.

This new approach raises questions. OpenAI’s policies prohibit using its tools to promote suicide, create weapons, harm others, or compromise security systems. Yet the company has not specified the exact threshold for escalating chats, leaving ambiguity about what might trigger human review or a referral to police.

Critics point out contradictions in OpenAI’s messaging. While the company claims to prioritize user privacy, even fighting in court to avoid handing over user chats to publishers like the New York Times, it simultaneously admits to monitoring conversations and potentially sharing them with authorities.

The tension reflects a broader dilemma: OpenAI faces backlash for harmful outcomes tied to its technology but lacks a clear way to safeguard users without undermining its privacy promises. CEO Sam Altman has already warned that using ChatGPT as a therapist or lawyer does not offer confidentiality, especially as legal battles might force the company to share chat logs in court.

Caught between public safety, privacy, and mounting lawsuits, OpenAI is under increasing pressure to reconcile its promises with its practices.

Powered by Markelitics.com
