ChatGPT flagged violent content — but police were not notified
A man from British Columbia, Canada, used ChatGPT to describe gun violence in a way that triggered the company's internal monitoring systems. The account, belonging to Jesse Van Rootselaar, was flagged in June 2025 for "promotion of violent activities" and subsequently banned from the service, according to reporting by TechCrunch.
What makes the case particularly serious is what happened afterward: Van Rootselaar carried out a mass shooting. The question now being asked is whether an earlier police report could have prevented it.
Internal debate over the threshold for police notification
According to TechCrunch's review of the case, OpenAI employees actively discussed whether they should contact the police when the account was flagged. The company has guidelines that allow for notifying law enforcement if an interaction represents an "imminent and credible risk of serious physical harm to others."
However, the internal conclusion was that the conversations did not reach this threshold. The account was blocked, but no police report was sent.
OpenAI concluded that the chats did not meet the bar for an imminent threat; the man nonetheless went on to carry out a mass shooting.
After the shooting, OpenAI proactively contacted Canada's federal police, the RCMP, and offered to share the information it held.

How OpenAI's monitoring system works
OpenAI uses a combination of automated classifiers, content filters, and hash matching to detect abuse of its services. When suspicious content is detected, it is routed to dedicated review pipelines where human moderators, trained on the company's guidelines, assess its severity.
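The internal pipeline itself is not public, but OpenAI does document a public Moderation API that performs the automated-classification step in isolation. The sketch below uses that public endpoint purely as an illustration; the 0.9 escalation threshold and the escalate_to_review function are hypothetical placeholders, not OpenAI's actual policy or tooling.

```python
# Illustrative sketch only: this uses OpenAI's public Moderation API, a
# documented endpoint that is separate from whatever internal tooling the
# company runs. The threshold and the review function are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ESCALATION_THRESHOLD = 0.9  # hypothetical cut-off, not OpenAI's real policy


def escalate_to_review(message: str, violence_score: float) -> None:
    """Hypothetical stand-in for queueing a message for human moderators."""
    print(f"Escalating for human review (violence score {violence_score:.2f})")


def screen_message(message: str) -> bool:
    """Return True if the moderation model flags the message."""
    response = client.moderations.create(
        model="omni-moderation-latest",
        input=message,
    )
    result = response.results[0]

    if result.flagged:
        # category_scores holds per-category confidence values in [0, 1]
        violence_score = result.category_scores.violence
        if violence_score >= ESCALATION_THRESHOLD:
            escalate_to_review(message, violence_score)
        return True
    return False


if __name__ == "__main__":
    screen_message("Example user message to screen")
```

Hash matching and the human-review stage described above sit outside this snippet; the public endpoint only covers the automated-classification part of the pipeline.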

The challenging boundary
The case highlights a fundamental challenge for AI companies: where should the threshold sit for reporting a user to the authorities based on chat content? If the threshold is too low, there is a risk of mass surveillance and privacy violations. If it is too high, real threats may not be caught in time.
Harvard researcher Michelle Martin, who studies labor law, has previously criticized the practice, warning of what she describes as a drift toward greater state surveillance, according to background material cited by TechCrunch. Public defender Stephen Hardwick has pointed out that legal professionals who use AI tools could be put in a difficult position if client conversations can potentially be reported to the police.
OpenAI's dilemma going forward
The case will likely reignite debate over exactly what should qualify as an "imminent threat" under AI companies' guidelines. OpenAI has not said whether it plans to adjust its threshold for contacting police in the wake of the incident in British Columbia.
It is worth noting that this account rests on information reported by TechCrunch, and that OpenAI's internal assessments have not been independently verified in full. Exactly what the conversations contained, and which internal processes were followed, is not fully known.
