
OpenAI staff wanted to warn Canadian cops about trans school shooter months ago, tech giant said no: bombshell report

"While using ChatGPT last June, Van Rootselaar described scenarios involving gun violence over the course of several days."

A new bombshell report from the Wall Street Journal has revealed that employees at OpenAI flagged the ChatGPT writings and queries of Canadian trans school shooter Jesse Van Rootselaar. The employees wanted to alert Canadian authorities, but management said no.

Per the WSJ: "While using ChatGPT last June, Van Rootselaar described scenarios involving gun violence over the course of several days, according to people familiar with the matter."

OpenAI leaders decided against contacting authorities at the time. The company did reach out to the Royal Canadian Mounted Police (RCMP) after the massacre, and a spokesperson said it is supporting the ongoing investigation.

"Our thoughts are with everyone affected by the Tumbler Ridge tragedy," OpenAI said in a statement. The spokesperson also told the WSJ that the company had banned Van Rootselaar's account, but that the activity of the young man "didn't meet the criteria for reporting to law enforcement, which would have required that it constituted a credible and imminent risk of serious physical harm to others."

On February 10, Van Rootselaar killed his mother, Jennifer, and his younger brother. He then proceeded to Tumbler Ridge Secondary School, where he killed a teacher and five students and injured 25 others before turning his firearm on himself.

Van Rootselaar was known to local police before the massacre. Authorities had visited his home several times over mental health concerns and had temporarily removed guns from his residence.

Online platforms have long debated policies that weigh user privacy against informing law enforcement of public safety threats, and AI companies have now had to enter that debate as individuals share the most personal aspects of their lives with chatbots. OpenAI said it has human reviewers who can refer harmful and threatening conversations to law enforcement when they are determined to pose an imminent risk.