OpenAI failed to disclose Canadian trans shooter’s ChatGPT history in meeting with officials day after shooting

OpenAI said that for a user’s submissions to trigger a law enforcement referral, they must indicate “an imminent and credible risk of serious physical harm to others.”

In a meeting with the British Columbia government the day after a trans-identifying 18-year-old carried out a mass shooting in Tumbler Ridge, OpenAI did not disclose that it had been aware of concerning conversations the shooter had had with its chatbot months earlier.

The province said in a statement that OpenAI waited until the following day to ask its provincial contact to help connect the company with the Royal Canadian Mounted Police. OpenAI handed over evidence that the shooter was banned from using ChatGPT after its automated screening systems flagged his chats last June, according to a company statement.

A previous report found that some employees had wanted the company to alert police about posts involving gun violence, but were ignored.

The Globe and Mail reported that, according to a statement from Premier David Eby's office, a government representative met with OpenAI employees on February 11 about the company's interest in opening a satellite office in Canada. The following day, OpenAI requested contact information for the RCMP.

“That request was sent to the director of policing and law-enforcement services, who connected OpenAI with the RCMP,” the statement said. “OpenAI did not inform any member of government that they had potential evidence regarding the shootings in Tumbler Ridge.”

The RCMP is continuing to investigate the attack carried out by Jesse Van Rootselaar, who killed eight people, including his mother and half-brother, before shooting himself. The victims included five children and a teacher at a local school.

In a statement on Friday, OpenAI said that for a user’s submissions to trigger a law enforcement referral, they must indicate “an imminent and credible risk of serious physical harm to others.” The company said it did not identify “credible or imminent planning” in the shooter’s chats last June.