OpenAI says it’s further strengthening its criteria for referring users’ behaviour to law enforcement “based on the Tumbler Ridge tragedy and the Canadian context,” and taking other new safety measures, after meeting with federal ministers in Ottawa this week.
The company behind ChatGPT has faced criticism after it was revealed, more than seven months later, that it had flagged and banned an account in June 2025 belonging to the shooter who killed eight people in Tumbler Ridge, B.C. However, the account wasn’t flagged to law enforcement until after the shooting, because the company determined last summer that there was no “imminent” threat.

In a letter to ministers Thursday, OpenAI said it had already taken steps “several months ago” to improve those criteria, based on guidance from mental health, behavioural and law enforcement experts, making the threshold for an imminent threat “more flexible” and accounting for “a potential risk of imminent violence.”
“With the benefit of our continued learnings, under our enhanced law enforcement referral protocol, we would refer the account banned in June 2025 to law enforcement if it were discovered today,” the company wrote.
More to come…
© 2026 Global News, a division of Corus Entertainment Inc.

