
Eby says it looks like OpenAI could have prevented ‘horrific’ Tumbler Ridge killings

Feb 23, 2026 | 1:20 PM

VICTORIA — British Columbia Premier David Eby says it “looks like” OpenAI had the opportunity to prevent the recent mass shootings in Tumbler Ridge, B.C., in which nine people died, as pressure piles on the artificial intelligence firm over its handling of interactions with 18-year-old shooter Jesse Van Rootselaar.

Eby said Monday there would be a public accounting in which the company explains why police weren’t told in advance about the shooter’s worrisome interactions with its ChatGPT chatbot. Those interactions were flagged internally but were reported only after the Feb. 10 killings by Van Rootselaar, who shot dead her mother, half-brother, five school pupils and a teacher’s aide, then herself.

“From the outside, it looks like OpenAI had the opportunity to prevent this tragedy, to prevent this horrific loss of life, to prevent there from being dead children in British Columbia,” he said.

“I’m angry about that.”

He said that while he was not trying to rush to judgment, he hoped the company would clarify its decisions and that the information would be made public one way or another, either through a coroner’s inquest or a public inquiry.

His remarks come after federal Artificial Intelligence Minister Evan Solomon summoned representatives of OpenAI to Ottawa to discuss safety concerns after learning Van Rootselaar was banned from using the company’s ChatGPT platform months before the killings.

The company banned Van Rootselaar’s account in June but said the activity on the account didn’t meet the threshold for informing law enforcement at the time because its review did not identify credible or imminent planning.

The Wall Street Journal reported Friday that Van Rootselaar’s account was banned after it was flagged for troubling posts, including some depicting scenarios of gun violence.

OpenAI said it contacted the RCMP after the killings at Van Rootselaar’s home and at Tumbler Ridge Secondary School.

Solomon said Monday he was deeply disturbed by the reports and he contacted the American company over the weekend to get more information and to arrange a meeting with its “senior safety team” on Tuesday.

“We will have a sit-down meeting to have an explanation of their safety protocols and their thresholds of escalation to police so we have a better understanding of what’s happening and what they do,” he said.

Solomon would not say whether the federal government intends to regulate AI chatbots like ChatGPT but added that all options are on the table.

Alan Mackworth, a professor emeritus with the University of British Columbia’s department of computer science who focuses on AI safety and ethics, said in a statement that many professionals, such as teachers and doctors, have a “duty to report” any suspected case of harm to or abuse of a minor.

“These obligations are enshrined in law and/or professional ethics. Similar obligations should be placed on social media and AI companies,” he said.

This report by The Canadian Press was first published Feb. 23, 2026.

Wolfgang Depner, The Canadian Press