Scrutiny over how OpenAI handled information about the Tumbler Ridge, B.C., mass shooter months before the deadly tragedy gives Canada an opportunity to consider regulations requiring artificial intelligence companies to inform police in similar scenarios, experts say.

The company behind ChatGPT confirmed last week it “proactively” identified and banned an account associated with Jesse Van Rootselaar in June 2025 for misusing the AI chatbot “in furtherance of violent activities.”

However, it did not inform police at that time because the activity did not meet the higher internal threshold of an “imminent” threat.

OpenAI ultimately contacted RCMP after police say 18-year-old Van Rootselaar killed eight people and wounded 25 others on Feb. 10, before taking her own life.

Artificial Intelligence Minister Evan Solomon summoned OpenAI representatives to Ottawa on Tuesday to discuss the situation and the company’s safety practices.

Solomon told reporters Tuesday before the meeting that “all options are on the table when it comes to understanding what we can do about AI chatbots.”

Heritage Minister Marc Miller, whose ministry is working with Solomon’s to develop online safety legislation that would cover AI platforms, said the government is taking the time to get that bill right and wouldn’t tie it to what happened in Tumbler Ridge.

“I think there is the need to have legislation to make sure that platforms are behaving responsibly,” he said. “What that looks like is still to be determined, and I can’t discuss timelines with you on that.

“I think in this situation, there is legitimate thirst for easier answers, but I don’t think there are easy answers in this case, particularly with an open investigation. But … we need better answers than the ones we’ve gotten so far.”

Canada’s privacy legislation says private companies “may” — not must — disclose personal information to authorities or another organization if they believe there is a risk of significant harm or that a law will be broken.

Any further decision-making is left to the company itself, leading to internal thresholds such as OpenAI’s “imminent” threat standard.

“This is yet another sign that there is a risk with letting OpenAI and other AI developers decide for themselves what is an appropriate safety framework,” said Vincent Paquin, an assistant professor of psychiatry at McGill University who researches the relationship between digital technologies and the mental health of young people.


“Ultimately, ChatGPT is a commercial product. It’s not an approved health-care device. And so it is concerning to see that there are increasing amount of people turning to ChatGPT and other AI products for mental health support and for sensitive discussions about things going on in their lives, without having a clear understanding of the safety of those interactions and the safety mechanisms that are in place.”


The revelations come as OpenAI and other AI chatbot makers face multiple lawsuits in the U.S. over allegations their platforms helped drive young people to suicide and self-harm.

OpenAI denies those allegations and says its safety systems refuse most, if not all, requests for harmful content, such as hateful and violent rhetoric and advice related to suicidal ideation.

The Wall Street Journal, which first reported OpenAI’s prior knowledge of Van Rootselaar’s ChatGPT activity, said her posts “described scenarios involving gun violence over the course of several days,” according to people familiar with the matter.

The report said company employees were alarmed by the posts and wrestled with whether to alert police last summer before the company ultimately opted not to.

Global News has not independently verified the details in the report.

The B.C. government said in a statement Saturday that OpenAI officials met with a government representative on Feb. 11 — the day after the shooting — for “a meeting scheduled weeks in advance” to discuss the possibility of opening OpenAI’s first Canadian office.

“OpenAI did not inform any member of government that they had potential evidence regarding the shootings in Tumbler Ridge,” the government said, but noted OpenAI requested contact information for the RCMP from the province on Feb. 12.

Canada’s privacy commissioner, Philippe Dufresne, has previously said not having a Canadian business office to contact makes it more difficult for his agency to investigate tech companies like TikTok.

Brian McQuinn, an associate professor at the University of Regina and co-director of the Centre for Artificial Intelligence, Data, and Conflict, said the tech industry in general has deprioritized internal safety regulation ever since Elon Musk took over Twitter in 2022, rebranding it as X.

“Basically (after he) fired all the teams doing that kind of work, the other (social media) companies sort of followed suit and realized they could get away with it, too,” he said. “So less staff overhead and fewer headaches being created by your own staff by letting you know things.

“If you don’t know, then you can’t be held responsible.”

Dufresne’s office has launched an investigation into Musk-owned xAI and its Grok chatbot, which is built into the X social media platform, over allegations it facilitated the spread of non-consensual sexualized deepfake images of women and children. Other companies and U.S. states are conducting similar probes.

Musk has criticized the investigations as attempts to stifle free speech and expression.

Sharon Bauer, a privacy lawyer and AI governance strategist based in Toronto, said it’s important for any future legislation or regulation to strike the “fine balance” between individual privacy and the duty to warn of potential threats.

She said the term “imminent” is key.

“That is a really important threshold, because anything lower than that threshold would mean that they would be notifying law enforcement of things that may end up stigmatizing people or creating false positives, which would of course harm those individuals,” she said.

At the same time, Bauer added, “anything too high would mean missing genuine threats, which may have been the case in this situation.”

“I’m hoping that we’ll get answers about this, if they documented their reasoning about why they didn’t contact law enforcement, and that’s going to be really important to analyze and figure out if they made that right decision,” she said.

McQuinn said he also wants to see data about who has been removed from AI chatbot and social media platforms for threatening to harm themselves or others, and whether there was any real-world follow-up on those individuals.

“If the answer’s no, then they are just putting their heads in the sand,” he said.

“These companies (are worth) trillions of dollars, so the amount of money they spend on anything related to staffing and safety is negligible.”

He added that Canada’s forthcoming AI strategy needs to pair economic benefits and adoption strategies with robust safety protocols that answer these critical questions.

Paquin cited a recent California law, which requires large AI companies like OpenAI to report to the state any instances of their platforms being used for potentially “catastrophic” activities, as a model for potential Canadian regulation.

However, that law defines a catastrophic risk as something that would cause at least $1 billion in damage or more than 50 injuries or deaths.

The law has been praised by some AI companies like Anthropic for balancing public safety with allowing continued “innovation.”

“We should ask for more transparency and we should also think about a way of having an external oversight over those activities, because we cannot let the AI developers be their own judge, the judge of their own safety,” Paquin said.

—with files from Global’s Touria Izri
