The families of victims of a February school shooting in British Columbia filed seven lawsuits Wednesday against OpenAI, the maker of ChatGPT. The suits, filed in federal court in San Francisco, claim that OpenAI's handling of the shooter's use of its AI allowed the shooting to happen. 

The cases could have major implications for future chatbot safeguards and whether companies can be held liable for how people use artificial intelligence. 

The shooting occurred on Feb. 10 when an 18-year-old former student entered a secondary school in Tumbler Ridge, British Columbia, and opened fire using a modified handgun, killing five children and an education assistant, according to news reports. Investigators allege that the shooter had also killed her mother and half-brother. The combined fatalities made this one of the deadliest shootings in Canadian history. The shooter died at the scene, apparently of a self-inflicted gunshot wound.

The shooter had engaged ChatGPT in conversations involving violence before the attack.

OpenAI says it has taken steps intended to address issues raised by the lawsuits.

“We have already strengthened our safeguards, including improving how ChatGPT responds to signs of distress, connecting people with local support and mental health resources, strengthening how we assess and escalate potential threats of violence, and improving detection of repeat policy violators,” an OpenAI spokesperson told CNET in an email.

OpenAI co-founder and chief executive Sam Altman wrote a letter to the families, which was published on the local news site Tumbler RidgeLines.

“The pain your community has endured is unimaginable,” Altman wrote. 

He referred to the shooter’s ChatGPT account, writing, “I am deeply sorry that we did not alert law enforcement to the account that was banned in June.”

CBS News reports that the shooter’s account was flagged in 2025 for misusing ChatGPT for “violent activities” and then banned. OpenAI told CBS that it considered flagging the account to law enforcement but determined it “did not pose an imminent and credible risk of serious physical harm to others.”

According to The Guardian, the shooter was able to create a second account that OpenAI was unaware of until after the shooting. 

More issues for OpenAI

These are not the only legal and regulatory challenges facing OpenAI over its AI chat products. In April, Florida officials announced they were investigating whether a shooter who killed two people at Florida State University in Tallahassee used ChatGPT in connection with the attack.

Separately, a March lawsuit filed by Merriam-Webster and Encyclopedia Britannica says OpenAI improperly used copyrighted material to train its AI systems.

(Disclosure: Ziff Davis, CNET’s parent company, filed a lawsuit against OpenAI in 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

The company is also navigating a series of product and business pressures, including shuttering its generative video model, Sora, and halting work on an adult mode for ChatGPT.

It has also faced scrutiny from investors after missing certain internal revenue and user growth targets ahead of a potential public offering.
