Sam Altman Apologizes to British Columbia Community, Wonders Who Could Have Stopped Such Violence

OpenAI, in an unsigned blog post:

Mass shootings, threats against public officials, bombing attempts, and attacks on communities and individuals are an unacceptable and grave reality in today’s world. These incidents are a reminder of how real the threat of violence is—and how quickly violent intent can move from words to action.

People may also bring these moments and feelings into ChatGPT. They may ask questions about the news, try to understand what happened, express fear or anger, or talk about violence in ways that are fictional, historical, political, personal, or potentially dangerous. We work to train ChatGPT to recognize the difference—and to draw lines when a conversation starts to move toward threats, potential harm to others, or real-world planning.

We’re sharing what we do to minimize uses of our services in furtherance of violence or other harm: how our models are trained to respond safely, how our systems detect potential risk of harm, and what actions we take when someone violates our policies. We are constantly improving the steps we take to help protect people and communities, guided by input from psychologists, psychiatrists, civil liberties and law enforcement experts, and others who help us navigate difficult decisions around safety, privacy, and democratized access.

Maggie Harrison Dupré, writing at Futurism:

Reading it, someone with limited context would come away with the impression that the company was talking about concerns that were still theoretical: that it was proactively trying to head off bad things that might happen.

That suggestion is bizarre, though, because the reality is that OpenAI’s flagship chatbot has already been linked to a wide range of real-world violence.

In fact, the most extraordinary thing that OpenAI neglected to mention was what almost certainly motivated the post in the first place: the company published the blog as news organizations — Futurism included — were reaching out to ask the company for comment on a new round of seven lawsuits it’s facing from the families of the victims of the February school massacre in Tumbler Ridge, British Columbia, which would be made public the next day.

Though the blog post made no mention of it, the Tumbler Ridge shooter was a ChatGPT user. Weeks after the tragedy rocked the rural town in February of this year, the Wall Street Journal revealed that back in June 2025, OpenAI’s automated moderation tools had flagged the shooter’s account for graphic descriptions of gun violence. Human reviewers were so alarmed that several pushed OpenAI leaders to alert local officials. Those leaders chose not to, and the company moved instead to deactivate that specific account. As OpenAI later admitted, though, the shooter simply opened a new account (a workaround that OpenAI’s own customer service has been found encouraging users to take after deactivation) and continued to use the service.

Last week, Sam Altman offered an apology to the Tumbler Ridge community, writing:

I want to express my deepest condolences to the entire community. No one should ever have to endure a tragedy like this. I cannot imagine anything worse in this world than losing a child. My heart remains with the victims, their families, all members of the community, and the province of British Columbia.

I am deeply sorry that we did not alert law enforcement to the account that was banned in June. While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered.

I reaffirm the commitment I made to the Mayor and the Premier to find ways to prevent tragedies like this in the future. Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again.