Chad, however, remained silent.
Then, shortly before noon on April 17, 2025, Chad got the most ominous text of all: Phoenix asked how to disengage the safety on a shotgun. Chad answered, and less than three minutes later, Phoenix opened fire in the Florida State University student union, killing two people and injuring several others.
Everything about that story is true, according to officials, except for one crucial fact: The man accused in the shooting, Phoenix Ikner, wasn’t texting with a person named Chad, but rather with an entity we call Chat. He was speaking, as you may well have realized, with ChatGPT, the most popular artificial intelligence platform in the world.
According to logs obtained by news outlets from the state’s attorney’s office, Chat supplied its interlocutor with information regarding firearms and school shootings. Chat told him when the student center would be busiest. And, yes, Chat told him how to disengage the safety on his gun.
Given these allegations, it shouldn’t surprise anyone that the attorney general of Florida, James Uthmeier, has opened a criminal investigation of OpenAI, the maker of ChatGPT. As Uthmeier said at a news conference in Tampa, “My prosecutors have looked at this, and they’ve told me if it was a person on the other end of the screen, we would be charging them with murder.”
Lest you think that the Florida State story is a tragic one-off, an aberrational tale of an A.I. run amok, I’d urge you to read Mark Follman’s long, disturbing report in Mother Jones chronicling incident after incident in which ChatGPT provided encouragement and assistance to violent and suicidal individuals.
Source: www.nytimes.com
