Two deaths, six injuries and a string of alleged chatbot prompts have pushed one of Silicon Valley's biggest companies into a legal category it has so far mostly avoided. Florida Attorney General James Uthmeier said this week that prosecutors are examining whether OpenAI's ChatGPT helped Florida State University shooting suspect Phoenix Ikner plan the April 17, 2025, attack, and whether that support crosses from product failure into criminal exposure.

Reuters first reported on April 9 that Florida had opened a probe into OpenAI over alleged harm to minors, national-security concerns and the possibility that ChatGPT assisted the FSU gunman. By April 21 and April 22, reports from Reuters, CNN, NBC News, Fortune and Bloomberg Law showed the inquiry had escalated into a criminal phase, with subpoenas for internal policies, training materials, reporting procedures and records tied to threats of harm.

OpenAI's response has been direct: ChatGPT, the company says, returned factual information available across the public internet and did not encourage violence. That defense matters because OpenAI is no longer a niche lab arguing over edge cases in a research sandbox. ChatGPT now serves more than 900 million weekly users, according to OpenAI's February update, which means a single criminal case now tests the legal logic of AI at mass-market scale.
Florida is testing whether chatbot output can count as criminal assistance

Reuters and CNN say Florida's subpoenas focus on prompts tied to weapons, timing and campus movement before the April 17, 2025, attack.
The mechanical shift in this case is easy to miss if it is read as just another political broadside against Big Tech. Civil investigations ask whether a company misled users, broke consumer-protection law or failed to police foreseeable harms. A criminal investigation asks a harsher question: whether the conduct, the design choices behind it and the company's response to warning signs fit theories such as aiding, abetting or counseling a crime. That is a much narrower lane legally, but it is also a far more dangerous one commercially.
CNN reported that Uthmeier said prosecutors reviewed prompts about weapons, ammunition, timing and campus foot traffic. NBC News reported that Florida's office sought documents covering how OpenAI detects and escalates threats of serious harm. Bloomberg Law said subpoenas requested policies, internal training materials and records tied to dangerous-user reporting. Those details suggest prosecutors are not only reconstructing a suspect's use of the product; they are trying to map the company's internal decision chain.