Private conversations with AI chatbots are facing new scrutiny after two U.S. criminal cases revealed how exchanges with ChatGPT were cited as evidence, raising privacy concerns amid growing data monetisation by tech companies.
Police in Missouri charged 19-year-old college student Ryan Schaefer after he allegedly confessed to vandalising 17 cars in a late-night ChatGPT conversation, asking the chatbot, “how f**ked am I bro?” The case, reportedly the first in which a suspect’s ChatGPT messages were used as evidence, was followed by another in California involving 29-year-old Jonathan Rinderknecht, accused of starting the deadly Palisades Fire after allegedly using ChatGPT to generate images of a burning city.
OpenAI CEO Sam Altman has acknowledged that there are currently “no legal protections” for users’ chats, noting that “people talk about the most personal shit in their lives to ChatGPT.”
Experts warn that the vast troves of intimate data shared with AI chatbots are increasingly being monetised. Meta recently announced plans to use AI chat data to serve targeted ads across Facebook and Instagram, with no opt-out option.
Cybersecurity analysts say this growing overlap between AI utility and surveillance could spark a new tech reckoning, echoing the fallout of the Cambridge Analytica scandal.
As Altman put it, people are now “trusting AI like a therapist, but without the legal privilege.”