Legal Brief: Generative AI and Privacy Clash
January 21, 2026
Security Business Magazine
Records of conversations with AI assistants could be used as evidence, creating a new data risk for you and your company.
Key Highlights
- Your AI chatbot conversations may not be private: A federal magistrate judge ordered OpenAI to produce 20 million anonymized ChatGPT logs in the New York Times copyright lawsuit—raising questions about whether personal or business discussions with AI assistants are discoverable in litigation or government investigations.
- NYT lawsuit tests “fair use” vs. copyright infringement: The Times accuses OpenAI and Microsoft of training chatbots on millions of articles without permission, producing near-verbatim excerpts that threaten journalism revenue—defendants claim fair use, but discovery rulings suggest chat logs are fair game for evidence.
- Security executives should audit AI privacy policies now: Even deleted or anonymized conversations may be preserved and discoverable—review how AI tools store, access, and protect data before revealing personal or company details to assistants like ChatGPT, Alexa, or work-related chatbots.
Technology is so amazing that you can have legitimate, nuanced conversations with computers every day. I use Alexa for home automation, general reference (news, sports, etc.), and sometimes silly discussions about my family and daily life.
It is striking how smart this technology has become, and how quickly. But are your private conversations or messages with Alexa or other chatbots, such as ChatGPT, subject to discovery by the government or by parties in civil litigation? The answer may be yes…and the privacy implications could be profound.