Back to blog

The Spookiest LLM Security Breaches: A Halloween Special

AI
The Spookiest LLM Security Breaches: A Halloween Special

Forget haunted houses! Today, the real chills come from ChatGPT and its AI friends spilling secrets. Join us as we enter the LLM crypt of digital horrors. In this article, we highlight some of the major AI nightmares of the past 2 years - not to scare you, but make you more sensitive to your AI work.

Prefer watching? We got you!

🎃 The Spookiest LLM Security Breaches: A Halloween Special

ChatGPT’s Ghost in the Machine: The 2023 Data Breach

GPT Data Breach
GPT Data Breach

In March 2023, OpenAI's ChatGPT suffered its first significant data breach. A bug in an open-source library exposed user information, including chat histories and payment details. (sangfor.com) This incident highlighted the vulnerabilities inherent in AI systems and the potential risks of integrating them into everyday applications.

Samsung’s Haunted Code: Sensitive Data Leaked to ChatGPT

Samsung’s Haunted Code: Sensitive Data Leaked to ChatGPT
Samsung’s Haunted Code

In May 2023, a Samsung engineer inadvertently uploaded sensitive internal source code to ChatGPT. The AI's processing of this data led to a leak, prompting Samsung to ban the use of ChatGPT and other AI-powered chatbots among its employees. (forbes.com) This breach underscored the dangers of using AI tools without proper oversight in corporate environments.

Chevrolet’s Trick-or-Treat Car Sale: $70K SUV for $1

Chevrolet’s Trick-or-Treat
Chevrolet’s Trick-or-Treat

In December 2023, a hacker tricked a Chevrolet dealership's AI chatbot into offering a $70,000 car for just $1. The chatbot confirmed the transaction with a casual "No takesies backsies." (cybernews.com) This incident revealed the vulnerabilities in AI-driven sales processes and the potential for exploitation by malicious actors.

ShadowLeak: The Zero-Click Nightmare

DeepSeek’s Exposed Tomb
DeepSeek’s Exposed Tomb

Also in 2025, researchers discovered a zero-click, server-side vulnerability in ChatGPT's Deep Research agent, known as "ShadowLeak." This flaw allowed attackers to extract sensitive information directly from OpenAI's servers without any user interaction, making it nearly impossible for victims to detect. (techradar.com)

DeepSeek’s Exposed Tomb: Database Left Online

DeepSeek’s Exposed Tomb
DeepSeek’s Exposed Tomb

In 2024, a critical database belonging to the Chinese AI platform DeepSeek was discovered exposed on the internet. The database contained system logs, user prompts, and over a million API authentication tokens. (wired.com) This breach highlighted the importance of securing AI infrastructure against unauthorized access.

Italy’s €15 Million Witch Hunt: OpenAI Fined

OpenAI Fined
OpenAI Fined

In December 2024, Italy's privacy watchdog fined OpenAI €15 million for processing users' personal data without sufficient legal basis and violating transparency and information obligations. (reuters.com) This regulatory action emphasized the need for AI companies to adhere to data privacy laws.

Russian Disinformation Web: AI’s Creepy Crawlers

Russian Disinformation Web
Russian Disinformation Web

A 2024 study revealed that a Russian network, Pravda, used AI chatbots to spread disinformation across 150 websites, which were then amplified by AI models like ChatGPT and Microsoft Copilot (Informit). This incident showed the potential for AI to be weaponized in information warfare.

Dark Web Specters: OpenAI Credentials Stolen

Dark Web Specters: OpenAI Credentials Stolen
Dark Web Specters: OpenAI Credentials Stolen

In 2025, over 225,000 OpenAI credentials were discovered for sale on the dark web after infostealer malware attacks. (metomic.io) This breach highlighted the need for strong access controls and monitoring for AI systems.

Prompt Injection Haunting: Malicious Web Content

Prompt Injection Haunting
Prompt Injection Haunting

In December 2024, malicious instructions hidden on a web page caused ChatGPT's search integration to produce dangerous or misleading suggestions. (wald.ai) This vulnerability highlighted the risks of prompt injection attacks in AI systems.

Microsoft 365 Copilot’s Zombie Data Leak

Microsoft 365 Copilot’s Data Leak
Microsoft 365 Copilot’s Data Leak

In 2025, a flaw in Microsoft 365 Copilot allowed attackers to access and exfiltrate sensitive tenant data without any user interaction. Dubbed the "EchoLeak," this vulnerability enabled hackers to retrieve information via Microsoft Teams and SharePoint URLs. (truesec.com) This breach posed significant risks to enterprise security.

Conclusion: Navigating the Haunted AI Landscape

These spine-chilling incidents serve as stark reminders of the vulnerabilities inherent in AI systems. Organizations must implement robust security measures, adhere to data privacy regulations, and remain vigilant against potential threats to ensure the AI future isn’t haunted by unforeseen risks.


Eliot Knepper

Eliot Knepper

Co-Founder

I never really understood data - turns out, most people don't. So we built a company that translates data into insights you can actually use to grow.