ZDNET key takeaways

- Some OpenAI customer data was exfiltrated in a supply chain attack.
- The attack only affected visitors to OpenAI’s API documentation.
- The damage was minimal yet noteworthy.

In case you missed it, which would have been easy to do given the timing, OpenAI — the company responsible for generative AI solutions like ChatGPT and Sora — announced on Thanksgiving eve that some of its customer data had been stolen as the result of a type of cyber intrusion known as a supply chain attack.
A supply chain attack occurs when threat actors, rather than going after a major tech brand like OpenAI directly, launch their attack against one of the third-party solutions used by that brand.
Also: OpenAI is training models to ‘confess’ when they lie – what it means for future AI
Supply chain attacks have become the “in-thing” for threat actors. If you’re a cybercriminal and the main target of your attack (in this case, OpenAI) is doing a good job with its defenses, there’s always a chance that one of its suppliers is vulnerable. That’s how hundreds of global brands had their Salesforce data stolen: the threat actors conducted a supply chain attack on Salesloft’s Drift, a third-party add-on that many Salesforce customers use to integrate AI-driven chatbot functionality into their websites and apps.
(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
In the case of the supply chain attack on OpenAI’s data, the threat actors targeted Mixpanel, an analytics provider that, as a result of the incident, OpenAI no longer uses.
How threat actors got to OpenAI’s user data

Mixpanel detected the attack on November 8 and notified OpenAI of the breach on November 9; it provided more details about the affected dataset on November 25. OpenAI then made its own summary of the affair public on November 26, one day before Thanksgiving.
As with almost all data breaches, a form of social engineering was involved. Someone with access to Mixpanel’s systems was successfully smished (the SMS version of phishing) for their credentials to a Mixpanel system that housed OpenAI’s data, and possibly similar data belonging to other Mixpanel customers.
Also: Your favorite AI tool barely scraped by this safety review – why that’s a problem
I reached out to Mixpanel’s CMO, Stephanie Robotham, for this story, but she hasn’t responded. OpenAI got back to me immediately.
Thankfully, general users of ChatGPT and other OpenAI solutions were not impacted. The analytics data that was compromised only covered user accounts associated with OpenAI’s developer portal, an OpenAI subdomain located at platform.openai.com, where software developers can discover how to engage with the company’s API.
What is an API, and who would use one?

APIs are something I know a thing or two about. For almost 10 years, I served as the editor-in-chief of ProgrammableWeb.com, which was widely regarded as the official journal of the API economy.
When most of us interact with an app or website, we rely on that app or website’s user interface (UI) to do our bidding. An API serves the same purpose as a UI; it’s just built for a different type of user — an application. In the same way that you can filter eBay’s automobile listings down to 1964 Corvette Convertibles (my dream car) through eBay’s UI, an external application can programmatically do the same thing, and sometimes even more powerful things, through eBay’s API.
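To make that concrete, here’s a minimal sketch of what such a programmatic query might look like. To be clear, the endpoint, parameters, and response fields below are hypothetical stand-ins, not eBay’s actual API:

```python
import requests  # a widely used Python HTTP library

# Hypothetical listings endpoint; a real marketplace API (such as eBay's)
# defines its own URL, query parameters, and authentication scheme.
API_URL = "https://api.example-marketplace.com/v1/listings"

response = requests.get(
    API_URL,
    params={
        "category": "automobiles",
        "model": "Corvette Convertible",
        "year": 1964,
    },
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder credential
    timeout=10,
)
response.raise_for_status()

# Instead of a rendered web page, the API returns structured data (JSON),
# which is what makes it consumable by other applications.
for listing in response.json().get("items", []):
    print(listing["title"], listing["price"])
```

The difference from the UI isn’t what you can ask for; it’s who’s asking, and in what form the answer comes back.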
Much of the seismic growth of the AI category is attributable to third-party applications and their developers who rely on APIs to build the functionality of services like ChatGPT and Sora directly into their tools. APIs are a big deal in the world of AI. It’s nearly impossible to discuss AI without also mentioning the Model Context Protocol (MCP), which essentially serves as a standard API that enables any app to work programmatically with any large language model (LLM).
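For a sense of what that developer-side integration looks like, here’s a minimal sketch using OpenAI’s official Python SDK. The model name is just an example, and the client reads its credential from the OPENAI_API_KEY environment variable; API keys like that one were, per OpenAI, not exposed in this breach:

```python
from openai import OpenAI

# The client automatically picks up the OPENAI_API_KEY environment variable.
client = OpenAI()

# A minimal chat completion request; "gpt-4o-mini" is an example model name.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Summarize what an API is in one sentence."}
    ],
)

print(response.choices[0].message.content)
```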
Also: AI models know when they’re being tested – and change their behavior, research shows
Many of the AI-driven video production tools available are clever front ends that draw their extraordinary capabilities from the APIs offered by existing video/diffusion models (which are architecturally distinct from LLMs), such as OpenAI’s Sora, Google’s VEO, and Luma’s Dream Machine. Some of these tools give users the option of choosing which model to use when producing a video (in which case, the tool is simply switching from one model’s API to another’s, as the sketch below illustrates).
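Under the hood, that model-switching often amounts to little more than a dispatch table of API clients. The sketch below is purely illustrative; the function names are hypothetical stubs standing in for each provider’s real SDK calls:

```python
from typing import Callable, Dict

# Hypothetical stubs; each provider's real video-generation API has its own
# SDK, endpoints, parameters, and (typically asynchronous) job handling.
def generate_with_sora(prompt: str) -> str:
    return f"[Sora video for: {prompt}]"

def generate_with_veo(prompt: str) -> str:
    return f"[VEO video for: {prompt}]"

def generate_with_dream_machine(prompt: str) -> str:
    return f"[Dream Machine video for: {prompt}]"

# The front end's "choose your model" menu maps directly onto this table.
BACKENDS: Dict[str, Callable[[str], str]] = {
    "sora": generate_with_sora,
    "veo": generate_with_veo,
    "dream-machine": generate_with_dream_machine,
}

def generate_video(prompt: str, model: str = "sora") -> str:
    try:
        backend = BACKENDS[model]
    except KeyError:
        raise ValueError(f"Unknown model: {model!r}") from None
    return backend(prompt)

print(generate_video("A 1964 Corvette Convertible at sunset", model="veo"))
```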
Therefore, it was primarily developers with an interest in using OpenAI’s APIs who were impacted by this breach. And while any data breach is bad, those developers have probably taken comfort in knowing that sensitive information, such as their passwords and any OpenAI API keys, was not compromised. According to OpenAI, for registered users who engaged with its developer subdomain, the following fields of information were exfiltrated in the breach:
- The name provided on the API account
- Email address associated with the API account
- Approximate coarse location based on the API user’s browser (city, state, country)
- Operating system and browser used to access the API account
- Referring websites
- Organization or user IDs associated with the API account

Additionally, OpenAI is reaching out directly to the impacted users to inform them of any precautionary remedial steps they should take. OpenAI spokesperson Nico Felix told ZDNET, “We proactively notified all users and customers who may have at some point accessed platform.openai.com. That outreach was intentionally broad to ensure no potentially affected customer was left out.”
Also: Anthropic wants to stop AI models from turning evil – here’s how
For regular users of ChatGPT and OpenAI’s other services, no specific action is required in response to this attack. However, the company used the incident as a reminder to all users that they should take advantage of the additional account security that’s available to them through OpenAI’s multifactor authentication option (found in the Security section of OpenAI’s user settings dialog, as shown in the screenshot below):
OpenAI reminded users that MFA is available to them if they want to improve the security of their ChatGPT credentials.
Screenshot by David Berlind/ZDNET

Understanding the breach

Again, all breaches are bad. But while this breach might have tarnished the OpenAI brand, it pales in comparison to the impact of breaches that we’ve witnessed over recent years.
Even so, dozens of media outlets rushed out articles with all sorts of sketchy conclusions and advice. For example, one site noted that “if you use third-party tools that plug into OpenAI’s APIs … you should be aware that you’re at risk.”
Also: AI’s not ‘reasoning’ at all – how this team debunked the industry hype
According to OpenAI, this assertion is false.
Based on my knowledge of APIs and how developers use them, the user of an application that relies on OpenAI’s APIs does not inherit the risk that developers were exposed to in this incident. However, I double-checked with OpenAI’s Felix, who told me: “The incident only impacted people who were using platform.openai.com; customers of a developer’s app were not impacted.”
Other suggestions that cropped up across the web recommended changing your ChatGPT password. I have no idea why. It could be just the standard ill-founded knee-jerk if-then-else response: “If there’s a breach, then change password. Problem solved? Nope, but who cares? I did something about it.”
Also: GPT-5 is speeding up scientific research, but still can’t be trusted to work alone, OpenAI warns
When I first saw this advice (before I had the gory details of the attack), I decided as a matter of responsible credential hygiene to double-check my ChatGPT credentials. Heck, on Thanksgiving Day, it seemed like a good use of the time between when I started to cook the turkey and when the side dishes had to go into the oven.
This is when I discovered I probably should have made a better choice when initially registering to use ChatGPT. Instead of creating a dedicated credential for my ChatGPT usage, I opted to sign up and log in with my Google account — a decision that can be reversed on many other sites offering the same option, but not on OpenAI’s sites. According to Felix, OpenAI has no imminent plan to make that decision reversible. I really wish it would.
Had I known about the permanence of my decision when I originally had a choice between logging in via Google’s SSO and establishing a dedicated user ID and password for ChatGPT, I would have taken the latter route.
Finally, OpenAI also used the incident to remind all internet users that any personal information exfiltrated in the course of an attack, even if that information isn’t secret, at a minimum arms threat actors with additional data they can use to make their social-engineering attempts more personalized and convincing.