Hacker-City
Technology|March 27, 2026|4 min read

Exclusive: Anthropic's Significant Security Lapse Exposes Unreleased Model and Internal Data

Anthropic has unintentionally revealed sensitive information about an upcoming AI model and an exclusive CEO event due to a significant security oversight in its content management system.

#Anthropic #security #AI #data #content-management-system

AI company Anthropic inadvertently disclosed sensitive details about an upcoming model release, an exclusive CEO event, and a range of internal documents, including images and PDFs, as the result of a significant security oversight.

The information was exposed through the company’s content management system (CMS), which Anthropic uses to publish content across sections of its website.

In total, nearly 3,000 assets linked to Anthropic’s blog were reportedly accessible from this data cache, none of which had been officially published on the company’s public-facing news or research pages, according to Alexandre Pauwels, a cybersecurity researcher at the University of Cambridge whom Fortune consulted to evaluate the material.

After Fortune notified it on Thursday, Anthropic promptly secured the data and restricted public access to it.

Before these fixes, Anthropic stored all of its website content, including blog posts, images, and documents, in a centralized system that was openly accessible without a login. Anyone with sufficient technical knowledge could submit requests to this public-facing system to retrieve information about the files it housed.

Even when content had not been published on Anthropic’s website, the underlying system would return any digital asset stored within it to anyone who knew how to query it. As a result, unpublished materials, including drafts and internal documents, were exposed, posing a significant security risk.

The issue appears to stem from how the CMS Anthropic uses handles access control. By default, every asset uploaded to the central data repository was public unless it was specifically designated as private. The company apparently neglected to restrict access to certain documents, leaving a vast cache of files visible in its public data lake. Cybersecurity experts who reviewed the data noted that many of the company’s assets had public web addresses.
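The failure mode described above, in which an asset is public unless someone explicitly flags it private, can be shown with a minimal sketch. This is a hypothetical data model for illustration, not Anthropic's actual CMS or any specific vendor's API:

```python
# Illustrative only: many headless CMS platforms treat every uploaded
# asset as public unless it is explicitly marked private. This models
# how an unflagged draft ends up visible to unauthenticated queries.

def publicly_visible(assets):
    """Return the assets an unauthenticated public query would see.

    An asset with no 'private' flag at all falls through to the
    public side, which is how unpublished drafts can leak.
    """
    return [a for a in assets if not a.get("private", False)]

assets = [
    {"name": "published-blog-banner.png"},                  # intentionally public
    {"name": "draft-model-announcement.pdf"},               # never flagged, so it leaks
    {"name": "internal-event-invite.pdf", "private": True}, # correctly restricted
]

leaked = publicly_visible(assets)
# The draft PDF appears alongside the banner because no one set
# private=True on it; only the explicitly flagged file is withheld.
```

The safer design inverts the default: assets are private unless explicitly published, so forgetting a flag hides a file rather than exposing it.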

An Anthropic spokesperson informed Fortune that "an issue with one of our external CMS tools led to draft content being accessible." The spokesperson further attributed this situation to "human error in the CMS configuration."

Several notable technology companies have recently suffered incidents involving technical glitches attributed to AI-generated code or AI-driven agents. However, Anthropic, known for its popular Claude AI models and for touting that it automates much of its internal software development with Claude-based AI coding agents, stated that AI was not a contributing factor in this instance.

The CMS issue was “unrelated to Claude, Cowork, or any Anthropic AI tools,” according to the spokesperson. The company also sought to allay concerns about the sensitivity of some of the unsecured material. "These materials were early drafts of content considered for publication and did not involve our core infrastructure, AI systems, customer data, or security architecture,” the spokesperson explained.

While many of the documents appear to consist of unused or discarded assets for previous blog posts, such as images, banners, and logos, some included information that could be considered sensitive. Notably, certain documents contained details about forthcoming product announcements, including insights on an unreleased AI model described by Anthropic as its most advanced model to date.

After engaging with Fortune, the company acknowledged that it is currently developing and testing a new model, which it claims represents a "step change" in AI capabilities, featuring markedly improved performance in "reasoning, coding, and cybersecurity" compared to earlier Anthropic models.

Furthermore, the publicly accessible data encompassed information about an exclusive, invitation-only retreat intended for the CEOs of prominent European companies, which Anthropic CEO Dario Amodei is scheduled to attend. An Anthropic representative clarified that the retreat is “part of an ongoing series of events we’ve hosted over the past year” and emphasized that the company is “developing a general-purpose model with meaningful advances in reasoning, coding, and cybersecurity.”

Among the documents were also images intended for internal purposes, including one labeled with a title that refers to an employee’s "parental leave."

This is not the first instance of a technology company inadvertently exposing internal or pre-release assets by failing to secure them before formal announcements. Notably, Apple has experienced similar leaks through its website on two occasions, while gaming companies such as Epic Games and Nintendo have also faced pre-release image and asset leaks via their content delivery systems.
