Google's Vertex AI Has an Over-Privileged Problem
Researchers from Palo Alto Networks have identified significant vulnerabilities in Google’s Vertex AI, highlighting how attackers might manipulate AI agents to extract sensitive data and compromise secured cloud infrastructure.

AI agents are increasingly being employed by organizations to streamline complex business operations. However, if the permissions are not meticulously configured, these agents can be exploited against the very organizations that deploy them.
The recent study conducted by Palo Alto Networks illustrates potential risks on Google Cloud's Vertex AI platform. It underscores the dangers posed by excessive default permissions that may allow adversaries to exploit deployed AI agents to access confidential data, internal infrastructure, and possibly perform other unauthorized functions.
Excessive Permissions
Following the revelations made by Palo Alto Networks, Google has amended its official documentation to clarify the operational standards for agents and other resources in Vertex AI. Google has also encouraged organizations aiming for least-privilege access in their agentic AI setups to replace the default service agent on the Vertex Agent Engine with a personalized, dedicated service account.
Vertex AI is a platform within Google Cloud geared towards enabling organizations to build, deploy, and manage AI-powered applications. It encompasses an Agent Engine and an Application Development Kit, affording developers the ability to create autonomous agents for a variety of tasks such as database querying, API interaction, file management, and making automated decisions with limited human intervention. Many enterprises leverage these agents—or similar counterparts across other cloud platforms—to automate processes, analyze information, enhance customer service tools, and integrate AI capabilities into existing cloud services, consequently granting them extensive access permissions.
This extensive access is what creates vulnerabilities, as attackers can commandeer these agents, repurposing them as double agents that carry out malicious activities while maintaining the appearance of standard operations, as noted in Palo Alto's report.
On the Vertex AI platform, researchers discovered a default service account linked to each deployed AI agent known as the Per-Project, Per-Product Service Agent (P4SA), which comes with overly broad default permissions. The investigation revealed that if an attacker were to obtain the agent’s service account credentials, they could access sensitive segments of the customer's cloud ecosystem. The credentials could also enable the unauthorized download of proprietary container images from Google’s internal infrastructure and the identification of hardcoded links to internal Google storage buckets, setting the stage for subsequent attacks.
Significant Security Risk
"This level of access presents a considerable security threat, transforming the AI agent into a potential insider threat," cautioned Ofir Shaty, a researcher at Palo Alto Networks. "The permissions set by default on the Agent Engine could inadvertently extend beyond the GCP environment into an organization’s Google Workspace, affecting services such as Gmail, Google Calendar, and Google Drive."
To illustrate this threat more concretely, Palo Alto's researchers developed a proof-of-concept Vertex AI agent. Upon deployment, this agent was programmed to request Google’s internal metadata service, extracting the real-time credentials of the underlying P4SA service agent. Utilizing the privileges linked to these credentials enabled the researchers to escape from the AI agent’s restricted environment into the broader Google Cloud Project and further into Google’s internal systems.
Palo Alto has yet to provide insights on whether similar excessive default permissions are present on AI platforms offered by other major cloud providers. Nonetheless, Ian Swanson, VP of AI security at the company, emphasized the importance for organizations to acknowledge the inherent security risks that AI agents could unwittingly introduce.
“Agents signify a transformative leap in enterprise productivity, evolving from AI that communicates to AI that performs actions," Swanson remarked. He added that the implications extend beyond mere data leakage, with the potential for agents to execute unauthorized operations. "Organizations deploying agents must understand that effective AI security is paramount. Security teams need to identify agents throughout enterprise environments, assess potential risks](https://www.darkreading.com/application-security/ai-agents-ignore-security-policies) before implementation, and safeguard agents during runtime as they integrate into business and operational workflows," he stated.
A spokesperson from Google referenced the recent documentation update as a proactive measure to enhance organizational awareness regarding the permissions granted to agents within Vertex AI.
Share this story