Hacker-City
Technology|March 25, 2026|1 min read

Anthropic's Claude Code gets 'safer' auto mode

The feature is a middle-ground between cautious handholding and dangerous levels of autonomy.

#Anthropic #Claude Code #AI safety #autonomous AI #code execution #permissions #security


Anthropic has launched an "auto mode" for Claude Code, a new feature that lets the AI make permission-level decisions on users' behalf. The company says it offers vibe coders a safer middle ground between constant handholding and giving the model dangerous levels of autonomy.

Claude Code is capable of acting independently on users' behalf, a useful but risky capability: it can also do things users don't want, such as deleting files, sending out sensitive data, or executing malicious code and hidden instructions. Auto mode is designed to prevent this by flagging and blocking potentially risky actions before they run, then giving the agent a chance to try again or asking the user to intervene.
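The flag-block-escalate flow described above can be sketched in a few lines. This is a hypothetical illustration, not Anthropic's implementation: the function names (`is_risky`, `gate_action`), the toy deny-list, and the decision logic are all assumptions made for clarity.

```python
# Hypothetical sketch of a permission gate like the one described above.
# Not Anthropic's code: names and the deny-list are illustrative only.

RISKY_PATTERNS = ("rm -rf", "curl ", "sudo ")  # toy deny-list for illustration


def is_risky(command: str) -> bool:
    """Flag commands matching a known-risky pattern before they run."""
    return any(p in command for p in RISKY_PATTERNS)


def gate_action(command: str, user_approves=None) -> str:
    """Block flagged actions; optionally escalate to the user for approval."""
    if not is_risky(command):
        return "run"
    if user_approves is not None and user_approves(command):
        return "run"      # user explicitly intervened and approved
    return "blocked"      # the agent may revise the command and retry


print(gate_action("ls -la"))            # safe command passes
print(gate_action("rm -rf /tmp/data"))  # flagged and blocked
```

A real system would classify actions with far more context than string matching, but the shape is the same: evaluate before execution, block by default, and hand the decision back to either the agent or the user.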

Right now, auto mode is available only as a research preview for Team plan users. Anthropic says access will expand to Enterprise and API users in "the coming days."

Anthropic warns the tool is experimental and "doesn't eliminate" risk entirely, recommending developers use it in "isolated environments."
