AI AGENTS
Anthropic's study measures how much independence users grant AI agents
Anthropic analyzed millions of interactions across Claude Code and its public API to measure how much independence people grant AI agents in practice. The longest Claude Code sessions nearly doubled in duration over three months, from under 25 minutes to over 45 minutes of continuous work, an increase suggesting that existing models can handle more autonomy than they currently exercise. Experienced users grant Claude more independence, enabling auto-approve in over 40 percent of sessions, but they also interrupt more frequently, indicating a shift from reviewing each action to monitoring and intervening only when needed.
- Longest Claude Code sessions increased from under 25 minutes to over 45 minutes
- Experienced users enable auto-approve in over 40 percent of sessions
- Software engineering accounts for nearly 50 percent of agent activity on the public API
- Claude Code asks for clarification more than twice as often as humans interrupt it