Anthropic invests $100 million into the Claude Partner Network
We’re launching the Claude Partner Network, a program for partner organizations helping enterprises adopt Claude.
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.


A statement from our CEO on national security uses of AI

We’re upgrading our smartest model. Across agentic coding, computer use, tool use, search, and finance, Opus 4.6 is an industry-leading model, often by a wide margin.

Anthropic's response to the Secretary of War and advice for customers
We’re launching The Anthropic Institute, a new effort to confront the most significant challenges that powerful AI will pose to our societies.
We’ve made a choice: Claude will remain ad-free. We explain why advertising incentives are incompatible with a genuinely helpful AI assistant, and how we plan to expand access without compromising user trust.
An update to Anthropic's policy to mitigate catastrophic risks from AI