
Anthropic acquires Vercept to advance Claude's computer use capabilities
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.

We’re launching The Anthropic Institute, a new effort to confront the most significant challenges that powerful AI will pose to our societies.

We’ve made a choice: Claude will remain ad-free. We explain why advertising incentives are incompatible with a genuinely helpful AI assistant, and how we plan to expand access without compromising user trust.

A statement from our CEO on national security uses of AI

Claude Sonnet 4.6 is a full upgrade of the model’s skills across coding, computer use, long-horizon reasoning, agent planning, knowledge work, and design.

Practical techniques for steering GPT-5.4 toward polished, production-ready frontend designs.

Using skills and GitHub Actions to optimize Codex workflows in the OpenAI Agents SDK repos.

Five stories from developers building agentic products with the Responses API in its first year.


Anthropic's response to the Secretary of War and advice for customers