Tools
Claude Code Now Features AI-Powered Code Review
Anthropic is supercharging developer workflows with the introduction of Code Review for Claude Code. This new system brings a team of AI agents to bear on Pull Requests (PRs), aiming to catch subtle bugs and improve code quality before merge. Modeled on Anthropic's own rigorous internal review processes, the feature promises a deeper, more reliable review experience.
Currently available in research preview for Team and Enterprise plans, Claude Code's new Code Review is designed to go beyond surface-level checks. It's a significant step forward for teams looking to maintain high code standards and streamline their development cycles. For a comprehensive overview, check out the official Code Review for Claude Code Announcement.
What It Does: Deep Dives by AI Agent Teams
When a new PR is opened, Claude Code's Code Review system dispatches a specialized team of AI agents. These agents work in parallel to identify potential bugs, verify them to filter out false positives, and rank them by severity. The findings are presented directly on the PR, as both a high-signal overview comment and specific in-line comments that provide immediate context.
This approach is designed for depth, making it more thorough, and consequently more resource-intensive, than the existing open-source Claude Code GitHub Action. The review process scales with the complexity of the PR: larger changes trigger more agents and a deeper analysis. On average, a complete review takes approximately 20 minutes.
Why It Matters: Elevating Code Quality and Catching Critical Bugs
The impact of this new AI-powered review system is already impressive. Internally at Anthropic, Code Review has dramatically increased the proportion of PRs receiving substantive comments from 16% to 54%. This suggests a significant boost in the depth and quality of feedback developers receive. Engineers' confidence in the system is high, with less than 1% of findings marked incorrect.
The system's ability to uncover critical issues is particularly noteworthy. For instance, it flagged a subtle one-line change in a production service that would have completely broken authentication, a bug easily missed in human review. During a ZFS encryption refactor for TrueNAS's open-source middleware, it also surfaced a pre-existing type mismatch bug in adjacent code, as detailed in this TrueNAS middleware pull request. On larger PRs (over 1,000 lines changed), 84% receive findings, averaging 7.5 issues per PR, while even small PRs (under 50 lines) see findings 31% of the time, averaging 0.5 issues. Note that while Code Review provides valuable insights, it does not approve PRs: human approval is still required to merge code. For more on how Claude is enhancing coding workflows, explore More Claude Code Articles.
How to Get Started: Availability and Pricing
Claude Code Review is currently available as a research preview for users on Team and Enterprise plans. Admins can enable Code Review directly within their Claude Code settings, install the GitHub App, and then select the repositories where they want reviews to run. For developers, once enabled, reviews run automatically on new PRs without any additional configuration.
Reviews are billed based on token usage, with average costs typically ranging from $15 to $25 per review, scaling with the PR's size and complexity. Organizations have fine-grained control over usage and spend through monthly caps, repository-level controls, and an analytics dashboard to track reviewed PRs and costs. You can find more details on Claude Pricing.
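To get a feel for how per-review costs add up against a monthly cap, here is a minimal Python sketch. The function name and the use of a flat per-review cost are illustrative assumptions; only the $15-$25 typical range comes from the announcement, and actual billing is token-based and varies with PR size.

```python
def estimate_monthly_review_spend(prs_per_month: int,
                                  avg_cost_per_review: float) -> float:
    """Rough monthly spend estimate for automated code reviews.

    Hypothetical helper: multiplies review volume by an assumed flat
    average cost. Real costs scale with token usage per PR.
    """
    return prs_per_month * avg_cost_per_review


# A team opening 120 PRs a month, bracketed by the cited $15-$25 range:
low = estimate_monthly_review_spend(120, 15.0)   # 1800.0
high = estimate_monthly_review_spend(120, 25.0)  # 3000.0
print(f"Estimated monthly spend: ${low:,.0f}-${high:,.0f}")
```

An estimate like this can help an admin pick a sensible monthly cap before enabling reviews across many repositories.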
Read more: Integrate Claude Code with GitHub Actions to start enhancing your team's code review process today.