How I Automated My Frontend Code Review with Claude Code
Learn how to set up Claude Code hooks to automatically review your frontend code on every commit. Catch accessibility issues, performance anti-patterns, and React best practice violations before they reach PR review.
Code review is essential but exhausting. After spending 2+ hours daily reviewing PRs on our Next.js project, I decided to automate the repetitive parts with Claude Code. Not to replace human review — but to catch the obvious issues so human reviewers can focus on architecture and business logic. Here's the setup that cut our review time by 60%.
The Problem with Manual-Only Review
Our team's PR reviews kept flagging the same issues: missing alt attributes on images, useEffect hooks with incorrect dependency arrays, inline styles instead of Tailwind classes, and components marked "use client" that should have been Server Components. These are mechanical checks that don't need senior engineer judgment; they need consistency.
Setting Up the Pre-Commit Hook
Claude Code hooks let you run automated tasks triggered by specific events. I set up a hook that runs on every commit to review only the changed files:
```json
{
  "hooks": {
    "pre-commit": {
      "command": "claude --print --prompt-file .claude/prompts/review.md -- $(git diff --cached --name-only)",
      "timeout": 30000
    }
  }
}
```
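The command above passes every staged path to the reviewer. On large commits it can help to narrow the list to frontend sources first; a minimal sketch (`frontend_only` is a hypothetical helper name, not part of the original setup):

```shell
#!/bin/sh
# Hypothetical helper (not part of the original hook config): keep only
# frontend source files so the reviewer isn't fed lockfiles, docs, or images.
frontend_only() {
  # Reads newline-separated paths on stdin; passes through .ts/.tsx/.js/.jsx/.css.
  grep -E '\.(tsx?|jsx?|css)$'
}

# Example wiring: git diff --cached --name-only | frontend_only
```

Plugged between `git diff --cached --name-only` and the `claude` invocation, this keeps review latency down when a commit touches config files or documentation alongside components.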
The review prompt file is where the magic happens. It defines what to check and how to report findings:
```markdown
Review the following files for these frontend issues:

## Critical (block commit)
- Security: dangerouslySetInnerHTML without sanitization
- Runtime errors: missing key props in lists, incorrect hook call order

## Warnings (report but do not block)
- Accessibility: missing alt text, missing aria-labels on interactive elements
- Performance: unnecessary "use client" on components that could be RSC
- Unnecessary useEffect that could be computed during render
- Inline styles that should use Tailwind classes

## Format
Output ONLY issues found, one per line:
[CRITICAL|WARNING] file:line - description
If no issues found, output: LGTM
```
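Because the output format is machine-parseable, the hook can gate the commit on it. Here is a minimal sketch of that gating step (`block_on_critical` is a hypothetical name; the original post doesn't show this script):

```shell
#!/bin/sh
# Hypothetical gating step: scan the reviewer's output for CRITICAL findings
# and fail (blocking the commit) only when one is present. WARNING lines and
# the "LGTM" case pass through with exit 0.
block_on_critical() {
  # $1: full review output in the "[CRITICAL|WARNING] file:line - description" format
  if printf '%s\n' "$1" | grep -q '^\[CRITICAL\]'; then
    echo "Commit blocked: critical issues found." >&2
    return 1
  fi
  return 0
}
```

A nonzero exit from a git pre-commit hook aborts the commit, which is exactly the error-vs-warning split described below: CRITICAL blocks, WARNING only reports.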
Results After One Month
The automated review catches roughly 40% of the issues that would have been flagged in manual review. The biggest wins were accessibility violations (caught 95% of missing alt texts) and unnecessary client components (caught 80%). It does not catch architectural concerns or business logic errors — and it should not. Those need human judgment.
Fine-Tuning the Review Prompt
The initial prompt was too noisy — it flagged every minor issue and developers started ignoring it. The fix was separating critical issues (that block the commit) from warnings (logged but non-blocking). This mirrors how ESLint separates errors from warnings, and developers immediately understood the model.
Adding a PR Summary Generator
I also added a hook that generates PR descriptions automatically by analyzing the diff. This saves another 5-10 minutes per PR and produces more consistent, informative descriptions than most developers write manually.
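A config for that second hook could mirror the commit-review one. This is a sketch, not the author's exact setup: the `pre-push` event name, the `pr-summary.md` prompt path, and the `main` base branch are all assumptions.

```json
{
  "hooks": {
    "pre-push": {
      "command": "git diff main...HEAD | claude --print --prompt-file .claude/prompts/pr-summary.md",
      "timeout": 60000
    }
  }
}
```

Piping the diff on stdin avoids shoving a potentially large diff into the argument list, and the longer timeout reflects that summarizing a full branch diff takes more time than reviewing a few staged files.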
Key Takeaways
- Automate mechanical code review checks (a11y, performance, conventions) with Claude Code hooks
- Keep the review prompt focused — separate critical issues from warnings
- Do not try to replace human review — automate the boring parts so humans focus on what matters
- The pre-commit hook adds about 10-15 seconds per commit but saves hours in PR review
- Review your prompt quarterly — update it as your team's common mistakes evolve