AI Tool Code Review
Systematic code review for AI-generated code from tools like Lovable, Bolt, v0, and Cursor. Catches the specific anti-patterns these tools produce that a standard review would miss.
The Problem With AI-Generated Code
Tools like Lovable, Bolt, v0, and Cursor are genuinely useful for generating code quickly. But they produce a specific category of mistakes that a standard code review wouldn't think to check for.
A human developer creating a new button component probably checks first whether one already exists. An AI tool almost certainly doesn't. A human developer who needs a list of items knows to fetch them from the database. An AI tool might generate a realistic-looking hardcoded array that passes a visual review but will never update when the data changes.
These aren't bugs in the traditional sense. The code works. It passes tests. It looks professional. But it's structurally wrong in ways that compound over time.
The Anti-Pattern Checklist
This skill checks for seven specific anti-patterns that AI tools produce repeatedly:
- Orphaned files — created but never imported anywhere
- Static data that should be dynamic — hardcoded arrays that should be API calls
- Not reusing existing components — new components when existing ones would work
- Not reusing existing hooks/utilities — duplicate utility functions
- DRY violations — copy-pasted code across files
- Hardcoded values — literal values instead of existing constants
- Ignoring project conventions — different patterns from the rest of the codebase
A standard code review might catch the last three. The first four are the AI-specific ones that slip through, because catching them requires knowledge of the existing codebase that the AI tool didn't have when it generated the code.
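Several of these checks can be approximated mechanically. As a sketch of the first anti-pattern, the snippet below flags files that no sibling file imports; the flat directory layout, `.tsx` extension, and relative-import pattern are assumptions, so adapt the glob and regex to your project.

```shell
# Sketch: flag .tsx files in a directory that no sibling file imports
# (anti-pattern 1: orphaned files). Assumes relative ES-module imports
# like `from "./button"`. Entry points will be flagged too, so keep a
# whitelist for those.
find_orphans() {
  dir="$1"
  for f in "$dir"/*.tsx; do
    [ -e "$f" ] || continue
    name=$(basename "$f" .tsx)
    # A sibling importing ./<name> means the file is referenced somewhere
    if ! grep -l "/${name}['\"]" "$dir"/*.tsx 2>/dev/null | grep -qv "^${f}\$"; then
      echo "possibly orphaned: $f"
    fi
  done
}
```

Run it against the directory your AI tool writes into; anything it reports is a candidate for deletion or for wiring into an actual import.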
Works for Any AI Coding Tool
The skill is configured with a git author filter. For Lovable, that's `gpt-engineer`. For other tools, check your git log for the author name they use. For tools like Cursor that commit as you, filter by branch name or date range instead.
The anti-patterns are the same regardless of which tool generated the code. The specific tool matters less than the structural review.
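Scoping the review to the AI tool's commits might look like the sketch below; the `gpt-engineer` author name is Lovable's, and the alternative filters in the comment are examples, not fixed values.

```shell
# Sketch: list the files touched by the AI tool's commits so the review
# can focus on them. Pass the git author your tool uses.
ai_changed_files() {
  author="$1"
  git log --author="$author" --name-only --pretty=format: | sort -u | sed '/^$/d'
}

# For tools like Cursor that commit as you, scope by date or branch
# instead, e.g.:
#   git log --since="2 days ago" --name-only --pretty=format:
#   git diff --name-only main...my-ai-branch   # branch name is an example
```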
The Review Log
An optional feature: the skill can append findings to a review log file. Over time, this reveals which anti-patterns a specific tool produces most frequently. If Lovable consistently creates orphaned files but never violates DRY, you know where to focus your review time. Pattern tracking turns a reactive review into a proactive checklist.
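A minimal version of that log could be a tab-separated file; the `REVIEW_LOG` path handling and the line format below are assumptions for illustration, not the skill's actual format.

```shell
# Sketch: append one finding per line to a review log, then summarize
# which anti-patterns recur. REVIEW_LOG and the tab-separated layout
# (date, anti-pattern label, file) are assumptions.
log_finding() {
  # args: anti-pattern label, offending file
  printf '%s\t%s\t%s\n' "$(date +%F)" "$1" "$2" >> "${REVIEW_LOG:-review-log.tsv}"
}

# Frequency summary: which anti-patterns does this tool produce most?
summarize_log() {
  cut -f2 "${REVIEW_LOG:-review-log.tsv}" | sort | uniq -c | sort -rn
}
```

The summary is what turns the log into a proactive checklist: the top line tells you which anti-pattern to check first on the next review.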
Setup
- Download the skill file below
- Create a directory: `.claude/skills/ai-tool-code-review/`
- Save the file as `SKILL.md` in that directory
- Update the `AI_TOOL_AUTHOR` to match your AI tool's git author
- Optionally set a `REVIEW_LOG` path for pattern tracking
Ready to use this skill?
Drop it into your `.claude/skills/` folder and you're ready.
This guide was my gift to you. I want everyone to be able to punch above their weight class by leveraging AI to do more with what they've got.
If this helped and you want to know how I help companies through AI consulting, mentoring, or workshops — sign up for my email list or reach out below.