---
name: ai-tool-code-review
description: "Systematic code review for AI-generated code from tools like Lovable, Bolt, v0, and Cursor. Catches AI-tool-specific anti-patterns that standard reviews miss."
---

# AI Tool Code Review

Review code generated by AI coding tools (Lovable, Bolt, v0, Cursor, etc.) for the specific anti-patterns these tools commonly produce.

## Configuration

**Set the git author filter for your AI tool:**

```
AI_TOOL_AUTHOR: "gpt-engineer"
```

Common author values by tool:

| Tool | Git Author |
|------|-----------|
| Lovable | `gpt-engineer` |
| Bolt | Check your git log |
| Cursor | Usually your own git author |
| v0 | Check your git log |

For tools that commit as you (e.g., Cursor), use a branch name filter or date range instead.

**Review log location (optional):**

```
REVIEW_LOG: ./docs/ai-code-reviews.md
```

## When to Use

- After an AI tool generates or modifies code
- Before merging AI-generated branches into main
- As a periodic audit of accumulated AI-generated changes
- When something feels off but you can't pinpoint why

## AI-Tool-Specific Anti-Patterns

Standard code reviews focus on logic and style. This review focuses on the structural mistakes AI tools make repeatedly. These are the patterns to check for:

### 1. Orphaned Files

Files that were created but never imported or referenced anywhere.

```bash
# Find files added in the AI tool's commits (adjust the range to your branch)
git diff --name-only --diff-filter=A main..HEAD

# Then check whether each new file is actually imported/used
```

### 2. Static Data That Should Be Dynamic

Hardcoded arrays, objects, or lists that should come from an API call, database query, or configuration file. AI tools love generating realistic-looking static data because it makes demos look good.

**Look for:**

- Large arrays of objects with realistic-looking data
- Hardcoded lists that match database table contents
- Mock data in production components

### 3. Not Reusing Existing Components

Creating a new component when an existing one does the same thing. AI tools don't always know what's already in the codebase.

**Look for:**

- New button, card, modal, or form components when existing ones could be used
- Duplicate styling patterns
- Components with near-identical prop interfaces to existing ones

### 4. Not Reusing Existing Hooks/Utilities

Creating new utility functions or hooks when equivalent ones already exist in the codebase.

**Look for:**

- New API-calling functions when there's an existing HTTP client
- New formatting utilities when existing ones handle the same case
- Duplicate state management patterns

### 5. DRY Violations

Copy-pasted code blocks across files. AI tools often duplicate rather than abstract.

**Look for:**

- Identical or near-identical code blocks in multiple files
- Components that differ by only a few props but share 90% of their code
- Repeated fetch/transform/render patterns

### 6. Hardcoded Values Instead of Existing Constants

Using literal values instead of referencing existing constants, enums, or configuration.

**Look for:**

- Hardcoded URLs when a base URL constant exists
- Hardcoded strings that match existing enum values
- Magic numbers that have named constants elsewhere

### 7. Ignoring Project Conventions

Not following the project's established patterns for file naming, folder structure, state management, error handling, etc.

**Look for:**

- A different file naming convention than the rest of the project
- A different state management approach than existing features
- Missing error handling that other similar features include
- Different import patterns or module resolution

## Process

### Step 1: Find AI Tool Commits

```bash
# Find commits by the AI tool's git author
git log --author="$AI_TOOL_AUTHOR" --oneline --since="<date>"

# Or find by branch: commits on the current branch that aren't in main
git log main.. --oneline
```

### Step 2: Get the Diff Range

```bash
# Get the full diff of AI-generated changes
FIRST_COMMIT=$(git log --author="$AI_TOOL_AUTHOR" --reverse --format="%H" | head -1)
LAST_COMMIT=$(git log --author="$AI_TOOL_AUTHOR" --format="%H" | head -1)

# ~1 diffs from the parent of the first AI commit; drop the ~1 if that
# commit is the repository's root commit
git diff $FIRST_COMMIT~1..$LAST_COMMIT
```

### Step 3: Dispatch Code Review

For each anti-pattern category above:

1. Examine the diff for instances of that pattern
2. Check the existing codebase for components/utilities/constants that should have been reused
3. Note specific findings with file paths and line numbers

### Step 4: Present Findings

```markdown
## AI Tool Code Review — [Date]

**Tool:** [which AI tool]
**Commits reviewed:** [range]
**Files changed:** [count]

### Critical Issues
- [ ] [Issue description with file path and line number]

### Important Issues
- [ ] [Issue description]

### Minor Issues
- [ ] [Issue description]

### Summary
[Overall assessment: how much cleanup is needed, recurring patterns, recommendations]
```

### Step 5: Fix Issues

Work through findings with the user:

- Critical issues first (orphaned files, broken references)
- Then important issues (DRY violations, unused components)
- Then minor issues (convention mismatches)

### Step 6: Update Review Log (Optional)

Append findings to the review log for pattern tracking:

```markdown
## [Date] — [Tool Name] Review

**Commits:** [range]
**Issues found:** [count by severity]
**Recurring patterns:** [which anti-patterns appeared again]
**Action taken:** [what was fixed]
```

Over time, the review log reveals which anti-patterns a specific tool produces most frequently, letting you focus reviews on the highest-risk areas.

## Tips

- **Run this before merging, not after.** It's much easier to fix issues on a feature branch than to untangle them from main.
- **Check the whole codebase, not just the diff.** The most common issue — not reusing existing components — requires knowing what already exists.
- **Track patterns over time.** If a tool consistently creates orphaned files, add a pre-merge check for it.
- **Don't assume AI-generated code works just because it looks right.** Static data that looks realistic is a common trap.
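
## Example Detection Snippets

The orphaned-file check from anti-pattern 1 can be scripted. The sketch below is a heuristic, not part of any tool: the `find_orphans` name, the `main..HEAD` default range, and the "basename appears in some other file" test for a reference are all assumptions to adapt to your project's import style.

```shell
# find_orphans RANGE: list files added in RANGE that nothing else references.
find_orphans() {
  range="${1:-main..HEAD}"
  git diff --name-only --diff-filter=A "$range" | while read -r f; do
    base=$(basename "$f")
    name="${base%.*}"   # Button.tsx -> Button
    # A "reference" here is any other file in the repo mentioning the basename.
    if ! grep -rIq --exclude-dir=.git --exclude="$base" -- "$name" .; then
      echo "possibly orphaned: $f"
    fi
  done
}
```

This will miss renames on import and produce false positives for very generic names; treat its output as a list of files to inspect, not a verdict.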
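
Anti-pattern 2 (static data) can be screened for mechanically too. In this sketch, the `static_data_check` name, the object-literal regex, and the default threshold of 5 are illustrative assumptions; tune all three for your codebase. It flags changed JS/TS files that are dense with inline object literals, a common shape for generated mock data.

```shell
# static_data_check RANGE [THRESHOLD]: flag changed JS/TS files containing
# many inline object literals.
static_data_check() {
  range="${1:-main..HEAD}"
  threshold="${2:-5}"
  git diff --name-only "$range" -- '*.js' '*.jsx' '*.ts' '*.tsx' |
  while read -r f; do
    [ -f "$f" ] || continue
    # Count lines that open an object literal with a typical data key.
    count=$(grep -cE '\{ *(id|name|title) *:' "$f" || true)
    if [ "${count:-0}" -ge "$threshold" ]; then
      echo "possible static data: $f ($count inline object literals)"
    fi
  done
}
```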
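
For anti-pattern 6, hardcoded URLs are the easiest case to grep straight out of the diff: every added line containing a literal `http(s)://` is worth a look. The `flag_urls` name and the endpoint in the usage note are hypothetical.

```shell
# flag_urls RANGE: print added lines in RANGE that embed a literal URL,
# which often signals a hardcoded endpoint where a base-URL constant exists.
flag_urls() {
  git diff -U0 "${1:-main..HEAD}" | grep '^+' | grep -v '^+++' | grep -E 'https?://'
}
```

A hit such as `+fetch("https://api.example.com/users")` is the cue to check whether the project already defines an `API_BASE`-style constant that the line should reference instead.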