| Availability | Odoo Online, Odoo.sh, On Premise |
| Odoo Apps Dependencies | Project (project), Discuss (mail) |
| Lines of code | 790 |
| Technical Name | hst_task_ai_review |
| License | LGPL-3 |
| Website | http://www.hsxtech.net |
Task AI Code Review
Description
Community + Enterprise
An AI-powered code review module for Odoo 19 project tasks using the Anthropic Claude API. When a developer moves a task to the "AI Code Review" stage, the module automatically extracts source code from an uploaded ZIP file, sends it along with the task requirements to Claude, and generates a structured review containing a gap analysis, actionable recommendations with file and line references, and a pass/partial/fail readiness status. Supports up to 2 review runs per task with comparative analysis between runs, automatic stage transitions, and developer reassignment on non-pass results.
Setup & Stage Configuration
After installing this module, the "AI Code Review" stage is automatically created and added to all your existing projects. However, you need to position this stage in your project pipeline yourself to match your team's workflow. The stage placement determines when the AI review is triggered and what happens after it completes.
Step 1: Configure Claude API Settings
- → Go to Settings → Project → AI Review (Claude)
- → Enter your Anthropic API key in the Claude API Key field
- → Select your preferred Claude model (Sonnet 4, Opus 4, or Haiku 4.5) — defaults to Claude Sonnet 4 if not set
- → Click Save
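The model fallback described in Step 1 can be sketched as a plain lookup with a default. This is an illustrative sketch: the system-parameter key and model identifier strings below are assumptions, not the module's actual values.

```python
# Sketch of the "defaults to Claude Sonnet 4 if not set" behavior.
# The parameter key and model name are illustrative assumptions.
DEFAULT_MODEL = "claude-sonnet-4"

def get_review_model(config_params: dict) -> str:
    """Return the configured Claude model, falling back to Sonnet 4."""
    return config_params.get("hst_task_ai_review.claude_model") or DEFAULT_MODEL
```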
Step 2: Position the "AI Code Review" Stage in Your Pipeline
- → Open your project's Kanban view where you see all your stages (columns)
- → The "AI Code Review" stage is created with sequence 25 by default — you can drag and reorder it to place it wherever you want in your pipeline
- → Important: The stage must be placed directly after the "In Progress" stage, because the module only allows tasks to enter "AI Code Review" from the "In Progress" stage
- → Recommended pipeline order: New → In Progress → AI Code Review → PM Review → Done
- → The stage that comes immediately after "AI Code Review" (by sequence order) is where tasks will be automatically moved when they pass the review — so place your next review or done stage right after it
- → If a task fails the review, it is automatically moved back to "In Progress" regardless of stage order
- → If the "AI Code Review" stage does not appear in your project, go to Project → Configuration → Stages, find "AI Code Review", and add your project to its Projects field
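The "next stage by sequence" rule above can be sketched as a small pure function over (name, sequence) pairs. The stage names and sequences are illustrative; only the ordering rule mirrors the module's behavior.

```python
def next_stage_after(stages, current_name="AI Code Review"):
    """Pick the stage that follows `current_name` by sequence order.

    `stages` is a list of (name, sequence) pairs standing in for the
    project's stage records. Returns None if there is no later stage.
    """
    ordered = sorted(stages, key=lambda s: s[1])
    names = [name for name, _ in ordered]
    idx = names.index(current_name)
    if idx + 1 < len(ordered):
        return ordered[idx + 1][0]
    return None
```

This is why the stage placed immediately after "AI Code Review" in your Kanban receives passing tasks, regardless of what it is called.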
Step 3: How the Review Gets Triggered
- → A developer works on a task in the "In Progress" stage
- → They upload a .zip file of their module code in the Code Review Module field on the task form
- → They write a clear task description explaining what the module should do (these become the requirements the AI checks against)
- → They drag the task (or change the stage) to the "AI Code Review" column
- → The module automatically validates the transition, extracts the code, calls Claude API, and writes the review results — no manual button click needed
- → Alternatively, if the task is already in the "AI Code Review" stage, the developer can click the "Run AI Code Review" button in the AI Code Review tab to trigger a review manually
Step 4: What Happens After the Review
- → If the review passes: The task is automatically moved to the next stage after "AI Code Review" in your pipeline (e.g., PM Review or Done)
- → If the review fails or is partial: The task is automatically moved back to "In Progress" and reassigned to the developer who submitted it, so they can fix the identified gaps and resubmit
- → The developer can fix the issues, re-upload the updated .zip file, and drag the task back to "AI Code Review" for a 2nd review run
- → On the 2nd run, the AI compares its new findings against the previous review and reports what was resolved, what is still open, and what is new
- → A maximum of 2 AI review runs is allowed per task — after that, the task cannot be moved to "AI Code Review" again unless the review is reset
- → If the task fails after both runs, an activity is escalated to the developer's manager for manual intervention
Example Pipeline Configurations
- → Development team: New → In Progress → AI Code Review → PM Review → QA Testing → Done
- → Small team: New → In Progress → AI Code Review → Done
- → Strict review: New → In Progress → AI Code Review → Peer Review → PM Review → Staging → Done
- → In all cases, "AI Code Review" must come directly after "In Progress" — the module enforces this rule
Features
AI Code Review Stage
- → Adds a dedicated "AI Code Review" stage to the project task pipeline with sequence 25, positioned before typical PM Review stages
- → Post-install hook automatically creates the stage and associates it with all existing projects so it appears in their kanban views immediately
- → Stage is created as noupdate data, so manual customizations are preserved across module upgrades
Stage Transition Validation
- → Tasks can only be moved to "AI Code Review" from the "In Progress" stage — moving from any other stage raises a validation error
- → A .zip file must be attached in the Documents field before moving to the AI review stage
- → Maximum of 2 AI review runs allowed per task — attempting a third raises a validation error
- → Redundant writes that set stage_id to its current value do not re-trigger the review, preventing double-counting of run attempts
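The four rules above can be condensed into one decision function. This is a sketch of the logic, not the module's actual `write()` override; the return labels and error wording are assumptions.

```python
MAX_RUNS = 2  # hard cap on AI review runs per task

def validate_transition(from_stage: str, to_stage: str,
                        has_zip: bool, run_count: int) -> str:
    """Mirror the stage-transition rules described above (sketch).

    Returns "skip" for redundant writes, "allow" for moves that do not
    involve the AI stage, "trigger" when a review should start; raises
    ValueError in the cases the module blocks with a validation error.
    """
    if from_stage == to_stage:
        return "skip"      # redundant write: never re-trigger or re-count
    if to_stage != "AI Code Review":
        return "allow"
    if from_stage != "In Progress":
        raise ValueError("Tasks may enter AI Code Review only from In Progress")
    if not has_zip:
        raise ValueError("Attach a .zip of the module before the AI review")
    if run_count >= MAX_RUNS:
        raise ValueError("Maximum of 2 AI review runs reached")
    return "trigger"
```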
Automatic Review Trigger
- → AI code review is automatically triggered when a task is moved to the "AI Code Review" stage via the write() override
- → Stores the current assignee as "Previous Assignee" before the review begins, for reassignment on non-pass results
- → On the 2nd review run, the previous review summary is saved separately for side-by-side comparison
- → AI run counter is incremented automatically on each stage transition
- → If the automatic review fails, an error state is set and a detailed failure message is posted to the task chatter
Manual Review Action
- → "Run AI Code Review" button available in the AI Code Review tab for manual triggering with a confirmation dialog
- → Button is hidden while a review is in "Processing" state to prevent duplicate submissions
- → "Reset Review" button clears all review data (summary, gaps, recommendations, run count) back to draft state
- → Reset button is only visible when the review is in "Done" or "Error" state
ZIP File Extraction
- → Extracts source code files from uploaded ZIP archives for AI analysis
- → Supports .py, .xml, .csv, .js, .scss, .css, .html, .txt, .md, .rst, .json, .yaml, .yml, .cfg, .conf, and .ini file extensions
- → Automatically skips __pycache__/, .git/, node_modules/, .pyc files, image files (.png, .jpg, .jpeg, .gif, .ico, .svg), font files (.woff, .woff2, .ttf, .eot), translation files (.po, .pot), and test directories (tests/, test/)
- → Files are sorted by Odoo module priority: __manifest__ and __init__ first, then models/, views/, security/, wizard/, controllers/, data/, static/, and report/ directories
- → Enforces a 50 KB per-file size limit — files exceeding this are skipped with a notice
- → Enforces a 300 KB total extraction size limit — remaining files are truncated with a notice once the limit is reached
- → Line numbers are added to every extracted file so the AI can reference exact line numbers in its recommendations
- → Also supports plain text/code file attachments and PDF file attachments (requires PyPDF2)
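The extraction rules above (extension whitelist, skipped directories, 50 KB per-file and 300 KB total limits, line numbering) can be sketched with the standard `zipfile` module. Helper names are assumptions; the real module's internals may differ.

```python
import io
import zipfile

# Illustrative sketch of the extraction rules described above.
ALLOWED_EXTS = (".py", ".xml", ".csv", ".js", ".scss", ".css", ".html",
                ".txt", ".md", ".rst", ".json", ".yaml", ".yml",
                ".cfg", ".conf", ".ini")
SKIP_DIRS = ("__pycache__/", ".git/", "node_modules/", "tests/", "test/")
MAX_FILE_BYTES = 50 * 1024      # per-file limit (50 KB)
MAX_TOTAL_BYTES = 300 * 1024    # total extraction budget (300 KB)

def number_lines(text: str) -> str:
    """Prefix every line with its number so the AI can cite exact locations."""
    return "\n".join(f"{i}: {line}" for i, line in enumerate(text.splitlines(), 1))

def extract_sources(zip_bytes: bytes) -> dict:
    """Return {path: numbered_source} for reviewable files, enforcing limits."""
    out, total = {}, 0
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for info in zf.infolist():
            name = info.filename
            if info.is_dir() or any(d in name for d in SKIP_DIRS):
                continue
            if not name.lower().endswith(ALLOWED_EXTS):
                continue
            if info.file_size > MAX_FILE_BYTES:
                continue               # real module skips these with a notice
            if total + info.file_size > MAX_TOTAL_BYTES:
                break                  # truncate once the budget is spent
            total += info.file_size
            out[name] = number_lines(zf.read(name).decode("utf-8", "replace"))
    return out
```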
Claude API Integration
- → Integrates with Anthropic Claude API using the official anthropic Python package
- → Configurable API key stored securely as an Odoo system parameter via Settings
- → Model selection in Settings: Claude Sonnet 4, Claude Opus 4, or Claude Haiku 4.5
- → System prompt instructs Claude to act as a senior Odoo technical reviewer
- → API response capped at 4096 max tokens per review
- → Clear error messages if the anthropic package is not installed or API key is not configured
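A request to the Messages API of the `anthropic` package can be assembled as below. The prompt wording and function name are assumptions for illustration; the returned dict is shaped for `anthropic.Anthropic().messages.create(**kwargs)`.

```python
MAX_TOKENS = 4096  # response cap noted above

# The exact system-prompt wording is an assumption.
SYSTEM_PROMPT = ("You are a senior Odoo technical reviewer. "
                 "Assess the submitted module against the task requirements.")

def build_review_request(model: str, requirements: str, sources: dict,
                         previous_summary: str = "") -> dict:
    """Assemble kwargs for the anthropic client's messages.create call."""
    code_blob = "\n\n".join(f"# FILE: {path}\n{body}"
                            for path, body in sources.items())
    prompt = f"Requirements:\n{requirements}\n\nCode:\n{code_blob}"
    if previous_summary:  # 2nd run: include prior findings for comparison
        prompt += f"\n\nPrevious review summary:\n{previous_summary}"
    return {
        "model": model,
        "max_tokens": MAX_TOKENS,
        "system": SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": prompt}],
    }
```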
Three-Dimensional Review Analysis
- → Requirement Alignment: Cross-checks each requirement from the task description against the submitted code, marking each as IMPLEMENTED, PARTIALLY IMPLEMENTED, or MISSING
- → Code Quality: Evaluates against Odoo coding standards (ORM usage, security, naming conventions, decorators, XML view structure, access rights) and Python best practices (type hints, f-strings, exception handling, context managers, dataclasses, pathlib)
- → Functional Validation: Verifies business logic correctness, edge case handling, model relationships, computed fields, constraints, and onchange methods
- → Test files and test directories are explicitly excluded from the review scope
Structured Review Output
- → AI returns a structured JSON response parsed into separate Odoo records
- → Readiness Status: Pass (no critical/major gaps), Partial (major gaps exist), or Fail (critical gaps exist) — displayed as a color-coded badge
- → Review Summary: HTML-formatted summary with a requirement-by-requirement checklist and overall assessment
- → Gap Analysis: Each gap stored as a task.ai.gap record with category (Requirement Alignment / Code Quality / Functional Validation), severity (Critical / Major / Minor displayed as color-coded badges), description referencing specific files and methods, and an Open/Resolved status tracker
- → Recommendations: Each recommendation stored as a task.ai.rec record with priority (High / Medium / Low displayed as color-coded badges), file path with approximate line number reference, and a specific actionable improvement suggestion
- → Gaps are sorted by severity (Critical first) and recommendations are sorted by priority (High first)
- → Response parsing handles markdown code block wrapping and validates all fields against expected enum values with safe fallbacks
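The fence-stripping and enum-fallback behavior above can be sketched in a few lines. The field names and fallback value are assumptions; only the tolerant-parsing idea mirrors the module.

```python
import json

READINESS = {"pass", "partial", "fail"}  # expected enum values

def parse_review(raw: str) -> dict:
    """Parse the model's JSON reply, tolerating ```json fences and
    falling back to a safe value for unexpected readiness statuses."""
    text = raw.strip()
    if text.startswith("```"):
        # drop the ```json ... ``` wrapping some replies arrive in
        text = text.split("\n", 1)[1]
        text = text.rsplit("```", 1)[0]
    data = json.loads(text)
    status = str(data.get("readiness", "")).lower()
    data["readiness"] = status if status in READINESS else "fail"
    return data
```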
Comparative Analysis (2nd Run)
- → On the 2nd review run, the previous review summary is included in the AI prompt for comparison
- → AI is instructed to compare current findings with previous findings and mark each previously identified gap as RESOLVED, STILL OPEN, or PARTIALLY RESOLVED
- → AI identifies any NEW issues not found in the previous review
- → Summary includes a "Previous vs Current" comparison section showing progress made
- → Previous Review Summary is displayed in a separate section on the task form for side-by-side reference
Post-Review Automation
- → On Pass: Task is automatically moved to the next stage (by sequence order within the project) with a chatter notification
- → On Partial/Fail: Task is automatically moved back to "In Progress" stage
- → On Partial/Fail: Task is reassigned to the previous developer (the one who submitted it for review) with a chatter notification listing the action taken
- → Manager Escalation: After 2 failed review runs, an activity (To-Do) is automatically scheduled for the developer's manager, unless the developer has direct reports themselves (checked via HR employee hierarchy, not job title)
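The escalation rule above (manager via hierarchy, not job title) can be sketched with plain dicts standing in for `hr.employee` records. Field names here are illustrative assumptions.

```python
def escalation_target(developer: dict, run_count: int, passed: bool):
    """Decide whom to escalate to after the final failed run (sketch).

    `developer` stands in for an hr.employee record, e.g.
    {"name": ..., "manager": ..., "direct_reports": [...]}.
    Escalates only after both runs fail, and only when the developer
    has no direct reports of their own (hierarchy check, not job title).
    """
    if passed or run_count < 2:
        return None
    if developer.get("direct_reports"):
        return None  # developer is a manager themselves; no escalation
    return developer.get("manager")
```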
Task Form UI
- → "Code Review Module" many2many binary field added to the task form header for uploading ZIP files and other documents
- → Dedicated "AI Code Review" tab added inside the task form notebook
- → Review State displayed as a color-coded badge: Draft (blue), Processing (orange), Done (green), Error (red)
- → Readiness Status displayed as a color-coded badge: Pass (green), Partial (orange), Fail (red), Not Reviewed (muted)
- → Last Review Date shown alongside the status badges
- → Review Runs counter showing how many times AI review has been executed
- → Previous Assignee field shown only when a previous assignee exists
- → Review Summary section visible only when review state is "Done"
- → Previous Review Summary section visible only when a previous summary exists (2nd run)
- → Identified Gaps displayed in an inline editable list with Category, Severity (color-coded badge), Description, and Status (color-coded badge) columns
- → Recommendations displayed in an inline list with Priority (color-coded badge), File/Line, and Suggestion columns
Chatter Integration
- → Review completion is logged in the task chatter with run number (e.g., Run 1/2), status, gap count, and recommendation count
- → Pass result posts a notification that the task has been moved to the next stage
- → Non-pass result posts a notification that the task has been moved back to "In Progress" with the reassigned developer's name
- → Review errors are posted to chatter with the error details for debugging
- → All chatter messages are posted as internal notes (mail.mt_note subtype)
Settings & Configuration
- → Claude API Key field added to Project settings (stored as ir.config_parameter, displayed as a password field)
- → Claude Model selection dropdown added to Project settings with options: Claude Sonnet 4, Claude Opus 4, Claude Haiku 4.5
- → Default model is Claude Sonnet 4 if not configured
- → Settings appear inside the Project app section under an "AI Review (Claude)" block
Security & Access Control
- → Project Users can read, write, and create gap analysis and recommendation records
- → Project Managers have full CRUD access (read, write, create, delete) to gap analysis and recommendation records
- → API key is accessed via sudo() to ensure it is only readable by system-level operations
Technical Details
- → Odoo version: 19.0
- → Module version: 19.0.2.0.0
- → Depends on: project module
- → External Python dependency: anthropic (pip install anthropic)
- → Optional Python dependency: PyPDF2 (for PDF attachment extraction)
- → License: LGPL-3
- → Works on both Odoo Community and Enterprise editions
Screenshots
AI Code Review Tab
Review Results
Gaps and Recommendations