| Attribute | Value |
|---|---|
| Availability | Odoo Online, Odoo.sh, On Premise |
| Odoo Apps Dependencies | Discuss (mail) |
| Lines of code | 5986 |
| Technical Name | llm_thread |
| License | LGPL-3 |
| Website | https://github.com/apexive/odoo-llm |
| Versions | 16.0, 18.0 |
Easy AI Chat for Odoo
A simple, powerful AI chat module to supercharge your Odoo workflows. Connect with multiple AI providers and enjoy real-time conversations.
Why Easy AI Chat?
Everything you need for AI-powered conversations
Multiple AI Providers
Connect with OpenAI, Anthropic, Grok, Ollama, DeepSeek, and more. Switch models instantly.
Real-Time Chat
Chat instantly with AI, integrated into Odoo's mail system, with full conversation history.
Multimodal Power
Go beyond text: use advanced AI models for rich, actionable responses.
Odoo Integration
Link chats to any Odoo app or object for smarter, context-aware workflows.
Tool Selection
Choose which tools are available for each AI conversation to enhance capabilities.
Conversation History
Full message history preserved for context and reference across sessions.
Getting Started
Set up and start chatting in minutes
Install the Module
Add "Easy AI Chat" and its dependency "LLM Integration Base" to your Odoo environment.
Set Up Providers
Go to LLM → Configuration → Providers, create a new provider, and add your API key.
Fetch Models
Click "Fetch Models" in the provider form to import available AI models using your API key.
Start Chatting
Navigate to Chat in the Odoo menu, create a new thread, and ask AI anything!
Technical Details
Requirements and dependencies
Module Information
- Dependencies: llm, mail
- Category: Productivity/LLM
- Version: 18.0.1.0.0
- License: LGPL-3
Pro Tip
Get API keys from providers like OpenAI or Anthropic. Ensure your "LLM Integration Base" module is installed first.
Related Modules
Enhance your AI chat experience
LLM Thread - Easy AI Chat for Odoo
Real-time AI chat interface for Odoo with streaming responses, tool execution, and seamless integration with Odoo's mail system.
Module Type: 📦 Infrastructure
Installation
What to Install
This module is typically auto-installed as a dependency of llm_assistant.
For a complete AI chat experience:
```shell
odoo-bin -d your_db -i llm_assistant,llm_openai
```
Auto-Installed Dependencies
These are pulled in automatically:
- llm (core infrastructure)
- llm_tool (function calling)
- mail, web (Odoo base)
Common Setups
| I want to... | Install |
|---|---|
| Chat with AI in Odoo | llm_assistant + llm_openai |
| Chat with local AI | llm_assistant + llm_ollama |
| Add RAG to chat | Above + llm_knowledge + llm_pgvector |
| Connect external tools | Above + llm_mcp_server |
What is LLM Thread?
LLM Thread brings conversational AI directly into Odoo. It provides the chat UI and message management layer, bridging the frontend interface with the generation engine (llm_generate), provider APIs, and tool execution framework. Chat with AI models from OpenAI, Anthropic, Ollama, and dozens of other providers through a familiar messaging interface. Link conversations to any Odoo record, enable tool execution, and get streaming responses in real-time.
Note: This module provides the chat interface and orchestration. Actual LLM generation is handled by llm_generate module, while llm_assistant provides assistant configurations and prompt templates.
Requirements
- Python: 3.10+
- Odoo: 18.0
- Dependencies: llm, llm_tool, mail, web
- Python Packages: emoji, markdown2
Quick Start
1. Install Module
```shell
odoo-bin -d your_db -i llm_thread
```
2. Configure Provider
Navigate to LLM → Configuration → Providers:
- Create a new provider (e.g., OpenAI)
- Enter your API key
- Click Fetch Models to import available models
3. Start Chatting
Option A - Dedicated Chat Interface:
- Go to LLM → Chat
- Click New to create a conversation
- Select provider and model
- Start chatting!
Option B - From Any Record:
- Open any record (Sale Order, Contact, etc.)
- Click the AI button in the chatter
- Chat with AI in context of that record
4. Enable Tools (Optional)
To let AI execute actions in Odoo:
- Install llm_assistant module for full functionality
- In your thread, select available tools
- AI can now search records, create data, and more
Architecture
```
┌─────────────┐     EventSource      ┌──────────────┐      ┌─────────────┐
│   Browser   │ ←──────────────────→ │  Controller  │ ───→ │ llm.thread  │
│  (OWL UI)   │    Streaming SSE     │  /generate   │      │    Model    │
└─────────────┘                      └──────────────┘      └──────┬──────┘
                                                                  │
                                     ┌──────────────┐      ┌──────▼──────┐
                                     │ mail.message │ ←─── │ llm.provider│
                                     │  (storage)   │      │    (API)    │
                                     └──────────────┘      └─────────────┘
```
- Protocol: Server-Sent Events (SSE) for real-time streaming
- Endpoint: /llm/thread/generate (GET with streaming response)
- Storage: Messages stored in mail.message with llm_role field
- Locking: PostgreSQL advisory locks prevent concurrent generation
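PostgreSQL advisory locks are keyed by a 64-bit integer, so a thread's identity has to be reduced to one. The module's actual key scheme is internal; the sketch below only illustrates the idea, with a hypothetical `advisory_lock_key` helper that hashes a record identity into a signed 64-bit key.

```python
import hashlib
import struct

def advisory_lock_key(model: str, res_id: int) -> int:
    """Map a record identity to a signed 64-bit key for pg_advisory_lock.

    Illustrative only: llm_thread's real key derivation is internal.
    """
    digest = hashlib.sha256(f"{model},{res_id}".encode()).digest()
    return struct.unpack(">q", digest[:8])[0]

# The lock would then be acquired and released with SQL such as:
#   cr.execute("SELECT pg_try_advisory_lock(%s)", (key,))
#   cr.execute("SELECT pg_advisory_unlock(%s)", (key,))
key = advisory_lock_key("llm.thread", 123)
```

Because the key is derived deterministically from the thread's identity, every worker process computes the same key and the database arbitrates which one may generate.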
Message Flow
1. User sends message → POST to /llm/thread/update
2. Message saved with llm_role="user" via message_post()
3. Generation triggered → /llm/thread/generate endpoint
4. Advisory lock acquired for the thread (prevents duplicate generation)
5. Provider streams response chunks via SSE
6. Each chunk updates the message body in real time
7. Final message saved with llm_role="assistant"
8. Lock released, UI updated via bus notification
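The streamed chunks in the flow above travel to the browser as Server-Sent Events frames. A minimal, stdlib-only sketch of how a streaming endpoint could frame each event (the exact event shapes are those shown in the HTTP Endpoints section; the `sse_frame` helper is illustrative):

```python
import json

def sse_frame(event: dict) -> str:
    """Serialize one event dict as a Server-Sent Events frame."""
    return f"data: {json.dumps(event)}\n\n"

# A streaming endpoint could yield frames like these as chunks arrive:
frames = [
    sse_frame({"type": "message_create", "message": {"id": 1, "body": ""}}),
    sse_frame({"type": "message_chunk", "message": {"id": 1, "body": "Hel"}}),
    sse_frame({"type": "message_update", "message": {"id": 1, "body": "Hello!"}}),
    sse_frame({"type": "done"}),
]
```

Each frame is a `data:` line followed by a blank line, which is what the browser's `EventSource` API expects.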
Features
Streaming Responses
Real-time token-by-token streaming for immediate feedback:
```python
# Controller streams responses via SSE
@http.route("/llm/thread/generate", type="http", auth="user")
def llm_thread_generate(self, thread_id, message=None, **kwargs):
    headers = {
        "Content-Type": "text/event-stream",
        "Cache-Control": "no-cache",
        "X-Accel-Buffering": "no",  # Disable nginx buffering
    }
    return Response(
        self._llm_thread_generate(...),
        direct_passthrough=True,
        headers=headers,
    )
```
Tool Integration
Enable AI to execute tools during conversation:
```python
# Add tools to thread
thread.tool_ids = [(6, 0, [
    search_tool.id,
    create_tool.id,
    calendar_tool.id,
])]

# AI can now call these tools during generation
# Tools are executed with the user's permissions
```
Concurrent Generation Protection
PostgreSQL advisory locks prevent race conditions:
```python
# Automatic locking during generation
with thread._generation_lock():
    # Only one generation can run per thread
    for chunk in provider.chat_stream(messages):
        yield chunk
# Lock automatically released
```
API Reference
Thread Management
```python
# Create new thread
thread = env['llm.thread'].create({
    'name': 'My Chat',
    'provider_id': env.ref('llm_openai.provider_openai').id,
    'model_id': env['llm.model'].search([('name', '=', 'gpt-4')], limit=1).id,
})

# Post user message
thread.message_post(
    body="Hello, AI!",
    llm_role="user",
    author_id=env.user.partner_id.id,
)

# Post assistant message (markdown auto-converted to HTML)
thread.message_post(
    body="**Hello!** How can I help you today?",
    llm_role="assistant",
    author_id=False,
)

# Post tool result
thread.message_post(
    llm_role="tool",
    body_json={
        "tool_call_id": "call_123",
        "function": "search_records",
        "result": {"count": 5, "records": [...]}
    },
)
```
Generation
```python
# Generate response (returns generator for streaming)
for event in thread.generate(user_message_body="What's my order status?"):
    if event['type'] == 'message_create':
        print("New message:", event['message'])
    elif event['type'] == 'message_chunk':
        print("Chunk received")
    elif event['type'] == 'message_update':
        print("Final message:", event['message'])
    elif event['type'] == 'error':
        print("Error:", event['error'])
```
Context Access
```python
# Get thread context with related record
context = thread.get_context()

# Access in Jinja templates:
# {{ related_record.get_field('name') }}
# {{ related_record.get_field('partner_id') }}
# {{ related_model }}  → 'sale.order'
# {{ related_res_id }} → 123
```
HTTP Endpoints
Generate Response
```
GET /llm/thread/generate?thread_id=123&message=Hello
```
Response: Server-Sent Events stream
```
data: {"type": "message_create", "message": {...}}
data: {"type": "message_chunk", "message": {...}}
data: {"type": "message_update", "message": {...}}
data: {"type": "done"}
```
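Outside the browser, the same stream can be consumed with a few lines of stdlib Python. This sketch only parses the `data:` lines of an SSE payload into event dicts; the sample stream below is made up to match the format shown above:

```python
import json

def parse_sse(raw: str) -> list[dict]:
    """Collect the JSON payloads from the 'data:' lines of an SSE stream."""
    events = []
    for line in raw.splitlines():
        if line.startswith("data:"):
            events.append(json.loads(line[len("data:"):].strip()))
    return events

sample = (
    'data: {"type": "message_create", "message": {"id": 1}}\n\n'
    'data: {"type": "message_chunk", "message": {"id": 1}}\n\n'
    'data: {"type": "done"}\n\n'
)
events = parse_sse(sample)
```

A real client would read the response incrementally instead of buffering it, but the per-line parsing is the same.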
Update Thread
```
POST /llm/thread/<thread_id>/update
Content-Type: application/json

{"name": "New Thread Name", "model_id": 456}
```
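The update request above can be built from Python with only the standard library. The host, port, and thread id here are hypothetical, and a real call would also need an authenticated Odoo session cookie; the request is constructed but deliberately not sent:

```python
import json
import urllib.request

# Hypothetical host and thread id; a real call also needs an Odoo session cookie.
url = "http://localhost:8069/llm/thread/123/update"
payload = {"name": "New Thread Name", "model_id": 456}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send it once authenticated.
```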
Frontend Components
LLM Chat Container
Main chat interface component using Odoo's mail components:
```javascript
// llm_chat_container.js
import { Component } from "@odoo/owl";
import { Thread } from "@mail/core/common/thread";
import { Composer } from "@mail/core/common/composer";

export class LlmChatContainer extends Component {
    static template = "llm_thread.LlmChatContainer";
    static components = { Thread, Composer };
    // ...
}
```
Thread Header
Provider/model selection and thread configuration:
// llm_thread_header.js - Select provider, model, and tools
Tool Message Display
Display tool execution results:
// llm_tool_message.js - Render tool call results
Integration Examples
Add AI Chat to Custom Module
```python
class MyModel(models.Model):
    _inherit = 'my.model'

    def action_open_ai_chat(self):
        """Open AI chat linked to this record"""
        thread = self.env['llm.thread'].create({
            'name': f'AI Chat - {self.display_name}',
            'provider_id': self.env.ref('llm_openai.provider_openai').id,
            'model_id': self.env['llm.model'].search(
                [('name', '=', 'gpt-4o')], limit=1
            ).id,
            'model': self._name,
            'res_id': self.id,
        })
        return {
            'type': 'ir.actions.client',
            'tag': 'llm_chat_action',
            'params': {'thread_id': thread.id},
        }
```
Programmatic Chat
```python
# Use AI programmatically without UI
thread = env['llm.thread'].create({
    'name': 'Automated Analysis',
    'provider_id': provider.id,
    'model_id': model.id,
})

# Post question
thread.message_post(body="Analyze this data: ...", llm_role="user")

# Generate response (requires llm_generate + llm_assistant)
for event in thread.generate():
    if event['type'] == 'message_update':
        response = event['message']['body']
        break

print(response)
```
Troubleshooting
Chat not responding?
- Check provider API key is valid
- Verify model is active and supports chat
- Check Odoo logs for API errors
Streaming not working?
- Ensure the response's X-Accel-Buffering: no header reaches nginx so it does not buffer the SSE stream
- Check browser console for SSE connection errors
- Verify /llm/thread/generate endpoint is accessible
"Currently generating" error?
- Previous generation may have failed without releasing lock
- Wait a moment or refresh the page
- Check whether another tab is generating for the same thread
Tools not executing?
- Verify llm_generate and llm_assistant modules are installed
- Check tool is active and assigned to thread
- Ensure user has permission to execute tool actions
Messages not appearing?
- Check browser console for JavaScript errors
- Verify bus notifications are working
- Ensure user has access to llm.thread records
Security
- User-scoped: Each thread belongs to a user
- ACL enforced: Standard Odoo access control rules apply
- Tool permissions: Tools execute with user's permissions
- No shared locks: Advisory locks are per-thread, per-session
Resources
- GitHub Repository
- Changelog
License
This module is licensed under LGPL-3.
© 2025 Apexive Solutions LLC. All rights reserved.