| Availability | Odoo Online, Odoo.sh, On Premise |
| Lines of code | 743 |
| Technical Name | os_ai |
| License | LGPL-3 |
| Website | https://github.com/alainbloos/odoo_os_ai |
| Versions | 15.0, 16.0, 17.0, 18.0, 19.0 |
Overview
OS AI Base is the core infrastructure module that brings AI capabilities to your Odoo instance. Configure multiple AI providers, route requests to the right model based on capabilities, and keep full audit logs of every LLM call — all from within Odoo.
Powered by litellm, the module supports 100+ LLM providers out of the box, including cloud APIs and locally hosted models via Ollama.
Features
| ⚙ | **Multi-Provider Management**: Configure any number of AI providers with priority-based sequencing. Switch models without changing code. |
| ⚡ | **Capability Auto-Detect**: One click to detect what each model can do: text generation, vision, image generation, and image editing. |
| 🔌 | **Capability-Based Routing**: Requests are automatically routed to the best available provider based on what the task requires. |
| 📋 | **Full Logging & Audit Trail**: Every LLM call is recorded: provider, model, prompts, response, token usage, cost in USD, duration, and success/error status. |
| 📝 | **Reusable Prompt Templates**: Manage prompts from the UI. Templates are auto-created when AI-enabled modules are installed and can be edited without touching code. |
| 🏠 | **Local Models via Ollama**: Run models locally with zero cloud costs. No API key needed. Works with Qwen, Gemma, Phi, Llama, and many more. |
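The priority-based routing described above can be sketched in a few lines of plain Python. This is an illustrative model only; the field names (`sequence`, `capabilities`) mirror the concepts in the feature list, not the module's actual schema.

```python
# Illustrative sketch of capability-based routing with priority sequencing:
# pick the lowest-sequence provider that supports the required capability.
# Field names are assumptions, not the module's real data model.

def pick_provider(providers, capability):
    """Return the highest-priority provider supporting `capability`."""
    candidates = [p for p in providers if capability in p["capabilities"]]
    if not candidates:
        raise LookupError(f"no provider supports {capability!r}")
    # Lower sequence number means higher priority (tried first).
    return min(candidates, key=lambda p: p["sequence"])

providers = [
    {"name": "OpenAI", "sequence": 10, "capabilities": {"text", "vision", "image_gen"}},
    {"name": "Ollama", "sequence": 5,  "capabilities": {"text"}},
]

print(pick_provider(providers, "text")["name"])       # Ollama (sequence 5 beats 10)
print(pick_provider(providers, "image_gen")["name"])  # OpenAI (only capable provider)
```

A text request falls through to the cheap local model, while an image-generation request is routed past it to the first provider that can actually handle it.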
Supported Providers
All routing is handled by litellm, which supports 100+ LLM providers. The following have first-class support with built-in type selection:
| Provider | Example Models |
|---|---|
| OpenAI | GPT-4o, o-series, gpt-image-1, DALL-E |
| Google Gemini | Gemini 2.5 Flash, Gemini Pro, Imagen |
| Anthropic | Claude Sonnet, Claude Opus, Claude Haiku |
| xAI | Grok models |
| DeepSeek | DeepSeek Chat, DeepSeek Reasoner |
| Mistral | Mistral Large, Pixtral |
| Ollama (Local) | Qwen, Gemma, Phi, Llama — any locally hosted model |
Any model from any litellm-compatible provider can be used, including self-hosted endpoints via custom base URLs.
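Under the hood, litellm addresses models with `"<provider>/<model-name>"` strings, so switching providers is a string change. A minimal sketch (assumes the `litellm` package is installed and, for cloud providers, the matching API key is set in the environment; the model names below are examples):

```python
# Example litellm model identifiers for the providers listed above.
# Exact model names vary over time — check each provider's docs.
MODEL_EXAMPLES = {
    "OpenAI": "openai/gpt-4o",
    "Google Gemini": "gemini/gemini-2.5-flash",
    "Mistral": "mistral/mistral-large-latest",
    "Ollama (local)": "ollama/qwen2.5",
}

def chat(model: str, prompt: str) -> str:
    """One-shot chat completion; only contacts the provider when called."""
    import litellm  # pip install litellm
    resp = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

Because every provider is reached through the same `litellm.completion()` interface, the module can swap models by editing a provider record rather than code.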
Configuration
- Install the `litellm` Python package: `pip install litellm`
- Enable Developer Mode (Settings > General Settings > Developer Tools)
- Navigate to Settings > Technical > OS AI > AI Providers
- Create a provider: select the type, enter your API key and model name
- Click Auto-detect to auto-configure what the model supports
- Set priority order using the sequence field — lower numbers are tried first
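The same setup can be scripted from an Odoo shell session. Note this is a hypothetical sketch: the model name `os.ai.provider` and the field names are guesses inferred from the menus below, so verify them against the module source on GitHub before use.

```python
# Hypothetical Odoo-shell sketch of creating a provider record.
# "os.ai.provider" and all field names are ASSUMPTIONS — check the
# actual module source for the real model and fields.

def create_openai_provider(env):
    return env["os.ai.provider"].create({
        "name": "OpenAI",
        "sequence": 10,        # lower numbers are tried first
        "api_key": "sk-...",   # your own key, stored in your own database
        "model": "gpt-4o",
    })
```

In an `odoo-bin shell` session you would call `create_openai_provider(env)` and then trigger Auto-detect from the UI (or its server action) to fill in the capability flags.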
Menu locations (Developer Mode required):
- AI Providers: Settings > Technical > OS AI > AI Providers
- Prompt Templates: Settings > Technical > OS AI > AI Prompts
- AI Logs: Settings > Technical > OS AI > AI Logs
Data Privacy
- You control all API keys. Keys are stored in your own Odoo database and never leave your server except to authenticate with the provider you configured.
- No intermediary. This module communicates directly with the AI providers you choose. There is no third-party proxy, relay, or telemetry service.
- Data is sent only to your configured providers. Nothing is transmitted unless you explicitly set up a provider and trigger an AI call.
- Opt-in by design. No AI calls happen until you create a provider and a module actively uses it. Installing the module alone does nothing.
- Local models supported. Use Ollama to run models entirely on your own hardware — no data ever leaves your network.
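For the fully local path, litellm talks to an Ollama server over HTTP on your own machine. A minimal sketch (assumes Ollama is running on its default port with the model already pulled; `ollama/llama3` is just an example name):

```python
# Local-only inference via litellm + Ollama: the only difference from a
# cloud provider is the "ollama/" prefix and the local api_base.
# Assumes `ollama serve` is running and the model has been pulled.

def ask_local(prompt: str, model: str = "ollama/llama3") -> str:
    import litellm  # pip install litellm
    resp = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        api_base="http://localhost:11434",  # default Ollama port; stays on your network
    )
    return resp.choices[0].message.content
```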
Technical Details
| Technical Name | os_ai |
| License | LGPL-3 |
| Dependencies | base |
| Python Dependencies | litellm |
Developed by Alain Bloos — GitHub