| Availability | Odoo Online, Odoo.sh, On Premise |
| Lines of code | 1127 |
| Technical Name | ai_llm_provider |
| License | LGPL-3 |
| Website | https://www.odxbuilder.com/ |
One API for every LLM provider.
OpenAI, Anthropic, Gemini, Groq, Mistral, ElevenLabs, Ollama, OpenRouter, and any OpenAI-compatible endpoint. Configure once, call from any Odoo module.
10 providers, 20+ models
Setup
Add a provider in 30 seconds.
Pick a provider type, paste your API key, add your models. Test the connection with one click.
Anthropic with Claude Sonnet 4.6 and Claude Opus 4.6
Settings
Managed from Odoo Settings.
Providers appear under the ODX AI section. API keys are restricted to system administrators.
API
Two methods. That covers it.
chat_completion() for a full response. chat_completion_stream() for SSE streaming. Both normalize tool calls, reasoning content, and usage across all providers.
provider = self.env["ai.llm.provider"].search([
    ("code", "=", "anthropic"),
    ("is_active", "=", True),
], limit=1)
# Synchronous
result = provider.chat_completion(
"claude-sonnet-4-6",
[{"role": "user", "content": "Summarize this invoice."}],
tools=tool_definitions,
)
answer = result["choices"][0]["message"]["content"]
# Streaming
for event in provider.chat_completion_stream("claude-sonnet-4-6", messages):
if event["type"] == "content_delta":
yield event["delta"]
elif event["type"] == "result":
tool_calls = event["result"]["choices"][0]["message"].get("tool_calls")
Under the Hood
What it handles for you.
Tool Calling
Normalizes function call formats across providers. Handles streamed argument chunks and multi-call responses.
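To make the streamed-argument handling concrete, here is a minimal sketch of what accumulating tool-call chunks looks like. Providers emit JSON argument fragments keyed by call index, which must be concatenated before parsing; the function name and chunk shape below are illustrative, not the module's internal API.

```python
import json

def accumulate_tool_calls(chunks):
    """Merge streamed tool-call deltas (index, name, argument fragment)."""
    calls = {}
    for index, name, fragment in chunks:
        call = calls.setdefault(index, {"name": None, "arguments": ""})
        if name:
            call["name"] = name
        call["arguments"] += fragment
    # Parse the concatenated JSON arguments once the stream is complete.
    return [
        {"name": c["name"], "arguments": json.loads(c["arguments"])}
        for c in calls.values()
    ]

# Two fragments of one call, as a provider might stream them:
chunks = [
    (0, "get_invoice", '{"invoice'),
    (0, None, '_id": 42}'),
]
print(accumulate_tool_calls(chunks))
# → [{'name': 'get_invoice', 'arguments': {'invoice_id': 42}}]
```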
Reasoning Content
Extracts thinking/reasoning blocks from providers that emit them. Yields them as separate stream events so you can display or discard them.
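A sketch of consuming such a stream, separating reasoning from answer text. The event type names (`reasoning_delta`, `content_delta`) follow the streaming example later on this page; treat the helper itself as illustrative.

```python
def split_stream(events):
    """Collect reasoning deltas and answer deltas into two strings."""
    reasoning, answer = [], []
    for event in events:
        if event["type"] == "reasoning_delta":
            reasoning.append(event["delta"])   # display or discard
        elif event["type"] == "content_delta":
            answer.append(event["delta"])
    return "".join(reasoning), "".join(answer)

events = [
    {"type": "reasoning_delta", "delta": "Check the invoice total..."},
    {"type": "content_delta", "delta": "The total is "},
    {"type": "content_delta", "delta": "42 EUR."},
]
print(split_stream(events))
# → ('Check the invoice total...', 'The total is 42 EUR.')
```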
Rate Limits
Parses Retry-After headers. Retries transient errors with capped backoff. Fails fast on long rate limits instead of blocking Odoo workers.
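A minimal sketch of that policy, assuming `requests`-style response objects. The constants and function name are illustrative; the point is honoring `Retry-After` up to a cap, and failing fast beyond it so a worker is never parked for minutes.

```python
import time

MAX_RETRIES = 3
BACKOFF_CAP = 10  # seconds; a longer Retry-After fails fast

def call_with_retry(do_request):
    """Retry 429 responses with capped backoff; fail fast on long limits."""
    for attempt in range(MAX_RETRIES + 1):
        response = do_request()
        if response.status_code != 429:
            return response
        # Honor Retry-After if present, else exponential backoff.
        retry_after = float(response.headers.get("Retry-After", 2 ** attempt))
        if retry_after > BACKOFF_CAP:
            raise RuntimeError(
                f"Rate limited for {retry_after}s; refusing to block the worker"
            )
        time.sleep(retry_after)
    raise RuntimeError("Retries exhausted")
```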
SSRF Protection
Blocks private IPs and internal hostnames on provider URLs. Ollama is exempted since it runs locally.
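The guard amounts to resolving the provider URL's host and rejecting private, loopback, and link-local addresses, with a local exemption for Ollama. A minimal sketch with the standard library (the function name and exemption flag are illustrative):

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_url_safe(url, allow_local=False):
    """Reject provider URLs that resolve to private or internal addresses."""
    host = urlparse(url).hostname
    if not host:
        return False
    if allow_local:  # e.g. Ollama running on localhost
        return True
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False  # unresolvable hostnames are rejected
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```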
Text-to-Speech
Generate audio from text using ElevenLabs or other TTS providers. Uses the same auth and retry infrastructure as chat completions.
Audio Transcription
Transcribe audio via OpenAI Whisper. Supports webm, mp4, wav, mp3, ogg, and m4a.
Providers
Supported out of the box.
| Provider | Type | Auth |
|---|---|---|
| OpenAI | Chat, TTS, Embedding, Whisper | Bearer token |
| Anthropic | Chat | Bearer token |
| Google Gemini | Chat, Embedding | Bearer token |
| Groq | Chat | Bearer token |
| Mistral | Chat, Embedding | Bearer token |
| ElevenLabs | Text-to-Speech | xi-api-key (automatic) |
| OpenRouter | Chat (multi-model) | Bearer token |
| Ollama | Chat (local) | None required |
| Custom | Any | Bearer or custom headers |
Models
One default model per purpose.
Each model stores its purpose, max tokens, tool support, and vision capability. Mark one as default per purpose. Call get_default_model("chat") from any module to get it.
model = self.env["ai.llm.model"].get_default_model("chat")
# Use it
result = model.provider_id.chat_completion(
model.model_id,
messages,
tools=tools if model.supports_tools else None,
)
ODX LLM
Multi-provider LLM integration for Odoo 19