
AI Model Providers (China & Self-hosted)

by DWV

$ 59.90

v 19.0 Third Party
This module requires Odoo Enterprise Edition.
Availability: Odoo Online, Odoo.sh, On Premise
Odoo Apps Dependencies: AI (ai_app), Discuss (mail)
Lines of code: 3844
Technical Name: ai_providers_extended
License: LGPL-3

AI Model Providers (China & Self-hosted) - supports DeepSeek, Qwen, Kimi, GLM, Doubao and other models, as well as local LLMs and custom models

AI Model Providers (China & Self-hosted) is a unified AI model provider layer for Odoo, designed for Chinese users, global users, and self-hosted LLM deployments. This module extends Odoo AI by providing a flexible and extensible provider architecture that supports mainstream Chinese AI models, local/self-hosted LLMs, custom APIs, and embedding models. It seamlessly integrates with all Odoo AI features such as chat, AI agents, embeddings, and RAG workflows.


Supported AI Providers

This module adds support for the following AI providers with official API support:

  • DeepSeek - Advanced AI models for chat and coding tasks
    • Models: deepseek-chat, deepseek-coder, deepseek-reasoner
    • High performance and cost-effectiveness
  • Qwen (通义千问) - Alibaba Cloud's powerful AI model
    • Models: qwen-turbo, qwen-plus, qwen-max, qwen-max-longcontext
    • Supports both compatible mode and native DashScope API
    • Excellent Chinese language support
  • ERNIE (文心一言) - Baidu's comprehensive AI model
    • Models: ernie-bot-turbo, ernie-bot, ernie-bot-4, ernie-bot-8k
    • Strong natural language understanding
  • GLM (智谱AI) - Zhipu AI's intelligent models
    • Models: glm-4, glm-4-flash, glm-3-turbo
    • Multi-modal capabilities
  • Moonshot (月之暗面) - Moonshot AI's advanced models
    • Models: moonshot-v1-8k, moonshot-v1-32k, moonshot-v1-128k
    • Long context support (up to 128K tokens)
  • Doubao (豆包) - ByteDance's versatile AI model
    • Models: doubao-seed-1-8-251228, doubao-seed-1-6-flash-250828, doubao-seed-1-6-lite-251015, doubao-seed-1-6-251015, doubao-1-5-pro-32k-250115
    • Fast response times
  • Local LLM - Support for self-hosted LLMs (Ollama, LocalAI, etc.); a quick connectivity check is sketched after this list
    • Models: llama2, llama3, mistral, mixtral, qwen, codellama, phi, gemma, and custom models
    • OpenAI-compatible API support
    • Optional API key (Ollama doesn't require authentication by default)
    • Custom model name support
    • Default base URL: http://localhost:11434/v1 (Ollama)
    • ⚠️ Important for RAG: For optimal RAG (Retrieval-Augmented Generation) performance, it is strongly recommended to use local models with 14B parameters or larger (e.g., qwen3:14b, llama3:70b, mistral:large). Smaller models (8B or less) may not effectively utilize RAG context and may provide incomplete or inaccurate answers based on document content.
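
The Local LLM entry above assumes an OpenAI-compatible endpoint such as Ollama's default at http://localhost:11434/v1. As a quick, module-independent sanity check that such an endpoint is reachable, a minimal sketch (the host, port, and use of the requests library are assumptions, not part of the module):

    import requests

    # List the models exposed by a local Ollama server through its
    # OpenAI-compatible API. The URL is the module's documented default;
    # adjust host and port for your own deployment.
    BASE_URL = "http://localhost:11434/v1"

    resp = requests.get(f"{BASE_URL}/models", timeout=10)
    resp.raise_for_status()
    for model in resp.json().get("data", []):
        print(model["id"])  # e.g. "qwen3:14b", "llama3:70b"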

Key Features

  • Unified AI Model Provider Configuration - Manage all AI provider settings from a single location in Odoo Settings
  • Support for Chinese, Global, and Self-hosted LLMs - Comprehensive support for mainstream Chinese AI models, global providers, and local deployments
  • Flexible API Key and Base URL Configuration - Per-provider configuration with custom API endpoints support
  • Local LLM Support - Full support for self-hosted LLMs (Ollama, LocalAI, etc.) with optional API key. Up to 3 local models can be configured. Optimized configuration table with full-width layout and no horizontal scrollbar.
  • Independent and Fully Customizable Embedding Configuration - Separate configuration table for embedding services, independent from LLM providers. Dedicated "Embedding Providers (RAG Usage)" section in settings.
  • Automatic Embedding Model Selection for AI Agents - Smart fallback mechanism that reuses LLM provider settings when embedding is not configured
  • Smart Model Selection - Only configured providers/models are shown in AI Agent selection
  • Custom Model Name Support - Support for self-hosted and local LLMs with custom model names
  • Provider- and Model-Level Visibility Control - Clear visibility control in AI Agent selection
  • Clear, User-Friendly Error Messages - Actionable setup guidance whenever a provider or embedding model is missing or misconfigured
  • Native and Compatibility Modes - Support for Qwen-style APIs with automatic mode detection
  • Automatic Request Chunking - For providers with token or length limits
  • Built-in Fallback Strategy - For embedding providers with intelligent fallback mechanisms
  • Full Support for Embedding, Vector Search, and RAG Workflows - Complete RAG support with document embedding and retrieval
  • Performance-Optimized Provider Execution - Optimized logging, caching, and connection pooling for faster response times
  • Seamless Integration - Works with all existing Odoo AI features including chat, embeddings, and RAG
  • Fully Compatible with Odoo 19 - Automatic version detection and compatibility
  • Complete Multi-Language UI Support - Full translation support for Chinese (Simplified) and English. All UI elements, labels, help texts, and error messages are fully translated.
  • Enhanced Configuration Persistence - Local LLM settings are automatically preserved across module updates

Installation

  1. Install the module from Odoo Apps
  2. Go to Settings > AI > Providers
  3. Enable and configure your desired AI providers with API keys
  4. Select your preferred LLM model in AI Agent settings

Configuration

After installation, configure your AI providers:

  1. Navigate to Settings > AI > Providers
  2. In the Cloud Service Providers section, for each LLM provider you want to use:
    • Enable the provider toggle
    • Enter your API key
    • (Optional) Set a custom API base URL if needed
  3. In the Local LLM Configuration section, configure up to 3 local LLM models:
    • Enable each local model configuration
    • Select a predefined model name or choose "Custom Model" to enter a custom model name
    • Enter the Base URL (required, e.g., http://localhost:11434/v1 for Ollama)
    • Optionally enter an API key if your local LLM service requires authentication
    • The configuration table uses full-width layout without horizontal scrollbar for better usability
    • ⚠️ For RAG Usage: If you plan to use RAG (document-based Q&A), it is strongly recommended to use models with 14B parameters or larger (e.g., qwen3:14b, llama3:70b). Smaller models (8B or less) may not effectively utilize RAG context.
  4. In the Embedding Providers (RAG Usage) section, configure embedding services independently of the LLM providers:
    • Enter the embedding model name (e.g., text-embedding-3-small, text-embedding-v2)
    • Optionally configure API key and base URL for the embedding service
    • If not configured, the corresponding LLM provider's settings will be used as fallback
    • This allows you to use different endpoints for embeddings and chat
    • Note: The embedding model name field will not show invalid default values (e.g., "api")
  5. Save the settings
  6. Only configured providers/models will appear in AI Agent LLM model selection
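
Before saving the API keys from step 2, it can help to confirm that a key works outside Odoo. A minimal sketch against an OpenAI-compatible chat endpoint, using DeepSeek's documented base URL as an example (the key, model, and use of the requests library are assumptions; substitute your own provider):

    import requests

    # Minimal, module-independent validation of an API key against an
    # OpenAI-compatible /chat/completions endpoint (DeepSeek shown as example).
    BASE_URL = "https://api.deepseek.com/v1"
    API_KEY = "sk-..."  # replace with your provider key

    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "deepseek-chat",
              "messages": [{"role": "user", "content": "ping"}]},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])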

Automatic Embedding Association

When an AI Agent needs to use embeddings (e.g., when uploading files or using RAG):

  • Automatic Detection: The system automatically searches for configured embedding models in the Embedding Providers section
  • Priority Order:
    1. If the provider's default embedding model is configured in Embedding Providers, it will be used
    2. If not, any configured embedding model will be automatically associated
    3. If no embedding is configured, the LLM provider's API key/base URL will be used as fallback
    4. If neither embedding nor provider is configured, a clear error message will guide you to configure it
  • User-Friendly Errors: If embedding is needed but not configured, you'll receive clear instructions on how to configure it
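
The priority order above amounts to a simple fallback chain. A minimal sketch of that logic (this is not the module's actual code; the helper name and data structures are hypothetical):

    def resolve_embedding_config(llm_provider, embedding_configs, llm_configs):
        """Return the configuration used for embedding calls, following the
        priority order described above (illustrative only)."""
        # 1. The provider's own default embedding model, if configured
        cfg = embedding_configs.get(llm_provider)
        if cfg:
            return cfg
        # 2. Otherwise, any configured embedding model
        for other in embedding_configs.values():
            if other:
                return other
        # 3. Otherwise, fall back to the LLM provider's API key / base URL
        llm_cfg = llm_configs.get(llm_provider)
        if llm_cfg:
            return llm_cfg
        # 4. Nothing configured: raise a clear, actionable error
        raise ValueError(
            "No embedding provider configured. Go to Settings > AI > Providers "
            "and fill in the Embedding Providers (RAG Usage) section."
        )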

Provider-Specific Notes

DeepSeek

  • Supports chat and coding tasks
  • File uploads and embeddings require an OpenAI API key to be configured (the OpenAI embedding API is used as a fallback)
  • Pure chat scenarios work without OpenAI configuration

Qwen (通义千问)

  • Supports both native DashScope API and OpenAI-compatible API
  • Automatically detects API type based on base URL configuration
  • Default: Compatible mode (https://dashscope.aliyuncs.com/compatible-mode/v1)
  • Native API: Set custom base URL to https://dashscope.aliyuncs.com/api/v1
  • Automatic message chunking for long inputs (30720 character limit)
  • Full embedding support
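
The automatic chunking mentioned above works around Qwen's 30720-character input limit. The module performs this automatically; the helper below only illustrates the idea and is not the module's code:

    QWEN_MAX_CHARS = 30720  # documented input limit per request

    def chunk_text(text, limit=QWEN_MAX_CHARS):
        """Split a long input into pieces no longer than `limit` characters."""
        return [text[i:i + limit] for i in range(0, len(text), limit)]

    # Each chunk is then sent as a separate request and the partial results
    # are combined afterwards.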

ERNIE (文心一言)

  • Uses Baidu's access token authentication
  • Full embedding support

GLM, Moonshot, Doubao

  • OpenAI-compatible API format
  • Full embedding support
  • Standard Bearer token authentication

Local LLM (Ollama, LocalAI, etc.)

  • Supports self-hosted LLM servers with OpenAI-compatible API
  • API key is optional (Ollama doesn't require authentication by default)
  • Default base URL: http://localhost:11434/v1 (for Ollama)
  • Supports predefined models (llama2, llama3, mistral, etc.) and custom model names
  • Up to 3 local models can be configured simultaneously
  • Can configure separate embedding base URL if embedding service uses different endpoint
  • If embedding base URL is not configured, uses the LLM provider's base URL
  • Optimized configuration table: Full-width layout without horizontal scrollbar, with flexible column widths for better readability
  • Configuration persistence: Local LLM settings are automatically preserved across module updates with intelligent data management
  • ⚠️ Critical RAG Performance Requirement: For optimal RAG (Retrieval-Augmented Generation) performance, it is strongly recommended to use local models with 14B parameters or larger (e.g., qwen3:14b, qwen3:32b, llama3:70b, mistral:large). Smaller models (8B or less, such as qwen3:8b) may not effectively utilize RAG context and may provide incomplete, inaccurate, or generic answers that ignore the document content. While smaller models may work for basic chat, they often fail to properly understand and use the RAG context for document-based questions.
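
For reference, this is roughly what a chat call against such a local endpoint looks like at the API level, using the official openai Python client pointed at Ollama's OpenAI-compatible URL (a sketch only; the model name is an example and must match a model you have pulled locally):

    from openai import OpenAI

    # Ollama ignores the API key, but the client library requires one.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    reply = client.chat.completions.create(
        model="qwen3:14b",  # 14B+ models are recommended for RAG workloads
        messages=[{"role": "user", "content": "Summarize this document in one sentence."}],
    )
    print(reply.choices[0].message.content)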

Usage

Once configured, you can use any of the supported providers in:

  • AI Chat conversations
  • AI Agent configurations
  • Document embedding and RAG
  • All other Odoo AI features

How Embedding Works

When you upload files to an AI Agent or use RAG (Retrieval-Augmented Generation):

  1. The system automatically detects which embedding model to use based on your configuration
  2. It first checks if the provider's default embedding model is configured in Embedding Providers
  3. If not found, it searches for any configured embedding model and automatically associates it
  4. If no embedding is configured, it falls back to using the LLM provider's API key and base URL
  5. If neither is configured, you'll receive a clear error message with instructions on how to configure it

Example: If you're using Qwen as your LLM provider and have configured "text-embedding-3-small" in Embedding Providers, the system will automatically use that embedding model when processing uploaded files.
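
At the API level, the embedding step boils down to a single call against an OpenAI-compatible /embeddings route. A minimal sketch, assuming OpenAI's endpoint and the text-embedding-3-small model from the example above (the key and input text are placeholders):

    import requests

    resp = requests.post(
        "https://api.openai.com/v1/embeddings",
        headers={"Authorization": "Bearer sk-..."},  # placeholder key
        json={"model": "text-embedding-3-small",
              "input": ["Paragraph extracted from the uploaded document."]},
        timeout=30,
    )
    resp.raise_for_status()
    vector = resp.json()["data"][0]["embedding"]
    print(len(vector))  # dimensionality of the returned embedding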

⚠️ Important: Local LLM Model Size for RAG

For optimal RAG performance with local LLMs, use models with 14B parameters or larger:

  • Recommended: qwen3:14b, qwen3:32b, llama3:70b, mistral:large, and similar 14B+ models
  • Not Recommended for RAG: qwen3:8b, llama3:8b, and other 8B or smaller models
  • Why: Smaller models (8B or less) often fail to effectively utilize RAG context. They may:
    • Ignore the provided document content
    • Provide generic answers instead of using the RAG context
    • Fail to understand complex questions based on document content
    • Return incomplete or inaccurate information
  • Note: While smaller models may work for basic chat, they are not suitable for document-based Q&A with RAG. For RAG workflows, always use 14B+ models for best results.

Troubleshooting

No models available in AI Agent: Make sure you have configured at least one provider (API Key or Base URL) in Settings > AI > Providers. Only configured providers will appear in the model selection.

DeepSeek file upload issues: If you're using DeepSeek and need to upload files, make sure to configure OpenAI API key in Settings > AI > Providers. DeepSeek uses OpenAI's embedding API for file processing.

Local LLM connection issues: Ensure your local LLM server (e.g., Ollama) is running and accessible at the configured base URL. Check firewall settings if connecting from a different machine.

Local LLM not using RAG context properly: If your local LLM is not effectively using RAG context (ignoring document content, providing generic answers), this is likely due to using a model that is too small. Use models with 14B parameters or larger (e.g., qwen3:14b, llama3:70b) for optimal RAG performance. Smaller models (8B or less) often cannot effectively process and utilize RAG context, even when it is correctly provided.

Module text still in Chinese when switching to English: Make sure to update the module after installation to load the English translations. Go to Settings > Apps, find the module, and click "Upgrade".

Embedding configuration: If you need to use a different endpoint for embeddings, configure it in the Embedding Providers (RAG Usage) section. The system will automatically associate the configured embedding model when needed. If not configured, the LLM provider's base URL will be used as fallback. If neither is configured, you'll receive a clear error message with configuration instructions.

API connection errors: Check your network configuration, DNS settings, and firewall rules. Some providers may require specific network configurations.

Technical Details

This module extends Odoo's AI functionality through:

  • Monkey patching of LLM API service classes
  • Model inheritance for configuration management
  • Provider-specific authentication and request handling
  • Compatible with Odoo 19.0
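
As an illustration of the monkey-patching approach listed above, the pattern generally looks like the sketch below. The provider set, method name, and handler are hypothetical; the real patched service lives inside Odoo's AI modules and is not shown here:

    EXTENDED_PROVIDERS = {"deepseek", "qwen", "ernie", "glm", "moonshot", "doubao", "local"}

    def patch_llm_service(service_cls, extended_handler):
        """Wrap a (hypothetical) request method so extended providers are routed
        to `extended_handler`, while stock providers keep the original behaviour."""
        original = service_cls._request_llm  # hypothetical method name

        def _request_llm(self, *args, **kwargs):
            if getattr(self, "provider", None) in EXTENDED_PROVIDERS:
                return extended_handler(self, *args, **kwargs)
            return original(self, *args, **kwargs)

        service_cls._request_llm = _request_llm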

Security

All API keys are stored securely in Odoo's ir.config_parameter and are only accessible to system administrators.
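
From an Odoo shell (odoo-bin shell), such a value would be read through ir.config_parameter as in the sketch below; the parameter key shown is hypothetical, since the module defines its own key names:

    # Run from an Odoo shell, where `env` is available.
    api_key = (
        env["ir.config_parameter"]
        .sudo()
        .get_param("ai_providers_extended.deepseek_api_key", default=False)
    )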

Support

For issues, questions, or feature requests, please contact the module author or refer to the module documentation.

Version History

  • Version 1.2 - Local LLM support and enhanced configuration
    • Added Local LLM provider support (Ollama, LocalAI, etc.) - up to 3 local models
    • Independent Embedding Providers configuration table (RAG Usage)
    • Separate embedding base URL and API key configuration
    • Custom model name support for local LLMs
    • Smart model selection - only configured providers/models are shown
    • User-friendly error messages when providers are not configured
    • Multi-language support (Chinese and English) with complete UI translation
    • Improved configuration interface with better organization
    • Optimized Local LLM configuration table: Full-width layout without horizontal scrollbar, flexible column widths
    • Enhanced configuration persistence: Automatic preservation of Local LLM settings across module updates
    • Improved embedding model name handling with intelligent default value management
  • Version 1.1 - Performance improvements and optimizations
    • Optimized logging levels (reduced production overhead)
    • Added function result caching for better performance
    • Improved string operations and message processing
    • Enhanced Qwen API mode detection with caching
    • Connection pool infrastructure for future optimizations
    • Code quality improvements and better error handling
  • Version 1.0 - Initial release with support for DeepSeek, Qwen, ERNIE, GLM, Moonshot, and Doubao
    • Support for 6 AI providers with 20+ official API models
    • Qwen compatible mode and native API support
    • Automatic message chunking for Qwen API
    • DeepSeek embedding fallback to OpenAI
    • Performance optimizations (logging, caching, connection pooling)
    • Odoo 19 compatibility with automatic Provider structure detection
    • Comprehensive error handling and retry mechanisms

Model List

The module supports the following official API models (not open-source-only versions):

  • Qwen: qwen-turbo, qwen-plus, qwen-max, qwen-max-longcontext
  • GLM: glm-4, glm-4-flash, glm-3-turbo
  • DeepSeek: deepseek-chat, deepseek-coder, deepseek-reasoner
  • Moonshot: moonshot-v1-8k, moonshot-v1-32k, moonshot-v1-128k
  • Doubao: doubao-seed-1-8-251228, doubao-seed-1-6-flash-250828, doubao-seed-1-6-lite-251015, doubao-seed-1-6-251015, doubao-1-5-pro-32k-250115
  • ERNIE: ernie-bot-turbo, ernie-bot, ernie-bot-4, ernie-bot-8k
  • Local LLM: llama2, llama3, mistral, mixtral, qwen, codellama, phi, gemma, and custom models

Note: Only official API-available models are included. Open-source-only model sizes are not listed as they cannot be accessed via API. For Local LLM, you can use any model supported by your local LLM server (e.g., Ollama).

Multi-Language Support

This module supports the following languages with complete UI translation:

  • 简体中文 (Simplified Chinese) - Full Chinese interface with all labels, help texts, and error messages translated
  • English - Complete English translation for all UI elements, configuration labels, and user-facing messages

When you switch Odoo's language to English, all module-related text will automatically display in English, including:

  • Configuration section titles and labels
  • Field names and help texts
  • Error messages and validation prompts
  • Placeholder texts and tooltips

AI Providers Extended

Overview

AI Providers Extended is a powerful module that extends Odoo's built-in AI functionality to support multiple AI model providers, including popular Chinese AI services. This module seamlessly integrates with Odoo's existing AI features, allowing you to use various AI providers for LLM (Large Language Model) and embedding generation.

Features

  • Multiple Provider Support: Support for 8 AI providers including:
    • OpenAI (GPT-4, GPT-3.5, etc.)
    • Google (Gemini models)
    • Qwen (通义千问) - Alibaba Cloud
    • ERNIE (文心一言) - Baidu
    • GLM (智谱AI) - Zhipu AI
    • Moonshot (月之暗面) - Moonshot AI
    • Doubao (豆包) - ByteDance
    • DeepSeek - DeepSeek AI
  • Dynamic Configuration Interface: Easy-to-use settings page for configuring API keys and custom base URLs for each provider
  • Custom Base URLs: Support for custom API endpoints, allowing you to use proxy servers or self-hosted API services
  • Automatic Provider Detection: Automatically detects and configures providers based on API keys
  • Embedding Support: Full support for embedding generation with all providers (except DeepSeek, which uses OpenAI embeddings as fallback)
  • Seamless Integration: Works seamlessly with existing Odoo AI features including AI Agents, AI Topics, and AI Composer

Installation

  1. Install the module through Odoo Apps menu
  2. The module requires the following dependencies:
    • ai - Odoo's base AI module
    • ai_app - Odoo's AI application module
  3. After installation, go to Settings > AI > Providers to configure your API keys

Configuration

Configuring AI Providers

  1. Navigate to Settings > AI > Providers
  2. For each provider you want to use:
    • Toggle the Enable switch to activate the provider
    • Enter your API Key in the corresponding field
    • (Optional) Enter a Custom Base URL if you're using a proxy or custom endpoint
  3. Click Save to apply your changes
  4. The module will automatically:
    • Validate your API keys
    • Make the provider available in AI Agent configuration
    • Enable embedding generation for supported providers

Provider-Specific Configuration

Qwen (通义千问)
  • Default API endpoint: https://dashscope.aliyuncs.com/compatible-mode/v1
  • Supports custom base URLs for OpenAI-compatible APIs
  • Uses Authorization: Bearer for compatible mode, X-DashScope-APIKey for native API
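A minimal sketch of that header selection (illustrative only; it simply restates the rule above):

    def qwen_headers(api_key, base_url):
        """Pick the auth header based on whether the compatible-mode URL is used."""
        if "/compatible-mode/" in base_url:
            return {"Authorization": f"Bearer {api_key}"}
        return {"X-DashScope-APIKey": api_key}
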
ERNIE (文心一言)
  • Requires both API Key and Secret Key
  • Secret Key can be configured separately or combined with the API Key using the format API_KEY:SECRET_KEY
  • Uses access token authentication
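How the combined API_KEY:SECRET_KEY value maps onto Baidu's access-token flow, as a hedged sketch (the token endpoint shown is Baidu's standard OAuth endpoint; verify it against Baidu's current documentation):

    import requests

    combined = "API_KEY:SECRET_KEY"  # as entered in the settings field
    api_key, secret_key = combined.split(":", 1)

    resp = requests.post(
        "https://aip.baidubce.com/oauth/2.0/token",
        params={"grant_type": "client_credentials",
                "client_id": api_key,
                "client_secret": secret_key},
        timeout=30,
    )
    resp.raise_for_status()
    access_token = resp.json()["access_token"]
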
GLM (智谱AI)
  • Default API endpoint: https://open.bigmodel.cn/api/paas/v4
  • Standard OpenAI-compatible API
Moonshot (月之暗面)
  • Default API endpoint: https://api.moonshot.cn/v1
  • Standard OpenAI-compatible API
Doubao (豆包)
  • Default API endpoint: https://ark.cn-beijing.volces.com/api/v3
  • Standard OpenAI-compatible API
DeepSeek
  • Default API endpoint: https://api.deepseek.com/v1
  • Note: DeepSeek does not provide a native embedding API
  • Embedding generation will automatically use OpenAI's embedding model (requires OpenAI API key)

Using AI Providers

After configuration, you can use the extended providers in:

  • AI Agents: Select any of the supported models in the LLM Model dropdown
  • Embedding Generation: Automatic embedding generation using the configured provider's embedding model
  • AI Composer: Use extended providers for AI-powered content generation

The module automatically handles:
  • API authentication
  • Request formatting
  • Response parsing
  • Error handling
  • Tool calling (for supported providers)

Troubleshooting

Provider not appearing in dropdown
  • Ensure the module is properly installed and upgraded
  • Check that the API key is correctly configured
  • Clear browser cache and refresh the page
Authentication errors
  • Verify your API key is correct and has not expired
  • Check that the API key has the necessary permissions
  • For custom base URLs, ensure the endpoint is correct and accessible
Embedding generation fails
  • Ensure the provider supports embedding generation
  • For DeepSeek, configure OpenAI API key for embedding fallback
  • Check network connectivity and firewall settings
Module state inconsistencies
  • This is a harmless warning that can be ignored
  • If persistent, try upgrading the ai_app module manually

Technical Details

Architecture

The module extends Odoo's AI functionality through:

  • Model Inheritance: Extends ai.agent and ai.embedding models
  • View Inheritance: Extends settings views to add provider configuration
  • Monkey Patching: Extends LLMApiService to support new providers
  • Provider Registry: Dynamic provider list that can be extended
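
For orientation, the model-inheritance part typically takes the shape sketched below; the field name and parameter key are hypothetical, not the module's actual definitions:

    from odoo import fields, models

    class ResConfigSettings(models.TransientModel):
        _inherit = "res.config.settings"

        # Hypothetical example of a per-provider setting backed by ir.config_parameter
        deepseek_api_key = fields.Char(
            string="DeepSeek API Key",
            config_parameter="ai_providers_extended.deepseek_api_key",
        )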

API Compatibility

Most providers use OpenAI-compatible APIs, making integration straightforward:
  • Standard Authorization: Bearer authentication
  • Compatible request/response formats
  • Tool calling support
Special cases:
  • Qwen: Supports both native and compatible mode APIs
  • ERNIE: Uses access token-based authentication
  • DeepSeek: Requires OpenAI for embedding generation

Security

  • API keys are stored securely in ir.config_parameter
  • Password fields are masked in the UI
  • Only system administrators can configure providers
  • API keys are never exposed in logs or error messages

Support

For issues, questions, or feature requests, please contact the module author.

Version History

Version 1.2
  • Added Local LLM provider support (Ollama, LocalAI, etc.)
  • Independent Embedding Providers configuration table (RAG Usage)
  • Custom model name support for local LLMs
  • Smart model selection - only configured providers/models are shown
  • Multi-language support (Chinese and English)
  • Enhanced configuration persistence across module updates
  • Optimized Local LLM configuration interface
Version 1.1
  • Performance optimizations and improvements
  • Enhanced Qwen API mode detection
  • Improved error handling and logging
Version 1.0
  • Initial release
  • Support for 20+ official API models across 6+ providers
  • Dynamic configuration interface
  • Full embedding support
  • Custom base URL support
  • Qwen compatible mode and native API support

License

This module is licensed under LGPL-3.
