Availability: Odoo Online, Odoo.sh, On Premise
Odoo Apps Dependencies: Discuss (mail)
Lines of code: 4657
Technical Name: llm_knowledge
License: LGPL-3
Website: https://github.com/apexive/odoo-llm

LLM Knowledge

Retrieval-Augmented Generation for LLMs with Vector Search

Key Features

Document Collections

Organize resources into collections for targeted knowledge retrieval

Document Chunking

Split documents into manageable chunks for more precise retrieval

Vector Embeddings

Generate embeddings for semantic search capabilities

Vector Store Integration

Seamless integration with multiple vector database options:

PgVector

Native PostgreSQL vector storage and search with high performance

Chroma

Integration with Chroma vector database for flexible deployment

Qdrant

Support for Qdrant vector search engine with advanced filtering
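
As a hedged illustration of what a pgvector-backed similarity search looks like at the SQL level, here is a minimal sketch. The table name `llm_knowledge_chunk`, its columns, and the connection details are assumptions made for the example, not the module's actual schema.

```python
# Minimal pgvector similarity-search sketch.
# The table name "llm_knowledge_chunk", its columns, and the connection
# settings are assumptions for illustration; the module's schema may differ.
import psycopg2

def find_similar_chunks(query_embedding, top_k=5):
    # pgvector accepts vectors as text literals like "[0.1,0.2,...]".
    vec = "[" + ",".join(str(x) for x in query_embedding) + "]"
    conn = psycopg2.connect(dbname="odoo", user="odoo", password="odoo", host="localhost")
    try:
        with conn.cursor() as cur:
            # The <=> operator is cosine distance; smaller means more similar.
            cur.execute(
                """
                SELECT id, content, embedding <=> %s::vector AS distance
                FROM llm_knowledge_chunk
                ORDER BY embedding <=> %s::vector
                LIMIT %s
                """,
                (vec, vec, top_k),
            )
            return cur.fetchall()
    finally:
        conn.close()
```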

User Interface

Access and manage knowledge collections through a simple, intuitive interface:

  1. Access Collections: Navigate to LLM > Knowledge > Collections to view and manage all collections
  2. View Resources: Access LLM > Knowledge > Resources to manage document resources
  3. Explore Chunks: Browse LLM > Knowledge > Chunks to see individual document chunks
  4. Upload Documents: Use the "Upload Resource" wizard to easily add new documents

Collection Workflow

Create and manage knowledge collections with a streamlined workflow:

  1. Create Collection: Click the "Create" button to add a new collection, specifying the embedding model
  2. Add Resources: Add resources manually or configure domain filters for automatic inclusion
  3. Process Resources: Click "Process Resources" to retrieve, parse, chunk, and embed documents
  4. Configure RAG: Enable the collection for use in LLM threads for knowledge retrieval
  5. Manage Chunks: View and manage individual chunks through the Chunks menu
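
The same workflow can, in principle, be scripted over Odoo's standard XML-RPC API. The model name `llm.knowledge.collection`, the field names, and the `process_resources` method below are hypothetical placeholders used only to sketch the idea; check the module's source on GitHub for the real identifiers.

```python
# Hedged sketch: driving the collection workflow over Odoo's XML-RPC API.
# "llm.knowledge.collection", its fields, and "process_resources" are
# hypothetical names; consult https://github.com/apexive/odoo-llm for
# the actual model and method names.
import xmlrpc.client

URL, DB, USER, PASSWORD = "http://localhost:8069", "odoo", "admin", "admin"

common = xmlrpc.client.ServerProxy(f"{URL}/xmlrpc/2/common")
uid = common.authenticate(DB, USER, PASSWORD, {})
models = xmlrpc.client.ServerProxy(f"{URL}/xmlrpc/2/object")

# 1. Create a collection (field names are assumptions).
collection_id = models.execute_kw(
    DB, uid, PASSWORD,
    "llm.knowledge.collection", "create",
    [{"name": "Product Docs"}],
)

# 2. Trigger processing (retrieve, parse, chunk, embed) -- method name assumed.
models.execute_kw(
    DB, uid, PASSWORD,
    "llm.knowledge.collection", "process_resources",
    [[collection_id]],
)
```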

RAG Process

Document Processing

Documents are retrieved, parsed, and split into manageable chunks
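
Conceptually, chunking splits long text into overlapping windows so each piece fits the embedding model's context and can be retrieved on its own. A minimal sketch follows; the chunk size and overlap values are illustrative, not the module's defaults.

```python
# Illustrative chunker: fixed-size character windows with overlap.
# The sizes below are arbitrary examples, not the module's actual defaults.
def chunk_text(text, chunk_size=1000, overlap=200):
    step = chunk_size - overlap
    chunks = []
    for start in range(0, max(len(text), 1), step):
        piece = text[start:start + chunk_size]
        if piece.strip():
            chunks.append(piece)
        if start + chunk_size >= len(text):
            break
    return chunks
```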

Embedding Generation

Chunks are converted to vector embeddings using the specified model
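
As an illustration only, the sketch below generates embeddings through an OpenAI-compatible `/v1/embeddings` endpoint; the endpoint, model name, and API-key handling are assumptions for the example, since the module itself uses whichever embedding model you select on the collection.

```python
# Illustrative embedding call against an OpenAI-compatible endpoint.
# Endpoint, model name, and API-key handling are assumptions for this example;
# the module delegates embedding to the model configured on the collection.
import os
import requests

def embed_texts(texts, model="text-embedding-3-small"):
    resp = requests.post(
        "https://api.openai.com/v1/embeddings",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": model, "input": texts},
        timeout=30,
    )
    resp.raise_for_status()
    return [item["embedding"] for item in resp.json()["data"]]
```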

Semantic Retrieval

LLM queries retrieve the most relevant chunks for enhanced responses
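
A framework-agnostic illustration of the retrieval step: embed the user query, score it against stored chunk embeddings with cosine similarity, and hand the best matches to the LLM as context. This is a conceptual sketch, not the module's code; in practice the configured vector store performs this search.

```python
# Conceptual retrieval step: cosine similarity between the query embedding
# and stored chunk embeddings (in the module, the vector store does this).
import numpy as np

def retrieve(query_embedding, chunk_embeddings, chunks, top_k=3):
    q = np.asarray(query_embedding, dtype=float)
    m = np.asarray(chunk_embeddings, dtype=float)   # shape: (n_chunks, dim)
    sims = m @ q / (np.linalg.norm(m, axis=1) * np.linalg.norm(q) + 1e-12)
    best = np.argsort(sims)[::-1][:top_k]
    return [(chunks[i], float(sims[i])) for i in best]
```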

For more information, visit our GitHub repository: https://github.com/apexive/odoo-llm
