Support Model Config
Custom Model Endpoint Guide with Ollama
1. Prerequisites: Ollama Setup
First, download and install Ollama from the official website:
Download Link: https://ollama.com/download
Additional Resources:
Official Website: https://ollama.com
Model Library: https://ollama.com/library
GitHub Repository: https://github.com/ollama/ollama/
2. Basic Ollama Commands
ollama pull model_name - Download a model
ollama serve - Start the Ollama service
ollama ps - List running models
ollama list - List all downloaded models
ollama rm model_name - Remove a model
ollama show model_name - Show model details
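As a quick sketch, a typical first run chains these commands together (the model name below is only an example; pick any model from the Ollama library):

```shell
# Download a model from the Ollama library (nomic-embed-text is an example)
ollama pull nomic-embed-text

# Start the Ollama service; it listens on 127.0.0.1:11434 by default
ollama serve

# In another terminal, confirm the model was downloaded
ollama list
```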
3. Using the Ollama API for Custom Models
OpenAI-Compatible API
Chat Request
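A minimal sketch of a chat request against Ollama's OpenAI-compatible endpoint (the model name llama3 is an example; substitute any chat model you have pulled, and note the service must already be running via ollama serve):

```shell
curl http://127.0.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3",
        "messages": [
          {"role": "user", "content": "Hello!"}
        ]
      }'
```

The response follows the OpenAI chat completion format, with the reply under choices[0].message.content.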
Embedding Request
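A minimal sketch of an embedding request (nomic-embed-text is an example embedding model; use whichever embedding model you have pulled):

```shell
curl http://127.0.0.1:11434/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{
        "model": "nomic-embed-text",
        "input": "Second Me embedding test"
      }'
```

The response contains the vector under data[0].embedding, matching the OpenAI embeddings format.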
More Details: https://github.com/ollama/ollama/blob/main/docs/openai.md
4. Configuring Custom Embedding in Second Me
Start the Ollama service:
ollama serve
Check your Ollama embedding model's context length:
ollama show model_name
Modify EMBEDDING_MAX_TEXT_LENGTH in Second_Me/.env to match your embedding model's context window. This prevents chunk length overflow and avoids server-side errors (500 Internal Server Error).
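For example, if ollama show reports a 512-token context window for your embedding model, the .env entry might look like the following (the value 512 is illustrative; use your model's actual limit):

```
EMBEDDING_MAX_TEXT_LENGTH=512
```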
Configure Custom Embedding in Settings
When running Second Me in a Docker environment, replace 127.0.0.1 in the API Endpoint field with host.docker.internal, since 127.0.0.1 inside a container refers to the container itself, not the host running Ollama.
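For example, an endpoint value reachable from inside a Docker container might look like this (assuming Ollama's default port 11434; the /v1 suffix assumes the OpenAI-compatible API path shown above):

```
http://host.docker.internal:11434/v1
```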