# Supported Model Config
First, download and install Ollama from the official website:
🔗 Download Link: https://ollama.com/download
📚 Additional Resources:

- Official Website: https://ollama.com
- Model Library: https://ollama.com/library
- GitHub Repository: https://github.com/ollama/ollama
Common Ollama commands:

| Command | Description |
| --- | --- |
| `ollama pull model_name` | Download a model |
| `ollama serve` | Start the Ollama service |
| `ollama ps` | List running models |
| `ollama list` | List all downloaded models |
| `ollama rm model_name` | Remove a model |
| `ollama show model_name` | Show model details |
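As a quick sanity check, a typical first session might look like the following; the model names are illustrative picks from the Ollama library, so substitute your own:

```bash
# Download a chat model and an embedding model
# (names are examples; pick any from https://ollama.com/library)
ollama pull llama3
ollama pull nomic-embed-text

# Start the Ollama service; it listens on 127.0.0.1:11434 by default
ollama serve

# In another terminal, confirm the models are available
ollama list
```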
## Chat Request
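A minimal request against Ollama's chat endpoint (the service listens on port 11434 by default); `llama3` here is a placeholder for whichever model you have pulled:

```bash
curl http://127.0.0.1:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    { "role": "user", "content": "Why is the sky blue?" }
  ],
  "stream": false
}'
```

Setting `"stream": false` returns one complete JSON response instead of a stream of partial messages.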
## Embedding Request
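Likewise, a minimal embedding request; `nomic-embed-text` is an example embedding model from the Ollama library:

```bash
curl http://127.0.0.1:11434/api/embeddings -d '{
  "model": "nomic-embed-text",
  "prompt": "The sky is blue because of Rayleigh scattering"
}'
```

The response contains an `embedding` array of floats for the given prompt.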
1. Start the Ollama service: `ollama serve`
2. Check your Ollama embedding model's context length (see the example after this list).
3. Set `EMBEDDING_MAX_TEXT_LENGTH` in `Second_Me/.env` to match your embedding model's context window. This prevents chunk-length overflow and avoids server-side errors (500 Internal Server Error); a sketch follows this list.
4. Configure the custom embedding model in Settings.
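For step 2, `ollama show` prints the model's details, including its context length; `nomic-embed-text` is again just an example model name:

```bash
# Inspect the model; the "context length" field in the output is the
# upper bound that EMBEDDING_MAX_TEXT_LENGTH should not exceed
ollama show nomic-embed-text
```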
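For step 3, the `.env` entry is a single key. The value below is an assumed placeholder for a model with an 8192-token context window; use your model's actual value:

```bash
# Second_Me/.env — cap chunk length at the embedding model's context window
EMBEDDING_MAX_TEXT_LENGTH=8192
```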
When running Second Me in Docker, replace `127.0.0.1` in the API Endpoint with `host.docker.internal`, e.g. `http://host.docker.internal:11434` instead of `http://127.0.0.1:11434`.
More Details: