Support Model Config

Custom Model Endpoint Guide with Ollama

1. Prerequisites: Ollama Setup

First, download and install Ollama from the official website:

πŸ”— Download Link: https://ollama.com/download

πŸ“š Additional Resources:

Ollama GitHub repository: https://github.com/ollama/ollama

Ollama model library: https://ollama.com/library

2. Basic Ollama Commands

| Command | Description |
| --- | --- |
| `ollama pull model_name` | Download a model |
| `ollama serve` | Start the Ollama service |
| `ollama ps` | List running models |
| `ollama list` | List all downloaded models |
| `ollama rm model_name` | Remove a model |
| `ollama show model_name` | Show model details |

3. Using the Ollama API for a Custom Model

OpenAI-Compatible API

Chat Request
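
The chat example below is a minimal sketch using only Python's standard library, assuming Ollama's OpenAI-compatible API is listening on its default address (http://127.0.0.1:11434) after `ollama serve`; the model name `llama3.2` is a placeholder for whichever model you have pulled.

```python
import json
import urllib.request

# Default endpoint assumption: `ollama serve` exposes the
# OpenAI-compatible API under /v1 on port 11434.
OLLAMA_BASE_URL = "http://127.0.0.1:11434/v1"


def build_chat_request(model: str, messages: list) -> dict:
    """Build the JSON body for POST /v1/chat/completions
    (OpenAI-compatible schema: a model name plus a message list)."""
    return {"model": model, "messages": messages}


def chat(model: str, messages: list) -> dict:
    """Send a chat completion request and return the parsed JSON response."""
    body = json.dumps(build_chat_request(model, messages)).encode("utf-8")
    req = urllib.request.Request(
        f"{OLLAMA_BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example usage (requires a running `ollama serve` and a pulled model):
#   reply = chat("llama3.2", [{"role": "user", "content": "Hello!"}])
#   print(reply["choices"][0]["message"]["content"])
```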

Embedding Request
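
An embedding request follows the same pattern against the OpenAI-compatible `/v1/embeddings` endpoint; the sketch below makes the same assumptions as the chat example (default local address, and `nomic-embed-text` as a placeholder embedding model name).

```python
import json
import urllib.request

OLLAMA_BASE_URL = "http://127.0.0.1:11434/v1"


def build_embedding_request(model: str, text) -> dict:
    """Build the JSON body for POST /v1/embeddings.
    `text` may be a single string or a list of strings."""
    return {"model": model, "input": text}


def embed(model: str, text) -> list:
    """Request embeddings and return a list of vectors,
    one per input string."""
    body = json.dumps(build_embedding_request(model, text)).encode("utf-8")
    req = urllib.request.Request(
        f"{OLLAMA_BASE_URL}/embeddings",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [item["embedding"] for item in data["data"]]


# Example usage (requires `ollama pull nomic-embed-text` and `ollama serve`):
#   vectors = embed("nomic-embed-text", "Hello, world")
```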

More Details: https://github.com/ollama/ollama/blob/main/docs/openai.md

4. Configuring Custom Embedding in Second Me

  1. Start the Ollama service: `ollama serve`

  2. Check your Ollama embedding model's context length (e.g. with `ollama show model_name`).

  3. Modify EMBEDDING_MAX_TEXT_LENGTH in Second_Me/.env to match your embedding model's context window. This prevents chunk length overflow and avoids server-side errors (500 Internal Server Error).

  4. Configure Custom Embedding in Settings.

When running Second Me in a Docker environment, replace 127.0.0.1 in the API Endpoint field with host.docker.internal.
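
The Docker note above can be sketched as a small helper that picks the right host for the API Endpoint; the `in_docker` flag and default port 11434 reflect the standard Ollama setup, and the function name is purely illustrative.

```python
def ollama_api_endpoint(in_docker: bool, port: int = 11434) -> str:
    """Choose the Ollama API base URL. Inside a Docker container,
    127.0.0.1 refers to the container itself, so the host machine's
    Ollama service must be reached via host.docker.internal instead."""
    host = "host.docker.internal" if in_docker else "127.0.0.1"
    return f"http://{host}:{port}/v1"


# Example usage:
#   ollama_api_endpoint(False)  # native install
#   ollama_api_endpoint(True)   # Second Me running in Docker
```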
