
Deployment


📊 Model Deployment Memory and Supported Model Size Reference Guide

Note: "B" in the table represents "billion parameters model". Data shown are examples only; actual supported model sizes may vary depending on system optimization, deployment environment, and other hardware/software conditions.

| Memory (GB) | Docker Deployment (Windows/Linux) | Docker Deployment (Mac) | Integrated Setup (Windows/Linux) | Integrated Setup (Mac) |
| --- | --- | --- | --- | --- |
| 8 | ~0.8B (example) | ~0.4B (example) | ~1.0B (example) | ~0.6B (example) |
| 16 | 1.5B (example) | 0.5B (example) | ~2.0B (example) | ~0.8B (example) |
| 32 | ~2.8B (example) | ~1.2B (example) | ~3.5B (example) | ~1.5B (example) |

Note: Models below 0.5B may not provide satisfactory performance for complex tasks. We're continuously improving cross-platform support; please submit an issue for feedback or compatibility problems on different operating systems.

MLX Acceleration: Mac M-series users can use MLX to run larger models (CLI-only).
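As a rough illustration, the Docker-on-Windows/Linux column of the example table above can be encoded as a small shell lookup. The thresholds and sizes are the example values from the table, not guarantees, and `suggest_model_size` is a hypothetical helper name:

```shell
#!/bin/sh
# suggest_model_size MEM_GB
# Prints the approximate supported model size for Docker deployment on
# Windows/Linux, using the example values from the table above.
suggest_model_size() {
    if [ "$1" -ge 32 ]; then echo "~2.8B"
    elif [ "$1" -ge 16 ]; then echo "1.5B"
    elif [ "$1" -ge 8 ]; then echo "~0.8B"
    else echo "below the 8 GB minimum"
    fi
}

suggest_model_size 16   # prints 1.5B
```

The same shape applies to the other columns; only the example values change.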

🐳 Option 1: Docker Setup

Note: Docker setup on Mac M-Series chips has 25-30% performance overhead compared to the integrated setup, but offers an easier installation process.

Prerequisites

  • Docker and Docker Compose installed on your system

    • For Docker installation, see Get Docker

    • For Docker Compose installation, see Install Docker Compose

  • For Windows Users: You can use MinGW to run make commands. You may need to modify the Makefile by replacing Unix-specific commands with Windows-compatible alternatives.

  • Memory Usage Settings (important):

    • Configure these settings in Docker Desktop (macOS) or Docker Desktop (Windows) at: Dashboard -> Settings -> Resources

    • Make sure to allocate sufficient memory resources (at least 8GB recommended)
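A quick way to sanity-check the allocation is to compare Docker's reported memory against the 8 GB minimum. The helper below is a minimal sketch (`enough_memory` is a hypothetical name); the commented `docker info` call assumes Docker is running:

```shell
#!/bin/sh
# enough_memory BYTES: succeeds if BYTES is at least 8 GiB,
# the minimum recommended allocation for this setup.
enough_memory() {
    [ "$1" -ge $((8 * 1024 * 1024 * 1024)) ]
}

# With Docker running, you could feed it the real figure:
#   enough_memory "$(docker info --format '{{.MemTotal}}')" && echo "allocation OK"
enough_memory 17179869184 && echo "16 GiB: OK"
enough_memory 4294967296 || echo "4 GiB: increase the allocation"
```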

Setup Steps

  1. Clone the repository

git clone git@github.com:Mindverse/Second-Me.git
cd Second-Me
  2. Start the containers

make docker-up
  3. After starting the service (either with local setup or Docker), open your browser and visit:

http://localhost:3000
  4. View help and more commands

make help
  5. For custom Ollama model configuration, please refer to: Custom Model Config (Ollama)
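Once `make docker-up` finishes, you can poll the web UI instead of refreshing by hand. This is a generic `curl` sketch (`wait_for` is a hypothetical helper); the URL is the default from the steps above:

```shell
#!/bin/sh
# wait_for URL TRIES: poll URL once per second until it responds
# successfully or TRIES attempts are exhausted.
wait_for() {
    url=$1
    tries=${2:-30}
    i=0
    while [ "$i" -lt "$tries" ]; do
        if curl -fsS -o /dev/null "$url" 2>/dev/null; then
            echo "service is up"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "timed out waiting for $url"
    return 1
}

# wait_for http://localhost:3000 30
```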

🚀 Option 2: Integrated Setup (Non-Docker)

Note: The Integrated Setup provides the best performance, especially for larger models, as it runs directly on your host system without containerization overhead.

Prerequisites

  • Python 3.12+ installed on your system (using uv)

  • Node.js 23+ and npm installed

  • Basic build tools (cmake, make, etc.)
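Before running the setup, it can save time to confirm the toolchain meets the minimums above. A minimal sketch, assuming `sort -V` (GNU-style version sort) is available and using a hypothetical `version_ge` helper:

```shell
#!/bin/sh
# version_ge A B: succeeds if dotted version A >= B (relies on sort -V).
version_ge() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Compare against the documented minimums (Python 3.12+, Node.js 23+):
version_ge "3.12.4" "3.12" && echo "python: ok"
version_ge "23.1.0" "23"   && echo "node: ok"
version_ge "3.11.9" "3.12" || echo "python: too old"

# In practice you would feed in the installed versions, e.g.:
#   version_ge "$(python3 -V | cut -d' ' -f2)" "3.12"
```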

Setup Steps

  1. Clone the repository

git clone git@github.com:Mindverse/Second-Me.git
cd Second-Me
  2. Set up the Python environment using uv

# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh

# Create virtual environment with Python 3.12
uv venv --python 3.12

# Activate the virtual environment
source .venv/bin/activate  # Unix/macOS
# or
# .venv\Scripts\activate  # Windows
  3. Install dependencies

make setup
  4. Start all services

make restart
  5. After the services have started, open your browser and visit:

http://localhost:3000

💡 Advantages: This method offers better performance than Docker on Mac and Linux systems while still providing a simple setup process. It installs directly on your host system without containerization overhead. (Windows has not been tested.)
