# Deployment

## 📊 Model Deployment Memory and Supported Model Size Reference Guide

*Note: "B" denotes billions of parameters. Data shown are examples only; the actual supported model size may vary depending on system optimization, deployment environment, and other hardware/software conditions.*

| Memory (GB) | Docker Deployment (Windows/Linux) | Docker Deployment (Mac) | Integrated Setup (Windows/Linux) | Integrated Setup (Mac) |
| ----------- | --------------------------------- | ----------------------- | -------------------------------- | ---------------------- |
| 8           | \~0.8B (example)                  | \~0.4B (example)        | \~1.0B (example)                 | \~0.6B (example)       |
| 16          | 1.5B (example)                    | 0.5B (example)          | \~2.0B (example)                 | \~0.8B (example)       |
| 32          | \~2.8B (example)                  | \~1.2B (example)        | \~3.5B (example)                 | \~1.5B (example)       |

> **Note**: Models below 0.5B may not provide satisfactory performance for complex tasks. We are continuously improving cross-platform support; please [submit an issue](https://github.com/mindverse/Second-Me/issues/new) to report feedback or compatibility problems on any operating system.

> **MLX Acceleration**: Mac M-series users can use [MLX](https://github.com/mindverse/Second-Me/tree/master/lpm_kernel/L2/mlx_training) to run larger models (CLI-only).

## 🐳 Option 1: Docker Setup

> **Note**: Docker setup on Mac M-series chips carries a 25-30% performance overhead compared to the integrated setup, but offers an easier installation process.

### **Prerequisites**

* Docker and Docker Compose installed on your system
  * For Docker installation: [Get Docker](https://docs.docker.com/get-docker/)
  * For Docker Compose installation: [Install Docker Compose](https://docs.docker.com/compose/install/)
* For Windows users: you can use [MinGW](https://www.mingw-w64.org/) to run `make` commands. You may need to modify the Makefile, replacing Unix-specific commands with Windows-compatible alternatives.
* Memory usage settings (important):
  * Configure these in Docker Desktop (macOS or Windows) under Dashboard -> Settings -> Resources
  * Make sure to allocate sufficient memory (at least 8 GB recommended)
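Docker Desktop's effective memory limit is whatever you set in the Resources panel above. As a rough sanity check that the host itself has enough RAM, you can read `/proc/meminfo`. This is a Linux-only sketch and not part of the project; on macOS and Windows, check the Docker Desktop Resources panel instead:

```shell
# Rough sanity check of total host RAM against the 8 GB recommendation.
# Linux-only sketch: Docker's actual limit is set in Docker Desktop under
# Dashboard -> Settings -> Resources, not here.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
mem_gb=$((mem_kb / 1024 / 1024))
echo "Total host RAM: ${mem_gb} GB"
if [ "$mem_gb" -ge 8 ]; then
  echo "Meets the 8 GB recommendation"
else
  echo "Below the 8 GB recommendation"
fi
```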

### **Setup Steps**

1. Clone the repository

```bash
git clone git@github.com:Mindverse/Second-Me.git
cd Second-Me
```

2. Start the containers

```bash
make docker-up
```

3. After the containers have started, open your browser and visit:

```bash
http://localhost:3000
```

4. View help and more commands

```bash
make help
```
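`make help` lists the available targets. If `make` itself is not installed (for example, on Windows without MinGW), the target names can also be read straight out of the Makefile with `grep`. A generic sketch, demonstrated here against a throwaway sample file; run the same `grep` against the Makefile in the repository root:

```shell
# Extract target names from a Makefile without running make.
# The sample file below is illustrative only; the real target names
# come from Second-Me's own Makefile.
printf 'docker-up:\n\tdocker compose up -d\nhelp:\n\t@echo "see README"\n' > /tmp/Makefile.sample

# A target line starts at column 1 and ends its name with a colon.
targets=$(grep -E '^[A-Za-z][A-Za-z0-9_-]*:' /tmp/Makefile.sample | cut -d: -f1)
echo "$targets"
```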

5. For custom Ollama model configuration, please refer to: Custom Model Config (Ollama)

## 🚀 Option 2: Integrated Setup (Non-Docker)

> **Note**: The Integrated Setup provides the best performance, especially for larger models, because it runs directly on your host system without containerization overhead.

### **Prerequisites**

* Python 3.12+ installed on your system (managed via uv)
* Node.js 23+ and npm installed
* Basic build tools (cmake, make, etc.)
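You can sanity-check the version requirements before continuing. A minimal POSIX-shell sketch that uses `sort -V` for the comparison; the version strings passed in below are illustrative, so substitute the output of `python3 --version` and `node --version` on your machine:

```shell
# version_ge A B: succeed if version A >= version B (relies on sort -V).
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# Example checks against the documented minimums (versions are examples).
version_ge "3.12.4" "3.12" && echo "Python version OK"
version_ge "23.1.0" "23"   && echo "Node.js version OK"
```

In practice you would feed it the live versions, e.g. `version_ge "$(python3 -V | awk '{print $2}')" "3.12"`.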

### **Setup Steps**

1. Clone the repository

```bash
git clone git@github.com:Mindverse/Second-Me.git
cd Second-Me
```

2. Setup Python Environment Using uv

```bash
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh

# Create virtual environment with Python 3.12
uv venv --python 3.12

# Activate the virtual environment
source .venv/bin/activate  # Unix/macOS
# or
# .venv\Scripts\activate  # Windows
```

3. Install dependencies

```bash
make setup
```

4. Start all services

```bash
make restart
```

5. After services are started, open your browser and visit:

```bash
http://localhost:3000
```

> 💡 **Advantages**: This method offers better performance than Docker on Mac and Linux systems while still providing a simple setup process, since it installs directly on your host system without containerization overhead. (Not tested on Windows.)
