# Installation
Get Open Brain running in ~15 minutes.
## Prerequisites
| Tool | Purpose | Install |
|---|---|---|
| Python 3.10+ | Runs the MCP server | python.org |
| Docker Desktop | Hosts PostgreSQL + pgvector | docker.com |
| Ollama | Local embeddings (free) | ollama.com |
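Before continuing, you can confirm each tool is installed and on your `PATH` (these checks assume default install locations; any missing tool is simply reported):

```shell
# Report the version of each prerequisite, or note that it is missing.
for tool in python3 docker ollama; do
  if command -v "$tool" >/dev/null 2>&1; then
    "$tool" --version
  else
    echo "$tool not found"
  fi
done
```

Make sure the reported Python version is 3.10 or newer.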
## Step 1: Clone and create a virtual environment
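A typical sequence looks like the following — the repository URL is a placeholder; substitute the actual Open Brain repo:

```shell
# Clone the repo (placeholder URL) and create an isolated environment.
git clone https://github.com/<your-org>/open-brain.git
cd open-brain
python3 -m venv .venv
```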
Activate it:
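Assuming the environment was created as `.venv`:

```shell
# macOS / Linux
source .venv/bin/activate

# Windows (PowerShell)
.venv\Scripts\Activate.ps1
```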
Install dependencies:
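Assuming the project ships a `requirements.txt` (if it uses `pyproject.toml` instead, `pip install -e .` is the equivalent):

```shell
pip install -r requirements.txt
```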
## Step 2: Configure environment
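On macOS/Linux, copy the template (this assumes the repo provides a `.env.example`, as the Windows note implies):

```shell
cp .env.example .env
```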
On Windows: `copy .env.example .env`

Edit `.env`. The defaults work out of the box for local Ollama + Docker.
### WSL + Windows Ollama
If you run Ollama on Windows and the MCP server from WSL, enable mirrored networking in `C:\Users\<USERNAME>\.wslconfig`:
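Mirrored networking is enabled with the `networkingMode` setting (requires Windows 11 22H2 or later):

```ini
[wsl2]
networkingMode=mirrored
```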
Then run `wsl --shutdown` and restart WSL.
## Step 3: Start PostgreSQL
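If the repo ships a compose file, `docker compose up -d` is enough. Otherwise, a plain `docker run` against the official pgvector image works — the container name matches what this step expects, but the credentials and database name here are placeholders you should align with your `.env`:

```shell
docker run -d \
  --name open-brain-db \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=open_brain \
  -p 5432:5432 \
  pgvector/pgvector:pg16
```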
Verify with `docker ps`. You should see `open-brain-db` running.
## Step 4: Initialize the database
This creates the `memories` table, the pgvector extension, the HNSW index, and all supporting indexes.
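The exact entry point depends on the repo; a common shape is a module invocation (the module name below is hypothetical), followed by a quick check inside the container:

```shell
# Hypothetical init entry point -- check the repo for the real one.
python -m open_brain.init_db

# Confirm the memories table exists (adjust user/database to match your .env).
docker exec -it open-brain-db psql -U postgres -d open_brain -c "\d memories"
```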
## Step 5: Pull the embedding model
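The embedding model referenced elsewhere in this guide is `nomic-embed-text`:

```shell
ollama pull nomic-embed-text
```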
Optional but recommended. Pull a metadata LLM for richer extraction:
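Any small instruct model works here; `llama3.2` is only an example:

```shell
ollama pull llama3.2   # example metadata LLM; substitute your preferred model
```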
### Dual-model setup
If you use both `nomic-embed-text` and a metadata model, start Ollama with `OLLAMA_MAX_LOADED_MODELS=2` to avoid repeated model evictions.
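For example, when launching Ollama manually:

```shell
# macOS / Linux
OLLAMA_MAX_LOADED_MODELS=2 ollama serve

# Windows (PowerShell)
$env:OLLAMA_MAX_LOADED_MODELS = "2"; ollama serve
```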
## Step 6: Verify
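The launch command depends on how the project exposes its entry point; a typical pattern (module name hypothetical — check the repo's README or `pyproject.toml` scripts) is:

```shell
python -m open_brain.server
```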
The server should start without errors. Now wire it into your AI tools.