A shared persistent memory for every AI tool you use
MCP Server • pgvector • Local LLMs • 100% Local • ~$0/month
You're the only one remembering anything. Your AI tools should be doing that for you.
One shared brain that every agent reads and writes to, automatically, silently, without you lifting a finger.
```
Your thought / conversation
        |
        v
[remember / capture_context]
        |
        +---> Ollama --> vector embedding (768 dims)
        +---> LLM / heuristic --> metadata (type, people, topics)
        +---> project scope --> filter by project
        |
        v
PostgreSQL + pgvector (annotations, ratings, access tracking)
        |
        v
MCP Server (stdio / HTTP) -- 19 tools
        |
   +-------+--------+--------+--------+
   v       v        v        v        v
Claude  Cursor  Windsurf  Copilot  ChatGPT
```
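The capture path in the diagram can be sketched roughly as follows. This is a minimal illustration, not the real implementation: the 768-dim embedding size, the `remember`/`capture_context` naming, and the metadata fields come from the diagram; the `Memory` dataclass and the `embed`/`extract` interfaces are hypothetical stand-ins for the Ollama embedding model and the LLM/heuristic extractor.

```python
from dataclasses import dataclass

EMBED_DIM = 768  # embedding size shown in the diagram


@dataclass
class Memory:
    text: str
    embedding: list[float]
    metadata: dict
    project: str


def capture(text: str, project: str, embed, extract) -> Memory:
    """Run one thought through the capture pipeline (sketch).

    `embed` stands in for the Ollama embedding model, `extract` for the
    LLM / heuristic metadata extractor -- both hypothetical interfaces.
    """
    vec = embed(text)  # -> 768-dim vector
    assert len(vec) == EMBED_DIM, "embedding size must match the pgvector column"
    meta = extract(text)  # -> {"type": ..., "people": [...], "topics": [...]}
    return Memory(text=text, embedding=vec, metadata=meta, project=project)


# Stub models so the sketch runs without a GPU:
fake_embed = lambda t: [0.0] * EMBED_DIM
fake_extract = lambda t: {"type": "note", "people": [], "topics": ["auth"]}

m = capture("JWT validation fails on expired tokens", "open-brain",
            fake_embed, fake_extract)
```

The stored `Memory` would then be written to the PostgreSQL + pgvector table, scoped by project.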
Agents capture without being asked. You never say "remember this."
`capture_context`
Agents search the brain before starting tasks. Prior context surfaces automatically.
`search("authentication token validation")`
"Why does my AI suddenly remember my project?" Because the brain captured it automatically.
GPU-aware processing that avoids model thrashing on local hardware
| Scenario | Naive Approach | Open Brain |
|---|---|---|
| 5 memories, dual model | ~200s (constant GPU swaps) | ~15s (batched) |
| Model loads per capture | 2 × N memories | 2 total |
| Embedding + metadata | Interleaved (thrashing) | Phased (embed all, then extract all) |
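The table's load counts fall out of the ordering alone. A toy sketch of the two schedules (the load counting is illustrative; the real scheduler drives Ollama model loads on the GPU):

```python
def naive_loads(n_memories: int) -> int:
    """Interleaved: every capture loads the embedding model, then swaps
    to the metadata model -- 2 loads per memory, constant GPU thrashing."""
    loads = 0
    for _ in range(n_memories):
        loads += 1  # load embedding model
        loads += 1  # swap to metadata model
    return loads


def phased_loads(n_memories: int) -> int:
    """Batched: load the embedding model once and embed everything,
    then load the metadata model once and extract everything."""
    loads = 1  # embedding model, all memories
    loads += 1  # metadata model, all memories
    return loads


# naive_loads(5) == 10 (the "2 x N" row), phased_loads(5) == 2 (the "2 total" row)
```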
Real-time monitoring without polling. Event-driven via PostgreSQL LISTEN/NOTIFY.
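A minimal sketch of that event-driven loop with `LISTEN`/`NOTIFY`: the process blocks on the database socket and wakes only when PostgreSQL pushes a notification, with no polling queries. The channel name, payload shape, and connection string are hypothetical, and the live loop needs `psycopg2` plus a running PostgreSQL.

```python
import json
import select


def parse_event(payload: str) -> dict:
    """Decode a NOTIFY payload (assumed here to be JSON) into an event dict."""
    return json.loads(payload)


def watch(conn, channel: str = "brain_events"):
    """Yield memory events as PostgreSQL delivers them -- no polling."""
    cur = conn.cursor()
    cur.execute(f"LISTEN {channel};")
    conn.commit()
    while True:
        # Block on the socket until the server has something for us.
        if select.select([conn], [], [], 60) == ([], [], []):
            continue  # timeout: nothing happened, keep waiting
        conn.poll()
        while conn.notifies:
            note = conn.notifies.pop(0)
            yield parse_event(note.payload)


if __name__ == "__main__":
    import psycopg2
    conn = psycopg2.connect("dbname=open_brain")  # hypothetical DSN
    for event in watch(conn):
        print("memory event:", event)
```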
Double-click the desktop shortcut; everything auto-starts as needed.
Your AI tools should never forget.
Docs: shep-engineering.github.io/open-brain
GitHub: github.com/shep-engineering/open-brain
Stack: Python • MCP • pgvector • Ollama • Docker
The user's only job is to work.
The brain's job is to remember everything.