Docker Quick Start

Introduction

This guide gets the complete NeuronDB ecosystem running in under 5 minutes using Docker Compose. The ecosystem includes:

  • NeuronDB - PostgreSQL extension with vector search, ML inference, and GPU acceleration
  • NeuronAgent - REST API and WebSocket agent runtime with long-term memory
  • NeuronMCP - Model Context Protocol server with 100+ tools for MCP-compatible clients
  • NeuronDesktop - Unified web interface for managing all components

Why Docker? Docker provides the easiest and most consistent setup, with automatic networking, configuration, and GPU support across platforms.

Prerequisites

Before starting, verify you have:

  • Docker 20.10+ and Docker Compose 2.0+ installed
  • 4GB+ RAM available (8GB recommended)
  • Ports available: 5433 (PostgreSQL), 8080 (NeuronAgent), 8081 (NeuronDesktop API), 3000 (NeuronDesktop UI)
  • Optional: NVIDIA Docker runtime (CUDA), ROCm drivers (AMD), or Metal support (macOS/Apple Silicon)

Verify Docker installation

docker --version
docker compose version
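
If you plan to use a GPU profile, you can also confirm that the host sees your GPU before starting. Both commands below are the standard vendor tools (NVIDIA and AMD ROCm respectively).

Optional: verify GPU drivers on the host

# NVIDIA
nvidia-smi

# AMD (ROCm)
rocm-smi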

Quick Start (5 minutes)

Once the repository is cloned, the complete NeuronDB ecosystem comes up with a single docker compose command:

Step 1: Clone the Repository

Clone repository

git clone https://github.com/neurondb-ai/neurondb.git
cd neurondb

Step 2: Start All Services

Start ecosystem (CPU profile)

# Start all services with CPU profile (default)
docker compose up -d

This command will:

  • Build all Docker images (first time only, takes 5-10 minutes)
  • Start PostgreSQL with NeuronDB extension
  • Start NeuronAgent (REST API server on port 8080)
  • Start NeuronMCP (MCP protocol server)
  • Start NeuronDesktop (web UI on port 3000, API on 8081)
  • Configure networking between all components
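
If you change the sources later or want to watch the build and startup output, the standard Docker Compose options below are useful.

Rebuild and follow startup output (optional)

# Force a rebuild of the images before starting
docker compose up -d --build

# Follow the startup logs from all services
docker compose logs -f --tail=100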

Step 3: Check Service Status

Verify all services are running

docker compose ps

You should see five services running with "healthy" status:

  • neurondb-cpu - PostgreSQL with NeuronDB extension
  • neuronagent - REST API server
  • neurondb-mcp - MCP protocol server
  • neurondesk-api - NeuronDesktop API server
  • neurondesk-frontend - NeuronDesktop web interface

Wait 30-60 seconds for all services to initialize and show "healthy" status.
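
If you are scripting the startup, a short polling loop against NeuronAgent's health endpoint (documented below) avoids guessing at a fixed delay. This is a minimal sketch; adjust the timeout to your environment.

Wait for services to become healthy (optional)

# Poll NeuronAgent's health endpoint for up to ~2 minutes
for i in $(seq 1 60); do
  curl -sf http://localhost:8080/health && break
  sleep 2
done

# Confirm the health states reported by Docker Compose
docker compose ps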

Verify Services

Run these quick verification commands to confirm everything is working:

Test 1: NeuronDB Extension

Verify NeuronDB extension

docker compose exec neurondb psql -U neurondb -d neurondb -c "SELECT neurondb.version();"

Expected output: 1.0.0

Test 2: NeuronAgent REST API

Check NeuronAgent health

curl http://localhost:8080/health

Expected output: {"status":"ok"}

Test 3: NeuronDesktop API

Check NeuronDesktop API

curl http://localhost:8081/health

Expected output: JSON response with status information

Test 4: First Vector Query

Create extension and test vector search

# Connect to the database and run the test statements
docker compose exec -T neurondb psql -U neurondb -d neurondb <<EOF

-- Create extension
CREATE EXTENSION IF NOT EXISTS neurondb;

-- Create a test table
CREATE TABLE IF NOT EXISTS documents (
  id SERIAL PRIMARY KEY,
  content TEXT,
  embedding vector(3)  -- 3-dimensional only for this quick test; use your model's dimension (e.g., 1536) in practice
);

-- Insert sample document
INSERT INTO documents (content, embedding)
VALUES ('Hello, NeuronDB!', '[0.1, 0.2, 0.3]'::vector)
ON CONFLICT DO NOTHING;

-- Verify data
SELECT id, content FROM documents;

EOF
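
With sample data in place, you can try a similarity search. The sketch below assumes NeuronDB exposes a pgvector-style <-> distance operator on the vector type; check the NeuronDB vector search documentation for the exact operators and index types.

Similarity search (sketch)

docker compose exec -T neurondb psql -U neurondb -d neurondb <<EOF
-- Return the documents closest to a 3-dimensional query vector
SELECT id, content, embedding <-> '[0.1, 0.2, 0.25]'::vector AS distance
FROM documents
ORDER BY distance
LIMIT 5;
EOF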

GPU Profiles

The Docker Compose setup supports multiple GPU profiles for accelerated operations. Choose the profile that matches your hardware:

CPU Profile (Default)

CPU-only setup

docker compose up -d

Uses port 5433 for PostgreSQL.

CUDA Profile (NVIDIA GPU)

CUDA GPU acceleration

docker compose --profile cuda up -d

Requires NVIDIA Docker runtime. Uses port 5434 for PostgreSQL. See CUDA GPU Support for setup details.

ROCm Profile (AMD GPU)

ROCm GPU acceleration

docker compose --profile rocm up -d

Requires ROCm drivers. Uses port 5435 for PostgreSQL. See ROCm GPU Support for setup details.

Metal Profile (Apple Silicon)

Metal GPU acceleration (macOS)

docker compose --profile metal up -d

For macOS with Apple Silicon (M1/M2/M3). Uses port 5436 for PostgreSQL. See Metal GPU Support for setup details.

Note: You can run multiple profiles simultaneously on different ports. For example, run both CPU and CUDA profiles side-by-side for testing.
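
Each profile publishes PostgreSQL on its own host port, so you can target a specific instance explicitly. The example below checks the CUDA instance on port 5434 and assumes it uses the same neurondb/neurondb credentials as the CPU profile; it requires a psql client on the host.

Verify a specific profile (CUDA example)

psql postgresql://neurondb:neurondb@localhost:5434/neurondb -c "SELECT neurondb.version();"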

Service URLs & Access

After starting services, access them at:

Service             URL / Connection                                          Description
NeuronDB            postgresql://neurondb:neurondb@localhost:5433/neurondb    PostgreSQL with NeuronDB extension
NeuronAgent         http://localhost:8080                                     REST API and WebSocket endpoints
NeuronMCP           stdio (JSON-RPC 2.0)                                      MCP protocol server for MCP clients
NeuronDesktop UI    http://localhost:3000                                     Web-based management interface
NeuronDesktop API   http://localhost:8081                                     Backend API for NeuronDesktop
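
For ad-hoc queries you can connect to NeuronDB from the host with any PostgreSQL client, for example psql (if installed locally), using the connection string above:

Connect from the host

psql postgresql://neurondb:neurondb@localhost:5433/neurondb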

Common Commands

Service management

# Stop all services (keep data)
docker compose down

# Stop and remove all data volumes
docker compose down -v

# View logs from all services
docker compose logs -f

# View logs from specific service
docker compose logs -f neurondb
docker compose logs -f neuronagent

# Restart a specific service
docker compose restart neurondb
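
A simple backup approach is to run pg_dump inside the NeuronDB container and redirect the output to the host. This is a minimal sketch; adapt the file names and pg_dump options to your needs.

Backup and restore (sketch)

# Dump the neurondb database to a file on the host
docker compose exec -T neurondb pg_dump -U neurondb neurondb > neurondb_backup.sql

# Restore the dump later
docker compose exec -T neurondb psql -U neurondb -d neurondb < neurondb_backup.sql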

Next Steps

Now that your ecosystem is running, explore these resources:

Troubleshooting

Having issues? Check these common problems:

Services Won't Start

Check logs

docker compose logs neurondb
docker compose logs neuronagent
docker compose logs neurondb-mcp

Port Already in Use

If ports 5433, 8080, 8081, or 3000 are in use, modify docker-compose.yml or stop conflicting services.
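
To find out what is holding a port, standard tools such as lsof or ss work; then either stop that process or change the published port in docker-compose.yml.

Find the process using a port

# macOS / Linux
lsof -i :5433

# Linux
ss -ltnp | grep 5433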

Out of Memory

Ensure Docker has at least 4GB RAM allocated (8GB+ recommended). Check Docker Desktop → Settings → Resources.
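
To see how much memory each container is actually using:

Inspect container memory usage

docker stats --no-stream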

GPU Not Detected

For CUDA: Verify NVIDIA Docker runtime is installed. For ROCm: Check that ROCm drivers are available. See GPU Documentation for detailed setup instructions.
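
You can also check whether Docker itself can reach the GPU. The CUDA image tag below is only an example; substitute any CUDA base image you already have.

Check GPU visibility from Docker

# Confirm the nvidia runtime is registered with Docker
docker info | grep -i runtimes

# Test GPU passthrough (example image tag)
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi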

For more help, see the Troubleshooting Guide or check service logs with docker compose logs.