Docker Configuration Guide
This guide provides a deep dive into the Docker setup for self-hosted Papr Memory, including architecture, configuration, and customization options.
Overview
The Papr Memory stack consists of six core services orchestrated via Docker Compose: the Papr Memory API, MongoDB, Neo4j, Qdrant, Redis, and Parse Server, plus an optional Parse Dashboard.
Service Architecture
Papr Memory API (FastAPI)
Port: 5001
Main API server that handles all memory operations.
papr-memory:
build: .
ports:
- "5001:5001"
environment:
- PORT=5001
- MONGODB_URL=mongodb://mongodb:27017/papr_memory
- NEO4J_URL=bolt://neo4j:7687
- QDRANT_URL=http://qdrant:6333
- REDIS_URL=redis://redis:6379
- PARSE_SERVER_URL=http://parse-server:1337
depends_on:
- mongodb
- neo4j
- qdrant
- redis
- parse-server
restart: unless-stopped
Key Responsibilities:
- Memory CRUD operations
- Vector search via Qdrant
- Graph traversal via Neo4j
- Metadata storage via MongoDB
- Caching via Redis
- User authentication via Parse
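Note that `depends_on` as written only controls start order, not readiness: the API container may come up before MongoDB accepts connections. If you define healthchecks for the dependencies (see the Health Checks section), Compose can gate startup on actual readiness. A sketch, using the long-form `depends_on` syntax from the Compose specification:

```yaml
# Sketch only: service_healthy requires a healthcheck on that dependency.
papr-memory:
  build: .
  depends_on:
    mongodb:
      condition: service_healthy   # wait until mongosh ping succeeds
    neo4j:
      condition: service_healthy
    redis:
      condition: service_started   # no healthcheck defined: start order only
```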
MongoDB
Port: 27017
Image: mongo:8.0.12
Document database for metadata storage.
mongodb:
image: mongo:8.0.12
ports:
- "27017:27017"
volumes:
- mongodb_data:/data/db
environment:
- MONGO_INITDB_DATABASE=papr_memory
restart: unless-stopped
Stores:
- Memory metadata
- User profiles
- Document metadata
- Search history
- Custom schemas
- Parse Server data
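The production example below mounts a custom `./mongodb/mongod.conf`. A minimal sketch of that file (mongod config is YAML; the values here are illustrative assumptions, tune them to your host):

```yaml
# mongodb/mongod.conf -- illustrative sketch
storage:
  dbPath: /data/db
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1        # keep the WiredTiger cache below the container mem_limit
net:
  port: 27017
  bindIp: 0.0.0.0           # container-internal; host exposure is restricted in compose
security:
  authorization: enabled    # matches the --auth flag in the compose command
```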
Production Recommendations:
mongodb:
image: mongo:8.0.12
ports:
- "127.0.0.1:27017:27017" # Bind to localhost only
volumes:
- mongodb_data:/data/db
- ./mongodb/mongod.conf:/etc/mongod.conf # Custom config
environment:
- MONGO_INITDB_ROOT_USERNAME=admin
- MONGO_INITDB_ROOT_PASSWORD=${MONGO_PASSWORD}
- MONGO_INITDB_DATABASE=papr_memory
command: ["--auth", "--bind_ip_all"]
restart: unless-stopped
mem_limit: 2g
cpus: 2
Neo4j
Ports: 7474 (HTTP), 7687 (Bolt)
Image: neo4j:2025.10.1
Graph database for knowledge graph storage.
neo4j:
image: neo4j:2025.10.1
ports:
- "7474:7474" # HTTP
- "7687:7687" # Bolt
volumes:
- neo4j_data:/data
- neo4j_logs:/logs
environment:
- NEO4J_AUTH=neo4j/password
- NEO4J_server_memory_pagecache_size=1G
- NEO4J_server_memory_heap_initial__size=1G
- NEO4J_server_memory_heap_max__size=2G
restart: unless-stopped
Stores:
- Entity nodes
- Relationship edges
- Knowledge graphs
- Schema definitions
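The production example below allowlists `apoc.*` procedures and mounts a `neo4j_plugins` volume, but APOC itself still has to be installed. The official Neo4j image can fetch bundled plugins at startup via the `NEO4J_PLUGINS` variable; a sketch:

```yaml
neo4j:
  environment:
    - 'NEO4J_PLUGINS=["apoc"]'   # image downloads APOC into /plugins on startup
  volumes:
    - neo4j_plugins:/plugins     # persist the plugin across restarts
```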
Production Recommendations:
neo4j:
image: neo4j:2025.10.1
ports:
- "127.0.0.1:7474:7474"
- "127.0.0.1:7687:7687"
volumes:
- neo4j_data:/data
- neo4j_logs:/logs
- neo4j_plugins:/plugins
environment:
- NEO4J_AUTH=neo4j/${NEO4J_PASSWORD}
- NEO4J_server_memory_pagecache_size=4G
- NEO4J_server_memory_heap_initial__size=2G
- NEO4J_server_memory_heap_max__size=4G
- NEO4J_dbms_security_procedures_unrestricted=apoc.*
- NEO4J_dbms_security_procedures_allowlist=apoc.*
restart: unless-stopped
mem_limit: 6g
cpus: 4
Qdrant
Port: 6333
Image: qdrant/qdrant:v1.16.0
Vector database for semantic search.
qdrant:
image: qdrant/qdrant:v1.16.0
ports:
- "6333:6333"
volumes:
- qdrant_data:/qdrant/storage
restart: unless-stopped
Stores:
- Vector embeddings (1536 dimensions for OpenAI)
- Memory content vectors
- Entity vectors
- Custom collection vectors
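The production example below mounts `./qdrant/config.yaml` into the container as `production.yaml`. A minimal sketch of that file (keys follow Qdrant's configuration format; the values are assumptions):

```yaml
# qdrant/config.yaml -- illustrative sketch
service:
  http_port: 6333
  grpc_port: 6334
storage:
  # keep payloads on disk to reduce RAM pressure for large collections
  on_disk_payload: true
```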
Production Recommendations:
qdrant:
image: qdrant/qdrant:v1.16.0
ports:
- "127.0.0.1:6333:6333"
volumes:
- qdrant_data:/qdrant/storage
- ./qdrant/config.yaml:/qdrant/config/production.yaml
environment:
- QDRANT__SERVICE__GRPC_PORT=6334
- QDRANT__SERVICE__HTTP_PORT=6333
restart: unless-stopped
mem_limit: 4g
cpus: 4
Redis
Port: 6379
Image: redis:7-alpine
In-memory cache for performance optimization.
redis:
image: redis:7-alpine
ports:
- "6379:6379"
volumes:
- redis_data:/data
restart: unless-stopped
Caches:
- Frequently accessed memories
- Search results
- Predictive context
- Session data
- Rate limiting data
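The production example below passes all options on the command line. The same settings can live in a mounted `redis.conf` instead; a file-based sketch of those flags (keep `requirepass` on the command line or in an env-substituted file so the password stays out of version control):

```
# redis/redis.conf -- file-based equivalent of the command-line flags below
appendonly yes                  # AOF persistence in addition to RDB snapshots
maxmemory 2gb                   # cap the cache at 2 GB
maxmemory-policy allkeys-lru    # evict least-recently-used keys first
```

Mount it into the container (the official image documents `/usr/local/etc/redis/redis.conf` as the conventional path) and start with `redis-server /usr/local/etc/redis/redis.conf`.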
Production Recommendations:
redis:
image: redis:7-alpine
ports:
- "127.0.0.1:6379:6379"
volumes:
- redis_data:/data
command: >
redis-server
--appendonly yes
--maxmemory 2gb
--maxmemory-policy allkeys-lru
--requirepass ${REDIS_PASSWORD}
restart: unless-stopped
mem_limit: 2g
Parse Server
Port: 1337
Image: parseplatform/parse-server:8.4.0
Authentication and user management backend.
parse-server:
image: parseplatform/parse-server:8.4.0
ports:
- "1337:1337"
environment:
- PARSE_SERVER_APPLICATION_ID=papr-oss-app-id
- PARSE_SERVER_MASTER_KEY=papr-oss-master-key
- PARSE_SERVER_DATABASE_URI=mongodb://mongodb:27017/papr_memory
- PARSE_SERVER_URL=http://parse-server:1337/parse
depends_on:
- mongodb
restart: unless-stopped
Handles:
- User authentication
- Session management
- API key validation
- Role-based access control
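The production example below sets `PARSE_SERVER_CLOUD=/parse/cloud/main.js`; that path must actually exist inside the container, typically via a bind mount. A sketch, assuming your cloud code lives in a local `./cloud` directory:

```yaml
parse-server:
  volumes:
    - ./cloud:/parse/cloud:ro   # must contain main.js with your cloud functions
```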
Production Recommendations:
parse-server:
image: parseplatform/parse-server:8.4.0
ports:
- "127.0.0.1:1337:1337"
environment:
- PARSE_SERVER_APPLICATION_ID=${PARSE_APP_ID}
- PARSE_SERVER_MASTER_KEY=${PARSE_MASTER_KEY}
- PARSE_SERVER_DATABASE_URI=mongodb://mongodb:27017/papr_memory
- PARSE_SERVER_URL=${PARSE_SERVER_URL}
- PARSE_SERVER_CLOUD=/parse/cloud/main.js
depends_on:
- mongodb
restart: unless-stopped
mem_limit: 1g
Parse Dashboard (Optional)
Port: 4040
Image: parseplatform/parse-dashboard:7.2.0
Web UI for managing Parse Server data.
parse-dashboard:
image: parseplatform/parse-dashboard:7.2.0
ports:
- "4040:4040"
environment:
- PARSE_DASHBOARD_APP_ID=papr-oss-app-id
- PARSE_DASHBOARD_MASTER_KEY=papr-oss-master-key
- PARSE_DASHBOARD_SERVER_URL=http://parse-server:1337/parse
- PARSE_DASHBOARD_APP_NAME=Papr Memory
profiles:
- dashboard
depends_on:
- parse-server
restart: unless-stopped
Enable with:
docker-compose --profile dashboard up -d
Complete docker-compose.yml
Here's a production-ready configuration:
version: '3.8'
services:
papr-memory:
build: .
container_name: papr-memory
ports:
- "5001:5001"
environment:
- PORT=5001
- DEBUG=${DEBUG:-false}
- ENVIRONMENT=${ENVIRONMENT:-production}
- MONGODB_URL=mongodb://mongodb:27017/papr_memory
- NEO4J_URL=bolt://neo4j:7687
- NEO4J_USER=neo4j
- NEO4J_PASSWORD=${NEO4J_PASSWORD}
- QDRANT_URL=http://qdrant:6333
- REDIS_URL=redis://redis:6379
- PARSE_SERVER_URL=http://parse-server:1337/parse
- PARSE_SERVER_APP_ID=${PARSE_APP_ID}
- PARSE_SERVER_MASTER_KEY=${PARSE_MASTER_KEY}
- OPENAI_API_KEY=${OPENAI_API_KEY}
- GROQ_API_KEY=${GROQ_API_KEY}
- DEEPINFRA_API_KEY=${DEEPINFRA_API_KEY}
depends_on:
- mongodb
- neo4j
- qdrant
- redis
- parse-server
restart: unless-stopped
networks:
- papr-network
mongodb:
image: mongo:8.0.12
container_name: papr-mongodb
ports:
- "127.0.0.1:27017:27017"
volumes:
- mongodb_data:/data/db
environment:
- MONGO_INITDB_DATABASE=papr_memory
restart: unless-stopped
networks:
- papr-network
neo4j:
image: neo4j:2025.10.1
container_name: papr-neo4j
ports:
- "127.0.0.1:7474:7474"
- "127.0.0.1:7687:7687"
volumes:
- neo4j_data:/data
- neo4j_logs:/logs
environment:
- NEO4J_AUTH=neo4j/${NEO4J_PASSWORD}
- NEO4J_server_memory_pagecache_size=2G
- NEO4J_server_memory_heap_max__size=2G
restart: unless-stopped
networks:
- papr-network
qdrant:
image: qdrant/qdrant:v1.16.0
container_name: papr-qdrant
ports:
- "127.0.0.1:6333:6333"
volumes:
- qdrant_data:/qdrant/storage
restart: unless-stopped
networks:
- papr-network
redis:
image: redis:7-alpine
container_name: papr-redis
ports:
- "127.0.0.1:6379:6379"
volumes:
- redis_data:/data
command: redis-server --appendonly yes
restart: unless-stopped
networks:
- papr-network
parse-server:
image: parseplatform/parse-server:8.4.0
container_name: papr-parse
ports:
- "127.0.0.1:1337:1337"
environment:
- PARSE_SERVER_APPLICATION_ID=${PARSE_APP_ID}
- PARSE_SERVER_MASTER_KEY=${PARSE_MASTER_KEY}
- PARSE_SERVER_DATABASE_URI=mongodb://mongodb:27017/papr_memory
- PARSE_SERVER_URL=http://parse-server:1337/parse
depends_on:
- mongodb
restart: unless-stopped
networks:
- papr-network
parse-dashboard:
image: parseplatform/parse-dashboard:7.2.0
container_name: papr-parse-dashboard
ports:
- "4040:4040"
environment:
- PARSE_DASHBOARD_APP_ID=${PARSE_APP_ID}
- PARSE_DASHBOARD_MASTER_KEY=${PARSE_MASTER_KEY}
- PARSE_DASHBOARD_SERVER_URL=http://parse-server:1337/parse
- PARSE_DASHBOARD_APP_NAME=Papr Memory
profiles:
- dashboard
depends_on:
- parse-server
restart: unless-stopped
networks:
- papr-network
volumes:
mongodb_data:
neo4j_data:
neo4j_logs:
qdrant_data:
redis_data:
networks:
papr-network:
driver: bridge
Volume Management
Data Persistence
All data is persisted in named Docker volumes:
# List volumes
docker volume ls
# Inspect a volume
docker volume inspect memory-opensource_mongodb_data
# Backup a volume
docker run --rm \
-v memory-opensource_mongodb_data:/data \
-v $(pwd)/backups:/backup \
alpine tar czf /backup/mongodb-$(date +%Y%m%d).tar.gz /data
# Restore a volume
docker run --rm \
-v memory-opensource_mongodb_data:/data \
-v $(pwd)/backups:/backup \
alpine tar xzf /backup/mongodb-20240101.tar.gz -C /
Backup Strategy
#!/bin/bash
# backup.sh - Backup all Papr Memory data
BACKUP_DIR="./backups/$(date +%Y%m%d-%H%M%S)"
mkdir -p $BACKUP_DIR
# Backup MongoDB
docker exec papr-mongodb mongodump --out /tmp/backup
docker cp papr-mongodb:/tmp/backup $BACKUP_DIR/mongodb
# Backup Neo4j (note: `neo4j-admin database dump` requires the database to be
# offline; stop the container first, or use online backup on Enterprise)
docker exec papr-neo4j neo4j-admin database dump neo4j \
  --to-path=/tmp/backup
docker cp papr-neo4j:/tmp/backup $BACKUP_DIR/neo4j
# Backup Qdrant (via snapshot)
curl -X POST http://localhost:6333/snapshots
# Download snapshot via API
# Backup Redis
docker exec papr-redis redis-cli SAVE
mkdir -p $BACKUP_DIR/redis
docker cp papr-redis:/data/dump.rdb $BACKUP_DIR/redis/
echo "Backup completed: $BACKUP_DIR"
Networking
Internal Network
Services communicate via Docker network:
networks:
papr-network:
driver: bridge
ipam:
config:
- subnet: 172.28.0.0/16
Port Bindings
For security, bind to localhost in production:
ports:
- "127.0.0.1:5001:5001" # Only accessible locally
Then use nginx/Caddy as a reverse proxy:
# nginx.conf
server {
listen 80;
server_name memory.yourdomain.com;
location / {
proxy_pass http://localhost:5001;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
Resource Limits
Memory Limits
services:
papr-memory:
mem_limit: 2g
mem_reservation: 1g
mongodb:
mem_limit: 2g
neo4j:
mem_limit: 4g
qdrant:
mem_limit: 3g
redis:
mem_limit: 1g
CPU Limits
services:
papr-memory:
cpus: 2
mongodb:
cpus: 2
neo4j:
cpus: 4
qdrant:
cpus: 4
Health Checks
Add health checks for automatic restarts:
papr-memory:
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5001/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
mongodb:
healthcheck:
test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
interval: 30s
timeout: 10s
retries: 3
neo4j:
healthcheck:
test: ["CMD-SHELL", "cypher-shell -u neo4j -p password 'RETURN 1'"] # match your NEO4J_AUTH credentials
interval: 30s
timeout: 10s
retries: 3
Common Commands
# Start all services
docker-compose up -d
# Start with dashboard
docker-compose --profile dashboard up -d
# View logs
docker-compose logs -f papr-memory
# Restart a service
docker-compose restart papr-memory
# Stop all services
docker-compose down
# Stop and remove volumes (⚠️ deletes data)
docker-compose down -v
# Rebuild after code changes
docker-compose build --no-cache papr-memory
docker-compose up -d papr-memory
# Check service status
docker-compose ps
# Execute command in container
docker-compose exec papr-memory python manage.py
Troubleshooting
Check Container Logs
# All services
docker-compose logs
# Specific service
docker-compose logs papr-memory
# Follow logs
docker-compose logs -f --tail=100
Check Service Health
# Check all services
docker-compose ps
# Test individual services
curl http://localhost:5001/health
curl http://localhost:6333/healthz
docker exec papr-mongodb mongosh --eval "db.adminCommand('ping')"
docker exec papr-neo4j cypher-shell "RETURN 1"
docker exec papr-redis redis-cli ping
Network Issues
# Inspect network
docker network inspect memory-opensource_papr-network
# Test connectivity between containers
docker-compose exec papr-memory ping mongodb
docker-compose exec papr-memory nc -zv neo4j 7687