Self-Hosted Deployment Guide

Deploy Papr Memory on your own infrastructure with full control over your data and environment. Run the same Predictive Memory Graph that ranks #1 on Stanford's STaRK benchmark (91%+ accuracy) on your own servers.

GitHub Repository: https://github.com/Papr-ai/memory-opensource
License: AGPL-3.0 - Open source, transparent by design

Overview

The self-hosted version includes the complete Predictive Memory Graph:

  • MongoDB - Metadata and document storage
  • Qdrant - Vector database for semantic search
  • Neo4j - Knowledge graph for connected context
  • Redis - Caching layer for predictive performance
  • Parse Server - User authentication with built-in ACLs
  • FastAPI - Main API server

Same API, Full Control: Identical API to Papr Cloud. Write once, run anywhere.

Prerequisites

Before you begin, ensure you have:

  • Docker (v20.10+) and Docker Compose (v2.0+)
  • 4GB+ RAM recommended for all services
  • 10GB+ disk space for databases
  • Git for cloning the repository
  • API Keys for AI services:
    • OpenAI API key
    • Groq API key
    • DeepInfra API key
    • Note: Hugging Face is also supported, and local Qwen on-device support will be added soon

1. Clone the Repository

git clone https://github.com/Papr-ai/memory-opensource.git
cd memory-opensource

2. Configure Environment

Copy the example environment file:

cp .env.example .env.opensource

Edit .env.opensource with your API keys:

# Required: AI Service API Keys
OPENAI_API_KEY=sk-your-openai-key-here
GROQ_API_KEY=gsk_your-groq-key-here
DEEPINFRA_API_KEY=your-deepinfra-key-here

# Optional: OpenAI Organization (if applicable)
OPENAI_ORGANIZATION=org-your-org-id

# Server Configuration (defaults are fine for local development)
PORT=5001
DEBUG=true
ENVIRONMENT=development

3. Start All Services

# Start with auto-initialization (recommended for first run)
docker-compose up -d

# Or with Parse Dashboard (optional UI for user management)
docker-compose --profile dashboard up -d

This command will:

  • ✅ Start MongoDB, Neo4j, Qdrant, Redis, and Parse Server
  • ✅ Initialize database schemas automatically
  • ✅ Create a default user
  • ✅ Generate an API key

4. Get Your API Key

Check the container logs to find your auto-generated API key:

docker-compose logs papr-memory | grep "API Key"

You should see output like:

papr-memory | ✅ Default API Key: mem_xxxxxxxxxxxxxxxx

5. Verify Installation

Test the health endpoint:

curl http://localhost:5001/health

You should see:

{
  "status": "healthy",
  "version": "1.0.0",
  "services": {
    "mongodb": "connected",
    "neo4j": "connected",
    "qdrant": "connected",
    "redis": "connected"
  }
}
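
If you script deployments, you can gate on this response by parsing it and requiring every service to report connected. A minimal sketch (the field names mirror the payload above):

```python
import json

def all_services_connected(health_body: str) -> bool:
    """True only if status is healthy and every backing service reports 'connected'."""
    payload = json.loads(health_body)
    services = payload.get("services", {})
    return (
        payload.get("status") == "healthy"
        and bool(services)
        and all(state == "connected" for state in services.values())
    )
```

In a deploy script you would feed this the body of `curl http://localhost:5001/health` and abort if it returns False.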

6. Access the API

Your Papr Memory API is now running!

  • API Base URL: http://localhost:5001
  • API Documentation: http://localhost:5001/docs
  • ReDoc: http://localhost:5001/redoc
  • Parse Dashboard (if enabled): http://localhost:4040

Test Your Installation

Try adding and searching for a memory:

# Set your API key
export PAPR_API_KEY="mem_your_generated_key"

# Add a memory
curl -X POST http://localhost:5001/v1/memory \
  -H "X-API-Key: $PAPR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "content": "Test memory from self-hosted deployment",
    "type": "text",
    "metadata": {
      "external_user_id": "test_user_1",
      "tags": ["test", "deployment"]
    }
  }'

# Search for memories
curl -X POST http://localhost:5001/v1/memory/search \
  -H "X-API-Key: $PAPR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "test deployment",
    "external_user_id": "test_user_1",
    "max_memories": 10
  }'

Manual Setup (Without Docker)

If you prefer to run services individually or need more control:

1. Install Python Dependencies

python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt

2. Start Individual Services

MongoDB

docker run -d -p 27017:27017 --name mongo mongo:8.0.12

Neo4j

docker run -d \
  -p 7474:7474 -p 7687:7687 \
  -e NEO4J_AUTH=neo4j/password \
  --name neo4j neo4j:2025.10.1

Qdrant

docker run -d -p 6333:6333 --name qdrant qdrant/qdrant:v1.16.0

Redis

docker run -d -p 6379:6379 --name redis redis:7-alpine

Parse Server

docker run -d -p 1337:1337 \
  -e PARSE_SERVER_APPLICATION_ID=papr-oss-app-id \
  -e PARSE_SERVER_MASTER_KEY=papr-oss-master-key \
  -e PARSE_SERVER_DATABASE_URI=mongodb://localhost:27017/papr_memory \
  --name parse parseplatform/parse-server:8.4.0

3. Configure Environment

Create .env file:

# Database URLs
MONGODB_URL=mongodb://localhost:27017/papr_memory
NEO4J_URL=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=password
QDRANT_URL=http://localhost:6333
REDIS_URL=redis://localhost:6379

# Parse Server
PARSE_SERVER_URL=http://localhost:1337
PARSE_SERVER_APP_ID=papr-oss-app-id
PARSE_SERVER_MASTER_KEY=papr-oss-master-key

# AI Services
OPENAI_API_KEY=your-openai-key
GROQ_API_KEY=your-groq-key
DEEPINFRA_API_KEY=your-deepinfra-key

# Server Config
PORT=5001
DEBUG=true
ENVIRONMENT=development

4. Run the Application

python main.py

Using SDKs with Self-Hosted

The same SDKs work with self-hosted deployments - just point to your server:

Python SDK

import os
from papr_memory import Papr

# Initialize with your self-hosted URL
client = Papr(
    x_api_key=os.environ.get("PAPR_API_KEY"),
    base_url="http://localhost:5001"  # Your self-hosted URL
)

# Use exactly the same as cloud
memory = client.memory.add(
    content="Self-hosted memory",
    type="text",
    metadata={"external_user_id": "user123"}
)

TypeScript SDK

import Papr from '@papr/memory';

const client = new Papr({
  xAPIKey: process.env.PAPR_API_KEY,
  environment: 'http://localhost:5001'  // Your self-hosted URL
});

// Same API as cloud
const memory = await client.memory.add({
  content: "Self-hosted memory",
  type: "text",
  metadata: { external_user_id: "user123" }
});

Production Deployment

For production deployments, consider these best practices:

Security

  • Use HTTPS: Set up SSL/TLS certificates (Let's Encrypt, nginx reverse proxy)
  • Secure API Keys: Use environment variables, never commit keys
  • Network Security: Use firewalls and VPC/security groups
  • Database Authentication: Enable authentication on all databases
  • Update NEO4J_AUTH: Change from default neo4j/password
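
As one way to terminate TLS in front of the API, here is a minimal nginx reverse-proxy sketch. The server name and certificate paths are placeholders you would replace with your own domain and Let's Encrypt paths:

```nginx
server {
    listen 443 ssl;
    server_name memory.example.com;  # placeholder domain

    # Placeholder certificate paths (e.g. from certbot)
    ssl_certificate     /etc/letsencrypt/live/memory.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/memory.example.com/privkey.pem;

    location / {
        # Forward to the FastAPI server on its default port
        proxy_pass http://127.0.0.1:5001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```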

Performance

  • Resource Allocation: Allocate sufficient CPU/RAM for each service
  • Redis Caching: Configure Redis with appropriate memory limits
  • Database Indexing: Ensure proper indexes on MongoDB and Neo4j
  • Connection Pooling: Configure connection pools for databases
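
For the Redis caching point above, a hedged redis.conf fragment that caps memory and evicts least-recently-used keys (the limit is illustrative; size it to your workload):

```
maxmemory 1gb
maxmemory-policy allkeys-lru
```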

Monitoring

  • Health Checks: Monitor /health endpoint
  • Logs: Collect logs from all services
  • Metrics: Set up Prometheus/Grafana for monitoring
  • Alerts: Configure alerts for service failures
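
One way to automate the /health check is a Docker Compose healthcheck on the API container. A sketch (the papr-memory service name matches the logs command used earlier; intervals are illustrative):

```yaml
services:
  papr-memory:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5001/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```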

Backups

# Backup MongoDB
docker exec mongo mongodump --out /backup

# Backup Neo4j (Community Edition: dump requires the database to be stopped;
# online backup via neo4j-admin database backup is Enterprise-only)
docker exec neo4j neo4j-admin database dump neo4j --to-path=/backups

# Backup Qdrant
# Qdrant snapshots can be created via API
curl -X POST http://localhost:6333/snapshots

Docker Compose Configuration

For detailed Docker Compose configuration, see the Docker Configuration Guide.

Environment Variables Reference

For a complete list of environment variables, see the Configuration Reference.

Upgrading

To upgrade your self-hosted installation:

# Pull latest changes
git pull origin main

# Rebuild containers
docker-compose down
docker-compose build
docker-compose up -d

# Check logs for any migration steps
docker-compose logs -f papr-memory

Troubleshooting

Services Won't Start

Check service status:

docker-compose ps
docker-compose logs [service-name]

Connection Errors

Verify all services are running and accessible:

# Test MongoDB
docker exec mongo mongosh --eval "db.adminCommand('ping')"

# Test Neo4j (credentials match NEO4J_AUTH from setup)
docker exec neo4j cypher-shell -u neo4j -p password "RETURN 1"

# Test Qdrant
curl http://localhost:6333/healthz

# Test Redis
docker exec redis redis-cli ping

Memory Issues

If you encounter OOM errors, increase the memory available to Docker and set per-service limits:

# Edit docker-compose.yml and add memory limits
services:
  mongodb:
    mem_limit: 2g
  neo4j:
    mem_limit: 2g

API Key Not Working

Generate a new API key manually:

docker-compose exec papr-memory python -c "
from scripts.create_api_key import create_api_key
api_key = create_api_key('your-user-id')
print(f'API Key: {api_key}')
"

Open Source Community

Papr Memory is open source (AGPL-3.0) and community-driven.

Why Open Source?

We believe in transparency and community ownership:

  • Audit the code - See exactly how your data is processed
  • Extend and modify - Adapt to your specific needs
  • No vendor lock-in - You own your infrastructure and data
  • Community-driven - Shape the product roadmap
  • Transparent development - All changes are public

Contributing

Want to improve Papr Memory? Contributions are welcome!

  • Bug Fixes: Submit PRs for any issues you find
  • New Features: Propose features in GitHub Discussions first
  • Documentation: Help improve these docs
  • Testing: Report compatibility issues with different setups
  • Examples: Share your integration patterns

See our Contributing Guide for:

  • Development setup instructions
  • Code style guidelines
  • Pull request process
  • Community code of conduct

License & Commercial Use

Papr Memory open source is licensed under AGPL-3.0.

What This Means

You CAN:

  • Use for personal projects
  • Use for internal company tools
  • Modify the source code
  • Deploy on your infrastructure
  • Use in production environments

⚠️ You MUST:

  • Keep the AGPL-3.0 license in any copies
  • Disclose source code if distributing modified versions
  • Share modifications if used as a network service (SaaS)
  • Provide source code to users of your network service

Restrictions:

  • Cannot use in proprietary software without disclosure
  • Cannot offer as SaaS without making source available
  • Must maintain copyright notices

Commercial SaaS Licensing

If you want to offer Papr Memory as a hosted service to customers without source code disclosure requirements:

Contact us about commercial licensing to schedule a call.

We offer flexible commercial licenses for:

  • White-label SaaS offerings
  • Embedded applications
  • OEM partnerships
  • Enterprise deployments

Understanding AGPL-3.0

The AGPL-3.0 is designed to ensure software freedom while preventing "SaaS loopholes":

  • Internal use is always free: Use it within your organization without restrictions
  • Network services require disclosure: If users interact with it over a network, you must provide source
  • Modifications must be shared: Changes must be made available under the same license

Questions about licensing? Email legal@papr.ai

Migrating to Papr Cloud

As your needs grow, you can migrate to Papr Cloud for a fully managed experience. Contact our team for migration assistance.
