Model Context Protocol (MCP)

A FastAPI-based MCP (Model Context Protocol) server implementation for integrating with Papr's memory services (https://papr.ai).

Download

Get the MCP server implementation from our GitHub repository: github.com/Papr-ai/papr_mcpserver

Prerequisites

  • Python 3.10 or higher
  • Papr API key: Create a developer account at papr.ai, then copy your API key from the Settings section.

Quick Setup

Run the universal setup script and follow the prompts:

python setup_mcp.py

What this does:

  • Lets you choose your application (Cursor, Claude, or Other)
  • Uses the PyPI package papr-memory-mcp with uv for automatic installation
  • Prompts for your Papr API key
  • Creates/updates the MCP configuration in the correct location
  • For "Other", prints the JSON you can copy into your client's config

After setup, restart your selected application.

Note: This method uses the published PyPI package papr-memory-mcp which is automatically installed and managed by uv. No local code changes needed.
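
For reference, the generated client configuration looks roughly like the snippet below. This is an illustrative sketch: the server name ("papr-memory") and the uvx launcher are assumptions, and the exact fields vary by client. Run python setup_mcp.py and pick "Other" to print the exact JSON for your setup.

```json
{
  "mcpServers": {
    "papr-memory": {
      "command": "uvx",
      "args": ["papr-memory-mcp"],
      "env": { "PAPR_API_KEY": "your_api_key_here" }
    }
  }
}
```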

Alternative: Run Locally (Developer Mode)

If you prefer to run the MCP server locally from source:

  1. Clone and enter the repo
git clone https://github.com/Papr-ai/papr_mcpserver
cd papr_mcpserver/python-mcp
  2. Create and activate a virtual environment (recommended)
uv venv
source .venv/bin/activate    # macOS/Linux
# or on Windows PowerShell
.venv\Scripts\Activate.ps1
  3. Install dependencies
uv pip install -e ".[dev]"
  4. Set your API key
export PAPR_API_KEY=your_api_key_here        # macOS/Linux
setx PAPR_API_KEY your_api_key_here          # Windows (persists for new shells)
$env:PAPR_API_KEY="your_api_key_here"        # Windows PowerShell (current shell)
  5. Start the server
uv run python paprmcp.py
# or for interactive development
fastmcp dev paprmcp.py
  6. Point your client to the server
    • Use the JSON from the README (or run python setup_mcp.py and pick "Other") to configure your client's mcp.json.
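
The API-key step above can be sanity-checked at startup with a small stdlib snippet. This is a hypothetical helper, not part of the repo; it only assumes the PAPR_API_KEY variable name used in step 4.

```python
import os

# Minimal sketch: fail fast if PAPR_API_KEY (set in step 4) is missing,
# instead of letting the first request to the memory server fail later.
def require_api_key() -> str:
    key = os.environ.get("PAPR_API_KEY", "").strip()
    if not key:
        raise RuntimeError(
            "PAPR_API_KEY is not set; export it before starting the server"
        )
    return key
```

Calling require_api_key() early in your own startup code turns a missing key into a clear error message.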

Start Server Directly

If you chose not to start the server during setup, you can start it manually:

On macOS/Linux:

# Using uv directly
source .venv/bin/activate
uv run python paprmcp.py

# For debugging run and use mcp inspector as client
source .venv/bin/activate
fastmcp dev paprmcp.py

On Windows (Command Prompt):

# Using uv directly
.venv\Scripts\activate
uv run python paprmcp.py

# For debugging run and use mcp inspector as client
.venv\Scripts\activate
fastmcp dev paprmcp.py

On Windows PowerShell:

# Using uv directly
.venv\Scripts\Activate.ps1
uv run python paprmcp.py

# For debugging run and use mcp inspector as client
.venv\Scripts\Activate.ps1
fastmcp dev paprmcp.py

Note: Starting the server via the setup script (python setup_mcp.py --run-server) is recommended, as it ensures the correct virtual environment is used and the proper configuration is loaded.

Created Configuration

The setup script creates two main configuration files:

.env file in the project root:

  • Contains your Papr API key
  • Sets the memory server URL (default is memory.papr.ai)
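
A minimal .env might look like the fragment below. The exact variable names are an assumption; check the .env the setup script generated for the authoritative keys.

```
PAPR_API_KEY=your_api_key_here
# Assumption: the URL variable name may differ in the generated file
MEMORY_SERVER_URL=https://memory.papr.ai
```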

MCP configuration file (location depends on your OS and chosen client):

macOS:

  • Claude: ~/Library/Application Support/claude/claude_desktop_config.json
  • Cursor: ~/.cursor/mcp.json

Windows:

  • Claude: %APPDATA%/claude/claude_desktop_config.json
  • Cursor: %USERPROFILE%\.cursor\mcp.json

Linux:

  • Claude: ~/.config/claude/claude_desktop_config.json
  • Cursor: ~/.cursor/mcp.json

Development

The project uses pyproject.toml for dependency management with the following extras:

  • dev: Development tools (debugpy, Flask, etc.)
  • test: Testing tools (pytest, coverage, etc.)
  • all: All of the above

To install specific extras:

uv pip install ".[dev]"  # Development dependencies
uv pip install ".[test]"  # Testing dependencies
uv pip install ".[all]"  # All dependencies

Debugging with VS Code

  1. Install debugpy:
uv pip install ".[dev]" 
  2. For MCP Inspector (optional): Install Node.js to get npx:
# On Windows (using winget)
winget install OpenJS.NodeJS
# After installation, refresh PATH in PowerShell:
$env:PATH = [System.Environment]::GetEnvironmentVariable("PATH","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("PATH","User")

# Also ensure uv is properly installed and in PATH:
$env:PATH = "C:\Users\$env:USERNAME\.local\bin;$env:PATH"

# On macOS (using Homebrew)
brew install node

# On Linux (using package manager)
# Ubuntu/Debian:
sudo apt update && sudo apt install nodejs npm
# Or using NodeSource repository for latest version:
curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
sudo apt-get install -y nodejs
  3. Start the server and the MCP Inspector in debug mode:

On macOS/Linux:

source .venv/bin/activate
python -m debugpy --wait-for-client --listen 5678 .venv/bin/fastmcp dev paprmcp.py

On Windows (Command Prompt):

.venv\Scripts\activate
python -m debugpy --wait-for-client --listen 5678 .venv\Scripts\fastmcp.exe dev paprmcp.py

On Windows PowerShell:

.venv\Scripts\Activate.ps1
python -m debugpy --wait-for-client --listen 5678 .venv\Scripts\fastmcp.exe dev paprmcp.py
  4. In VS Code:
    • Go to Run and Debug view (Ctrl+Shift+D or Cmd+Shift+D)
    • Select "Python: Attach to FastMCP"
    • Click the green play button or press F5
    • Set breakpoints in your code by clicking in the left margin
    • The debugger will stop at breakpoints when the code is executed
  5. Using MCP Inspector (alternative to VS Code debugging):
    • After starting the server with fastmcp dev paprmcp.py, you can use the MCP inspector
    • The inspector will automatically connect to your running MCP server
    • This provides a web-based interface to test and interact with your MCP tools
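
If your workspace doesn't already ship the "Python: Attach to FastMCP" configuration referenced above, a minimal .vscode/launch.json for it might look like this. The port matches the --listen 5678 flag passed to debugpy; "type": "debugpy" assumes a recent VS Code Python extension (older versions use "type": "python").

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Python: Attach to FastMCP",
      "type": "debugpy",
      "request": "attach",
      "connect": { "host": "localhost", "port": 5678 }
    }
  ]
}
```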

Troubleshooting

If you encounter any issues:

  1. Python command not found:
    • If python3 is not found, try using python instead
    • Check your Python installation: python --version or python3 --version
    • On Windows, Python 3 is often installed as python rather than python3
  2. Windows-specific issues:
    • PowerShell execution policy: If you get execution policy errors, run: Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
    • Virtual environment activation: Use .venv\Scripts\activate in Command Prompt or .venv\Scripts\Activate.ps1 in PowerShell
    • Path issues: Ensure uv is in your PATH. It's typically installed to %USERPROFILE%\.cargo\bin or %LOCALAPPDATA%\uv\bin
  3. General issues:
    • Check the logs for detailed error messages
    • Ensure your Papr API key is correctly set in the .env file
    • Verify the virtual environment is activated
    • Make sure all dependencies are installed correctly
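
The checks above can be automated with a short stdlib script. This is illustrative only (not part of the repo) and mirrors the troubleshooting list: Python version, uv on PATH, and the API key.

```python
import os
import shutil
import sys

# Illustrative sanity checks mirroring the troubleshooting list above.
checks = {
    "Python >= 3.10": sys.version_info >= (3, 10),
    "uv on PATH": shutil.which("uv") is not None,
    "PAPR_API_KEY set": bool(os.environ.get("PAPR_API_KEY")),
}
for name, ok in checks.items():
    print(f"{name}: {'OK' if ok else 'MISSING'}")
```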

For additional help, please contact support or open an issue in the repository.

Integrating with LangChain

You can use the Papr MCP server with LangChain to provide advanced memory capabilities for your agents:

Installation

pip install langchain langchain-openai langgraph aiohttp

Basic Integration

import os
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

# Set up the MCP client for Papr Memory
class PaprMCPClient:
    def __init__(self, server_url="http://localhost:8000"):
        self.server_url = server_url
    
    async def get_memory(self, query: str, max_memory_items=20):
        import aiohttp
        
        url = f"{self.server_url}/mcp_Papr_MCP_Server_get_memory"
        payload = {
            "query": query,
            "max_memory_items": max_memory_items
        }
        
        async with aiohttp.ClientSession() as session:
            async with session.post(url, json=payload) as response:
                return await response.json()
    
    async def add_memory(self, content: str, memory_type="text", metadata=None):
        import aiohttp
        
        url = f"{self.server_url}/mcp_Papr_MCP_Server_add_memory"
        payload = {
            "content": content,
            "type": memory_type
        }
        if metadata:
            payload["metadata"] = metadata
        
        async with aiohttp.ClientSession() as session:
            async with session.post(url, json=payload) as response:
                return await response.json()

# Initialize the client
papr_client = PaprMCPClient()

@tool
async def search_memories(query: str):
    """Search for memories in Papr using MCP server."""
    return await papr_client.get_memory(query=query)

@tool
async def add_memory(content: str, metadata: dict | None = None):
    """Add a new memory to Papr using MCP server."""
    return await papr_client.add_memory(content=content, metadata=metadata)

# Set up the LangChain agent
tools = [search_memories, add_memory]
llm = ChatOpenAI(model="gpt-4o", temperature=0).bind_tools(tools)
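
For reference, here is a stdlib-only sketch of the request that PaprMCPClient.get_memory builds (no server needed; the endpoint path and default URL are copied from the class above, the helper name is hypothetical):

```python
import json

SERVER_URL = "http://localhost:8000"  # default used by PaprMCPClient above

def build_get_memory_request(query: str, max_memory_items: int = 20):
    """Mirror the URL and JSON body that get_memory() POSTs."""
    url = f"{SERVER_URL}/mcp_Papr_MCP_Server_get_memory"
    payload = {"query": query, "max_memory_items": max_memory_items}
    return url, json.dumps(payload)

url, body = build_get_memory_request("meeting notes")
print(url)   # http://localhost:8000/mcp_Papr_MCP_Server_get_memory
print(body)  # {"query": "meeting notes", "max_memory_items": 20}
```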

Advanced Integration with LangGraph

For more complex agents using LangGraph, you can integrate Papr Memory like this:

from langchain_core.messages import AIMessage, HumanMessage
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import ToolNode

# LLM node: call the tool-bound model with the accumulated messages
def call_model(state: MessagesState):
    return {"messages": [llm.invoke(state["messages"])]}

# Route to the tool node only when the model requested a tool call
def route_tools(state: MessagesState):
    last_message = state["messages"][-1]
    return "tools" if getattr(last_message, "tool_calls", None) else END

# Set up the graph nodes
workflow = StateGraph(MessagesState)
workflow.add_node("llm", call_model)
workflow.add_node("tools", ToolNode(tools))

# Add the edges (basic agent workflow)
workflow.add_edge(START, "llm")
workflow.add_conditional_edges("llm", route_tools)
workflow.add_edge("tools", "llm")

# Compile the graph
agent = workflow.compile()

# Run the agent with memory context
async def run_with_memory(query):
    # First search for relevant memories (tools are called via .ainvoke)
    memories = await search_memories.ainvoke({"query": query})
    
    # Build initial messages with memory context
    messages = [
        HumanMessage(content=f"Context from memory: {memories}\n\nUser query: {query}")
    ]
    
    # Execute the agent
    result = await agent.ainvoke({"messages": messages})
    
    # Store the conversation in memory
    await add_memory.ainvoke({
        "content": f"Query: {query}\nResponse: {result['messages'][-1].content}",
        "metadata": {"conversation_type": "agent_interaction"},
    })
    
    return result["messages"][-1].content

This integration allows your LangChain agents to seamlessly use Papr Memory for both retrieving relevant context and storing new information.