# Model Context Protocol (MCP)

A FastAPI-based MCP (Model Context Protocol) server implementation for integrating with Papr's memory services (https://papr.ai).
## Download

Get the MCP server implementation from our GitHub repository: github.com/Papr-ai/papr_mcpserver
## Prerequisites

- Python 3.10 or higher
- A Papr API key: create a developer account on papr.ai, then find the key in the settings section
## Quick Start (Recommended)

Run the universal setup script and follow the prompts:

```bash
python setup_mcp.py
```

What this does:

- Lets you choose your application (Cursor, Claude, or Other)
- Uses the PyPI package `papr-memory-mcp` with `uv` for automatic installation
- Prompts for your Papr API key
- Creates/updates the MCP configuration in the correct location
- For "Other", prints the JSON you can copy into your client's config

After setup, restart your selected application.

Note: This method uses the published PyPI package `papr-memory-mcp`, which is automatically installed and managed by `uv`. No local code changes needed.
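For orientation, the JSON printed for "Other" is a standard MCP client entry. The sketch below is illustrative only: the server name, `uvx` command, and exact keys are assumptions about a typical `uv`-managed MCP entry, not the script's literal output.

```python
import json

# Hypothetical shape of an MCP client config entry for the PyPI package;
# your client's expected keys may differ from this sketch.
entry = {
    "mcpServers": {
        "papr-memory": {
            "command": "uvx",                      # uv's tool runner
            "args": ["papr-memory-mcp"],           # the published PyPI package
            "env": {"PAPR_API_KEY": "your_api_key_here"},
        }
    }
}
print(json.dumps(entry, indent=2))
```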
## Alternative: Run Locally (Developer Mode)

If you prefer to run the MCP server locally from source:

- Clone and enter the repo:

```bash
git clone https://github.com/Papr-ai/papr_mcpserver
cd papr_mcpserver/python-mcp
```

- Create and activate a virtual environment (recommended):

```bash
uv venv
source .venv/bin/activate    # macOS/Linux
# or on Windows PowerShell
.venv\Scripts\Activate.ps1
```

- Install dependencies:

```bash
uv pip install -e ".[dev]"
```

- Set your API key:

```bash
export PAPR_API_KEY=your_api_key_here           # macOS/Linux
setx PAPR_API_KEY your_api_key_here             # Windows (new shells)
$env:PAPR_API_KEY="your_api_key_here"           # Windows PowerShell (current shell)
```

- Start the server:

```bash
uv run python paprmcp.py
# or for interactive development
fastmcp dev paprmcp.py
```

- Point your client to the server: use the JSON from the README (or run `python setup_mcp.py` and pick "Other") to configure your client's `mcp.json`.
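Before launching, a quick pre-flight check can catch a missing key. This assumes the server reads `PAPR_API_KEY` from the environment, as set in the step above:

```python
import os

def check_api_key(env=None) -> bool:
    """Return True if a non-empty PAPR_API_KEY is visible to the process."""
    env = os.environ if env is None else env
    return bool(env.get("PAPR_API_KEY", "").strip())

# Example: warn before launching the server
if not check_api_key():
    print("PAPR_API_KEY is not set; export it before starting the server")
```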
## Start Server Directly

If you chose not to start the server during setup, you can start it manually.

On macOS/Linux:

```bash
# Using uv directly
source .venv/bin/activate
uv run python paprmcp.py

# For debugging, run with the MCP Inspector as the client
source .venv/bin/activate
fastmcp dev paprmcp.py
```

On Windows (Command Prompt):

```bash
# Using uv directly
.venv\Scripts\activate
uv run python paprmcp.py

# For debugging, run with the MCP Inspector as the client
.venv\Scripts\activate
fastmcp dev paprmcp.py
```

On Windows PowerShell:

```powershell
# Using uv directly
.venv\Scripts\Activate.ps1
uv run python paprmcp.py

# For debugging, run with the MCP Inspector as the client
.venv\Scripts\Activate.ps1
fastmcp dev paprmcp.py
```

Note: Using the setup script with `--run-server` is recommended, as it ensures the correct virtual environment is used and the proper configuration is loaded.
## Created Configuration

The setup script creates two main configuration files:

1. `.env` file in the project root:
   - Contains your Papr API key
   - Sets the memory server URL (default is memory.papr.ai)

2. MCP configuration file (location depends on your OS and chosen client):
   - macOS:
     - Claude: `~/Library/Application Support/claude/claude_desktop_config.json`
     - Cursor: `./cursor/mcp.json`
   - Windows:
     - Claude: `%APPDATA%/claude/claude_desktop_config.json`
     - Cursor: `./cursor/mcp.json`
   - Linux:
     - Claude: `~/.config/claude/claude_desktop_config.json`
     - Cursor: `./cursor/mcp.json`
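The Claude paths above can be resolved programmatically. A small, hypothetical helper (the function name is ours, not part of the setup script) mirroring that table:

```python
import os
import sys
from pathlib import Path

def claude_config_path(platform: str = sys.platform) -> Path:
    """Return the Claude desktop MCP config location for a given platform."""
    if platform == "darwin":  # macOS
        return Path.home() / "Library/Application Support/claude/claude_desktop_config.json"
    if platform.startswith("win"):  # Windows: resolve %APPDATA%
        return Path(os.environ.get("APPDATA", "")) / "claude/claude_desktop_config.json"
    # Linux and other Unix-likes
    return Path.home() / ".config/claude/claude_desktop_config.json"
```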
## Development

The project uses `pyproject.toml` for dependency management, with the following extras:

- `dev`: Development tools (debugpy, Flask, etc.)
- `test`: Testing tools (pytest, coverage, etc.)
- `all`: All of the above

To install specific extras:

```bash
uv pip install ".[dev]"     # Development dependencies
uv pip install ".[test]"    # Testing dependencies
uv pip install ".[all]"     # All dependencies
```

## Debugging with VS Code
- Install debugpy:

```bash
uv pip install ".[dev]"
```

- For MCP Inspector (optional): install Node.js to get `npx`:

```bash
# On Windows (using winget)
winget install OpenJS.NodeJS
# After installation, refresh PATH in PowerShell:
$env:PATH = [System.Environment]::GetEnvironmentVariable("PATH","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("PATH","User")
# Also ensure uv is properly installed and in PATH:
$env:PATH = "C:\Users\$env:USERNAME\.local\bin;$env:PATH"

# On macOS (using Homebrew)
brew install node

# On Linux (using package manager)
# Ubuntu/Debian:
sudo apt update && sudo apt install nodejs npm
# Or using the NodeSource repository for the latest version:
curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
sudo apt-get install -y nodejs
```

- Start the server and MCP Inspector in debug mode:
On macOS/Linux:

```bash
source .venv/bin/activate
python -m debugpy --wait-for-client --listen 5678 .venv/bin/fastmcp dev paprmcp.py
```

On Windows (Command Prompt):

```bash
.venv\Scripts\activate
python -m debugpy --wait-for-client --listen 5678 .venv\Scripts\fastmcp.exe dev paprmcp.py
```

On Windows PowerShell:

```powershell
.venv\Scripts\Activate.ps1
python -m debugpy --wait-for-client --listen 5678 .venv\Scripts\fastmcp.exe dev paprmcp.py
```

- In VS Code:
  - Go to the Run and Debug view (Ctrl+Shift+D or Cmd+Shift+D)
  - Select "Python: Attach to FastMCP"
  - Click the green play button or press F5
  - Set breakpoints in your code by clicking in the left margin
  - The debugger will stop at breakpoints when the code is executed
- Using MCP Inspector (alternative to VS Code debugging):
  - After starting the server with `fastmcp dev paprmcp.py`, you can use the MCP Inspector
  - The Inspector will automatically connect to your running MCP server
  - This provides a web-based interface to test and interact with your MCP tools
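If your checkout does not already include a launch configuration named "Python: Attach to FastMCP", a minimal `.vscode/launch.json` along these lines should work with the `--listen 5678` flags above (the exact entry in the repo may differ; this is a sketch, not the project's file):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Python: Attach to FastMCP",
      "type": "debugpy",
      "request": "attach",
      "connect": { "host": "localhost", "port": 5678 }
    }
  ]
}
```

The `request: "attach"` entry connects VS Code to the already-running debugpy listener rather than launching a new process.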
## Troubleshooting

If you encounter any issues:

- Python command not found:
  - If `python3` is not found, try using `python` instead
  - Check your Python installation: `python --version` or `python3 --version`
  - On Windows, Python 3 is often installed as `python` rather than `python3`
- Windows-specific issues:
  - PowerShell execution policy: if you get execution policy errors, run `Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser`
  - Virtual environment activation: use `.venv\Scripts\activate` in Command Prompt or `.venv\Scripts\Activate.ps1` in PowerShell
  - Path issues: ensure `uv` is in your PATH. It's typically installed to `%USERPROFILE%\.cargo\bin` or `%LOCALAPPDATA%\uv\bin`
- General issues:
  - Check the logs for detailed error messages
  - Ensure your Papr API key is correctly set in the `.env` file
  - Verify the virtual environment is activated
  - Make sure all dependencies are installed correctly

For additional help, please contact support or open an issue in the repository.
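To check the `.env` file mentioned above without starting the server, a small parser is enough. This sketch assumes plain `KEY=VALUE` lines as written by the setup script; a full parser such as python-dotenv handles more edge cases:

```python
from pathlib import Path

def read_env(path: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping comments and blanks."""
    env = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env

def has_api_key(path: str) -> bool:
    """True if the .env file defines a non-empty PAPR_API_KEY."""
    return bool(read_env(path).get("PAPR_API_KEY"))
```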
## Integrating with LangChain

You can use the Papr MCP server with LangChain to provide advanced memory capabilities for your agents.

### Installation

```bash
pip install langchain langchain_openai aiohttp
```

### Basic Integration

```python
import aiohttp
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


# Set up the MCP client for Papr Memory
class PaprMCPClient:
    def __init__(self, server_url="http://localhost:8000"):
        self.server_url = server_url

    async def get_memory(self, query: str, max_memory_items=20):
        url = f"{self.server_url}/mcp_Papr_MCP_Server_get_memory"
        payload = {"query": query, "max_memory_items": max_memory_items}
        async with aiohttp.ClientSession() as session:
            async with session.post(url, json=payload) as response:
                return await response.json()

    async def add_memory(self, content: str, memory_type="text", metadata=None):
        url = f"{self.server_url}/mcp_Papr_MCP_Server_add_memory"
        payload = {"content": content, "type": memory_type}
        if metadata:
            payload["metadata"] = metadata
        async with aiohttp.ClientSession() as session:
            async with session.post(url, json=payload) as response:
                return await response.json()


# Initialize the client
papr_client = PaprMCPClient()


@tool
async def search_memories(query: str):
    """Search for memories in Papr using the MCP server."""
    return await papr_client.get_memory(query=query)


@tool
async def add_memory(content: str, metadata=None):
    """Add a new memory to Papr using the MCP server."""
    return await papr_client.add_memory(content=content, metadata=metadata)


# Set up the LangChain agent
tools = [search_memories, add_memory]
llm = ChatOpenAI(model="gpt-4o", temperature=0).bind_tools(tools)
```

### Advanced Integration with LangGraph
For more complex agents using LangGraph, you can integrate Papr Memory like this:

```python
from langchain_core.messages import HumanMessage
from langgraph.graph import END, StateGraph, MessagesState
from langgraph.prebuilt import ToolNode


def call_llm(state: MessagesState):
    """LLM node: invoke the tool-bound model on the running message list."""
    return {"messages": [llm.invoke(state["messages"])]}


def should_end(state: MessagesState) -> bool:
    """End the loop once the last AI message makes no further tool calls."""
    last = state["messages"][-1]
    return not getattr(last, "tool_calls", None)


# Set up the graph nodes
workflow = StateGraph(MessagesState)
workflow.add_node("llm", call_llm)
workflow.add_node("tools", ToolNode(tools))

# Add the edges (basic agent loop: llm -> tools -> llm until no tool calls)
workflow.set_entry_point("llm")
workflow.add_edge("tools", "llm")
workflow.add_conditional_edges(
    "llm",
    lambda state: END if should_end(state) else "tools",
)

# Compile the graph
agent = workflow.compile()


# Run the agent with memory context
async def run_with_memory(query):
    # First search for relevant memories (tools are invoked via .ainvoke)
    memories = await search_memories.ainvoke({"query": query})

    # Build initial messages with memory context
    messages = [
        HumanMessage(content=f"Context from memory: {memories}\n\nUser query: {query}")
    ]

    # Execute the agent
    result = await agent.ainvoke({"messages": messages})

    # Store the conversation in memory
    await add_memory.ainvoke({
        "content": f"Query: {query}\nResponse: {result['messages'][-1].content}",
        "metadata": {"conversation_type": "agent_interaction"},
    })

    return result["messages"][-1].content
```

This integration allows your LangChain agents to seamlessly use Papr Memory both for retrieving relevant context and for storing new information.
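The HTTP contract the `PaprMCPClient` above assumes (JSON POSTs to per-tool endpoints) can be exercised without a real Papr backend. The sketch below stands up a local stub that merely echoes the request, using only the standard library; a real server's responses will of course differ:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class StubHandler(BaseHTTPRequestHandler):
    """Echo the posted JSON back, tagged with the tool path that was hit."""
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        reply = json.dumps({"tool": self.path, "echo": body}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(reply)))
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):
        pass  # keep output quiet

def post_json(url, payload):
    req = Request(url, data=json.dumps(payload).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read())

# Serve on an ephemeral port in a background thread
server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# Same payload shape PaprMCPClient.get_memory sends
result = post_json(f"{base}/mcp_Papr_MCP_Server_get_memory",
                   {"query": "project notes", "max_memory_items": 20})
server.shutdown()
```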