Model Context Protocol (MCP)
A FastAPI-based Model Context Protocol (MCP) server for integrating with Papr's memory services (https://papr.ai). The server enables advanced context processing and memory management through a standardized protocol.
Download
Get the MCP server implementation from our GitHub repository: github.com/Papr-ai/papr_mcpserver/tree/main/python-mcp
Prerequisites
- Python 3.10 or higher
- A Papr API key: you can find it in the settings section of papr.ai. You'll need to create an account and complete the web app onboarding first.
Quick Start
- Clone the repository and change into the Python MCP server directory:

```bash
git clone https://github.com/Papr-ai/papr_mcpserver
cd papr_mcpserver/python-mcp
```
- Run the setup script:
```bash
python3 setup_run_mcp.py
```
The setup script will guide you through the following steps:
Dependencies Installation
- Installs uv if not already present
- Creates a virtual environment (default '.venv')
- Installs all required project dependencies
API Key Configuration
- Prompts for your Papr API key
- Validates the key format
- Stores it in a local .env file
MCP Client Selection
Choose your preferred client:
- Claude
- Cursor AI
- Other
Client Configuration
- For Claude: automatically configures claude_desktop_config.json
- For Cursor AI: automatically configures .cursor/mcp.json
- For other clients: prints the configuration JSON to the console

Note: If you select "Other", copy the printed JSON into your client's own MCP configuration file; a sketch of the typical shape follows below.
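For reference, an MCP client configuration for this server usually has the shape sketched below. The server name, path, and launch command here are illustrative assumptions; use the exact JSON printed by the setup script:

```json
{
  "mcpServers": {
    "papr-memory": {
      "command": "uv",
      "args": [
        "--directory", "/absolute/path/to/papr_mcpserver/python-mcp",
        "run", "python", "paprmcp.py"
      ]
    }
  }
}
```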
Setup Options
You can run the setup script with different options:
```bash
# Full setup with all prompts
python3 setup_run_mcp.py

# Skip dependency installation
python3 setup_run_mcp.py --skip-deps

# Skip setup and run the server
python3 setup_run_mcp.py --run-server
```
Start Server Directly
If you chose not to start the server during setup, you can start it manually:
```bash
# Using uv directly
source .venv/bin/activate
uv run python paprmcp.py

# For debugging, run with the MCP Inspector as the client
source .venv/bin/activate
fastmcp dev paprmcp.py
```
Note: Using the setup script with --run-server is recommended, as it ensures the correct virtual environment is used and the proper configuration is loaded.
Created Configuration
The setup script creates two main configuration files:
.env file in the project root:
- Contains your Papr API key
- Sets the memory server URL (default is memory.papr.ai)
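For reference, the generated .env typically looks like the sketch below; the variable names here are assumptions, so check the file the script actually writes:

```bash
PAPR_API_KEY=your-papr-api-key
MEMORY_SERVER_URL=https://memory.papr.ai
```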
MCP configuration file (location depends on your OS and chosen client):

macOS:
- Claude: ~/Library/Application Support/Claude/claude_desktop_config.json
- Cursor: .cursor/mcp.json (in the project directory)

Windows:
- Claude: %APPDATA%/Claude/claude_desktop_config.json
- Cursor: .cursor/mcp.json (in the project directory)

Linux:
- Claude: ~/.config/claude/claude_desktop_config.json
- Cursor: .cursor/mcp.json (in the project directory)
Development
The project uses pyproject.toml for dependency management, with the following extras:
- dev: development tools (debugpy, Flask, etc.)
- test: testing tools (pytest, coverage, etc.)
- all: all of the above

To install specific extras:

```bash
uv pip install ".[dev]"   # Development dependencies
uv pip install ".[test]"  # Testing dependencies
uv pip install ".[all]"   # All dependencies
```
Debugging with VS Code
- Install the development dependencies (includes debugpy):

```bash
uv pip install ".[dev]"
```

- Start the server and MCP Inspector in debug mode:

```bash
source .venv/bin/activate
python -m debugpy --wait-for-client --listen 5678 .venv/bin/fastmcp dev paprmcp.py
```
- In VS Code:
- Go to Run and Debug view (Ctrl+Shift+D or Cmd+Shift+D)
- Select "Python: Attach to FastMCP"
- Click the green play button or press F5
- Set breakpoints in your code by clicking in the left margin
- The debugger will stop at breakpoints when the code is executed
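The "Python: Attach to FastMCP" entry refers to a launch configuration in .vscode/launch.json. If your checkout doesn't include one, the following minimal sketch attaches to the debugpy port started above (5678 matches the --listen flag):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Python: Attach to FastMCP",
      "type": "debugpy",
      "request": "attach",
      "connect": { "host": "localhost", "port": 5678 }
    }
  ]
}
```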
Troubleshooting
If you encounter any issues:
- Check the logs for detailed error messages
- Ensure your Papr API key is correctly set in the .env file
- Verify the virtual environment is activated
- Make sure all dependencies are installed correctly
For additional help, please contact support or open an issue in the repository.
Integrating with LangChain
You can use the Papr MCP server with LangChain to provide advanced memory capabilities for your agents:
Installation
```bash
pip install langchain langchain-openai langgraph aiohttp
```

(aiohttp is used by the client below; langgraph is needed for the advanced example.)
Basic Integration
```python
import aiohttp
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

# Set up the MCP client for Papr Memory
class PaprMCPClient:
    def __init__(self, server_url="http://localhost:8000"):
        self.server_url = server_url

    async def get_memory(self, query: str, max_memory_items: int = 20):
        url = f"{self.server_url}/mcp_Papr_MCP_Server_get_memory"
        payload = {
            "query": query,
            "max_memory_items": max_memory_items,
        }
        async with aiohttp.ClientSession() as session:
            async with session.post(url, json=payload) as response:
                return await response.json()

    async def add_memory(self, content: str, memory_type: str = "text", metadata: dict | None = None):
        url = f"{self.server_url}/mcp_Papr_MCP_Server_add_memory"
        payload = {
            "content": content,
            "type": memory_type,
        }
        if metadata:
            payload["metadata"] = metadata
        async with aiohttp.ClientSession() as session:
            async with session.post(url, json=payload) as response:
                return await response.json()

# Initialize the client
papr_client = PaprMCPClient()

@tool
async def search_memories(query: str):
    """Search for memories in Papr using the MCP server."""
    return await papr_client.get_memory(query=query)

@tool
async def add_memory(content: str, metadata: dict | None = None):
    """Add a new memory to Papr using the MCP server."""
    return await papr_client.add_memory(content=content, metadata=metadata)

# Set up the LangChain agent
tools = [search_memories, add_memory]
llm = ChatOpenAI(model="gpt-4o", temperature=0).bind_tools(tools)
```
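As a quick smoke test of the tool binding (an illustrative sketch assuming OPENAI_API_KEY is set and the MCP server is running locally), you can send the model a question and inspect which memory tools it asks to call:

```python
import asyncio

async def main():
    # The bound model returns an AIMessage; tool_calls lists any requested memory tools
    response = await llm.ainvoke("What do you remember about my current projects?")
    print(response.tool_calls)

asyncio.run(main())
```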
Advanced Integration with LangGraph
For more complex agents using LangGraph, you can integrate Papr Memory like this:
```python
from langchain_core.messages import HumanMessage
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import ToolNode

# LLM node: call the tool-bound model on the current message history
async def call_model(state: MessagesState):
    response = await llm.ainvoke(state["messages"])
    return {"messages": [response]}

# Route to the tool node while the model keeps requesting tool calls; otherwise end
def should_continue(state: MessagesState):
    last_message = state["messages"][-1]
    return "tools" if getattr(last_message, "tool_calls", None) else END

# Set up the graph nodes
workflow = StateGraph(MessagesState)
workflow.add_node("llm", call_model)
workflow.add_node("tools", ToolNode(tools))

# Add the edges (basic agent workflow)
workflow.add_edge(START, "llm")
workflow.add_conditional_edges("llm", should_continue)
workflow.add_edge("tools", "llm")

# Compile the graph
agent = workflow.compile()

# Run the agent with memory context
async def run_with_memory(query: str):
    # First search for relevant memories (LangChain tools are called via .ainvoke)
    memories = await search_memories.ainvoke({"query": query})

    # Build initial messages with memory context
    messages = [
        HumanMessage(content=f"Context from memory: {memories}\n\nUser query: {query}")
    ]

    # Execute the agent
    result = await agent.ainvoke({"messages": messages})

    # Store the conversation in memory
    await add_memory.ainvoke({
        "content": f"Query: {query}\nResponse: {result['messages'][-1].content}",
        "metadata": {"conversation_type": "agent_interaction"},
    })

    return result["messages"][-1].content
```
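To try it end to end (assuming the MCP server is running locally and OPENAI_API_KEY is set):

```python
import asyncio

answer = asyncio.run(run_with_memory("What did we discuss about the project timeline?"))
print(answer)
```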
This integration allows your LangChain agents to seamlessly use Papr Memory for both retrieving relevant context and storing new information.