LLM Tool Schema
This documentation provides the schema definitions for integrating Papr memory operations as tool calls in LLM agents. The schemas are defined using Zod and match our official API specification.
Memory Operations
Adding Memories
The add_memory tool allows LLMs to store new memories in the Papr system.
Schema Definition
const addMemorySchema = z.object({
  content: z.string().describe("The content of the memory item you want to add to memory"),
  type: z.enum(['text', 'code_snippet', 'document'])
    .default('text')
    .describe("Content type of the memory item"),
  metadata: z.object({
    topics: z.string().describe("Comma-separated list of topics related to the content, context & conversation history"),
    hierarchicalStructures: z.string().describe("Hierarchical structures to enable users to navigate from broad topics to more specific ones"),
    emojiTags: z.string().describe("Emoji tags related to the memory item and conversation history"),
    emotionTags: z.string().describe("Emotion tags related to the memory item and conversation history"),
  }).strict().describe("Metadata for the memory item"), // Required
  context: z.array(
    z.object({
      role: z.enum(['user', 'assistant']).describe("Author of this message in the context list"),
      content: z.string().describe("Content of the previous message in conversation history"),
    })
  ).describe("Context for the memory item"),
  relationships_json: z.array(
    z.object({
      related_item_id: z.enum(['TextMemoryItem', 'previous_memory_item_id']).describe("ID of the related item, used in the graph DB"),
      relation_type: z.string().describe("Defines the Neo4j relationship type"),
      related_item_type: z.enum(['TextMemoryItem']).describe("Memory item type inside context to relate"),
      metadata: z.object({
        key1: z.string().describe("Placeholder for a required relation metadata key (e.g. relevance)."),
        key2: z.string().describe("Placeholder for a second required relation metadata key."),
      }).strict().describe("Metadata for the relation"),
    }).strict()
  ).optional().describe("Relationships defining connections between content and context"),
}).required({
  content: true,
  type: true,
  metadata: true,
  context: true,
  // relationships_json is declared .optional() above, so it is omitted from the required mask
});
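Because the metadata object is `.strict()`, payloads with extra or misnamed keys are rejected at validation time. As a rough illustration only (not the official validator, which is the Zod schema above), the same shape checks can be mirrored in plain Python:

```python
# Required keys of the strict metadata object in addMemorySchema
REQUIRED_METADATA_KEYS = {"topics", "hierarchicalStructures", "emojiTags", "emotionTags"}

def validate_add_memory(payload: dict) -> list:
    """Return a list of problems; an empty list means the payload matches the schema shape."""
    errors = []
    if not isinstance(payload.get("content"), str):
        errors.append("content must be a string")
    if payload.get("type", "text") not in {"text", "code_snippet", "document"}:
        errors.append("type must be text, code_snippet, or document")
    metadata = payload.get("metadata")
    # .strict() means exactly these keys, no extras
    if not isinstance(metadata, dict) or set(metadata) != REQUIRED_METADATA_KEYS:
        errors.append("metadata must contain exactly " + ", ".join(sorted(REQUIRED_METADATA_KEYS)))
    context = payload.get("context")
    if not isinstance(context, list) or any(
        set(m) != {"role", "content"} or m["role"] not in {"user", "assistant"}
        for m in (context or [])
    ):
        errors.append("context must be a list of {role, content} messages")
    return errors
```

This is why the example payloads later in this page use the exact camelCase metadata keys from the schema.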
Tool Implementation
const add_memory = tool({
  description: "Add a new memory item to the Papr system with size validation and background processing.",
  parameters: addMemorySchema,
  execute: async (args) => {
    // Implementation details...
  }
});
Retrieving Memories
The search tool enables LLMs to search and retrieve memories from the Papr system.
Schema Definition
const searchSchema = z.object({
  query: z.string()
    .describe("Detailed search query describing what you're looking for. For best results, write 2-3 sentences that include specific details, context, and time frame."),
  rank_results: z.boolean()
    .default(false)
    .describe("Whether to enable additional ranking of search results. Default is false because results are already ranked when using an LLM for search.")
});
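Since the query field does most of the work, it can pay to guard against one-word queries before calling the API. A small sketch of that idea (the `build_search_payload` helper and its 8-word threshold are illustrative, not part of the API):

```python
from typing import Optional

def build_search_payload(query: str, rank_results: bool = False,
                         user_id: Optional[str] = None) -> dict:
    """Assemble a search request body, rejecting queries too terse to
    retrieve well (the 8-word threshold is an arbitrary example)."""
    if len(query.split()) < 8:
        raise ValueError("Describe what you need in 2-3 detailed sentences")
    payload = {"query": query, "rank_results": rank_results}
    if user_id:  # optional filter, mirrors the Python client shown later
        payload["user_id"] = user_id
    return payload
```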
Tool Implementation
const search = tool({
  description: "Search through memories with authentication required.",
  parameters: searchSchema,
  execute: async (args) => {
    // Implementation details...
  }
});
Key Features
- Schema Validation: Uses Zod for robust type checking and validation that matches our API spec
- Timeout Management: Handles timeouts appropriately with retries
- Error Handling: Comprehensive error handling with proper status codes
- Data Processing: Includes memory data processing and validation
- Graph Support: Support for Neo4j graph relationships
- Response Limiting: Automatically limits results based on API constraints
- Security: Proper authentication handling through API keys or tokens
Best Practices
- Metadata Population: Use LLMs to intelligently fill metadata fields based on content
- Query Formation: Leverage LLMs to create detailed, contextual queries
- Error Handling: Always implement proper error handling and fallbacks
- Authorization: Ensure proper session token management
- Timeout Handling: Consider implementing retry logic for timeout scenarios
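The retry advice above can be sketched with a small backoff wrapper (`with_retries` is an illustrative helper, not part of the Papr SDK):

```python
import asyncio

async def with_retries(make_call, attempts: int = 3, base_delay: float = 0.5):
    """Run an async callable, retrying on timeout with exponential backoff."""
    for attempt in range(attempts):
        try:
            return await make_call()
        except asyncio.TimeoutError:
            if attempt == attempts - 1:
                raise  # retries exhausted; surface the timeout to the caller
            await asyncio.sleep(base_delay * 2 ** attempt)
```

In practice `make_call` would wrap the actual HTTP request, for example a coroutine that runs the POST under `asyncio.wait_for` with your chosen timeout.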
Example Usage
Here's a complete example of using both tools in an LLM application:
// Adding a memory
const memoryResult = await add_memory.execute({
  content: "Meeting notes from the product planning session",
  type: "text",
  metadata: {
    topics: "product, planning",
    hierarchicalStructures: "Business/Planning/Product",
    emojiTags: "📊,💡,📝",
    emotionTags: "focused, productive"
  },
  context: [
    { role: "user", content: "Let's discuss the Q2 product roadmap" },
    { role: "assistant", content: "I'll help you plan the roadmap. What are your key objectives?" }
  ],
  relationships_json: [{
    related_item_id: "previous_memory_item_id",
    relation_type: "follows",
    related_item_type: "TextMemoryItem",
    metadata: {
      relevance: "high"
    }
  }]
});
// Searching memories
const searchResults = await search.execute({
  query: "Find recurring customer complaints about API performance from the last month. Focus on issues where customers specifically mentioned timeout errors or slow response times in their conversations.",
  rank_results: true
});
LangChain Integration
You can use the Papr Memory schema with LangChain to create agentic tools that manage memory. Here's how to integrate these schemas with LangChain:
Python Implementation
import os
import json
from typing import Dict, List, Optional, Any
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
# Initialize API key from environment
PAPR_API_KEY = os.environ.get("PAPR_MEMORY_API_KEY")
@tool
async def papr_add_memory(
    content: str,
    type: str = "text",
    metadata: Optional[Dict[str, Any]] = None,
    context: Optional[List[Dict[str, str]]] = None
):
    """Add a new memory item to Papr with size validation and background processing.

    Args:
        content: The content of the memory item you want to add
        type: Content type - one of 'text', 'code_snippet', 'document'
        metadata: Additional metadata like topics, tags, etc.
        context: List of previous conversation messages with role and content

    Returns:
        dict: Response containing the added memory details
    """
    import aiohttp

    url = "https://memory.papr.ai/v1/memory"
    headers = {
        "X-API-Key": PAPR_API_KEY,
        "Content-Type": "application/json",
        "X-Client-Type": "langchain"
    }

    # Build the request payload
    payload = {
        "content": content,
        "type": type
    }
    if metadata:
        payload["metadata"] = metadata
    if context:
        payload["context"] = context

    async with aiohttp.ClientSession() as session:
        async with session.post(url, headers=headers, json=payload) as response:
            response_data = await response.json()
            return response_data
@tool
async def papr_search_memory(
    query: str,
    user_id: Optional[str] = None,
    rank_results: bool = False
):
    """Search through Papr memories with a natural language query.

    Args:
        query: Detailed search query. Write 2-3 sentences with specific details for best results.
        user_id: Optional user ID to filter search results
        rank_results: Whether to enable additional ranking of search results

    Returns:
        dict: Search results containing matching memories
    """
    import aiohttp

    url = "https://memory.papr.ai/v1/memory/search"
    headers = {
        "X-API-Key": PAPR_API_KEY,
        "Content-Type": "application/json",
        "Accept-Encoding": "gzip"
    }

    payload = {
        "query": query,
        "rank_results": rank_results
    }
    if user_id:
        payload["user_id"] = user_id

    async with aiohttp.ClientSession() as session:
        async with session.post(url, headers=headers, json=payload) as response:
            response_data = await response.json()
            return response_data
# Add to LangChain agent
tools = [papr_add_memory, papr_search_memory]
llm = ChatOpenAI(model="gpt-4o", temperature=0).bind_tools(tools)
Example with LangGraph
For a complete integration with LangGraph:
from langchain_core.messages import HumanMessage
from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import ToolNode

# The LLM node must be a callable that runs the model over the current messages
def call_llm(state: MessagesState):
    return {"messages": [llm.invoke(state["messages"])]}

# Build the graph
workflow = StateGraph(MessagesState)
workflow.add_node("llm", call_llm)
workflow.add_node("tools", ToolNode(tools))

# Add the edges
workflow.add_edge(START, "llm")
workflow.add_edge("tools", "llm")
workflow.add_conditional_edges(
    "llm",
    # Route to the tool node while the model keeps requesting tool calls
    lambda state: "tools" if state["messages"][-1].tool_calls else END
)

# Compile the graph
agent = workflow.compile()
# Run the agent
async def run_with_memory_context(query):
    # First, search for relevant memories (functions wrapped with @tool are
    # LangChain tools, so call them via .ainvoke rather than directly)
    memory_results = await papr_search_memory.ainvoke({"query": query})

    # Format the retrieved memories as context
    memory_context = ""
    if memory_results.get("code") == 200 and memory_results.get("data", {}).get("memories"):
        memories = memory_results["data"]["memories"]
        memory_context = "Relevant information from memory:\n"
        for i, memory in enumerate(memories[:3], 1):  # Top 3 memories for context
            memory_context += f"{i}. {memory['content']}\n"

    # Create the input with memory context
    input_with_context = f"{memory_context}\n\nUser query: {query}"

    # Run the agent
    result = await agent.ainvoke({"messages": [HumanMessage(content=input_with_context)]})

    # Store the conversation in memory
    await papr_add_memory.ainvoke({
        "content": f"User question: {query}\nAgent response: {result['messages'][-1].content}",
        "metadata": {"source": "agent_conversation", "stored_by": "langchain_agent"}
    })

    return result["messages"][-1].content
This integration demonstrates:
- How to convert Papr Memory schemas into LangChain tools
- Building a complete agent with memory retrieval and storage
- Using memory context to enhance the agent's responses
By using these patterns, you can create more intelligent agents that remember past interactions and use relevant context for each conversation.