Messages API Basics
Learn the core chat-memory workflow with the Papr Messages API: store messages, retrieve session history, generate compressed context, and process messages into long-term memory.
What You Will Build
- A session-based chat memory pipeline
- Compressed context generation for LLM prompts
- Message-to-memory processing for long-term recall
Prerequisites
- Papr API key
- Basic REST API familiarity
- Node.js or Python runtime
Step 1: Store Messages
Use a stable sessionId for each conversation.
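The request body for this step can be assembled in Python before wiring up an HTTP client. A minimal sketch (field names mirror the curl call below; the `build_store_request` helper is illustrative, not part of the Papr SDK):

```python
import os

# Illustrative helper (not part of the Papr SDK): assembles the same request
# the curl call below sends. process_messages=True asks Papr to also process
# the message into long-term memory, not just store it.
def build_store_request(session_id: str, role: str, content: str,
                        external_user_id: str) -> dict:
    return {
        "url": "https://memory.papr.ai/v1/messages",
        "headers": {
            "X-API-Key": os.environ.get("PAPR_MEMORY_API_KEY", ""),
            "Content-Type": "application/json",
            "X-Client-Type": "python",
        },
        "json": {
            "sessionId": session_id,
            "role": role,
            "content": content,
            "external_user_id": external_user_id,
            "process_messages": True,
        },
    }
```

Pass the pieces to any HTTP client, e.g. `requests.post(req["url"], headers=req["headers"], json=req["json"])`.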
curl -X POST https://memory.papr.ai/v1/messages \
-H "X-API-Key: $PAPR_MEMORY_API_KEY" \
-H "Content-Type: application/json" \
-H "X-Client-Type: curl" \
-d '{
"sessionId": "support_session_001",
"role": "user",
"content": "I prefer email alerts and weekly digest summaries.",
"external_user_id": "user_123",
"process_messages": true
}'

Step 2: Retrieve Session History
curl -X GET "https://memory.papr.ai/v1/messages/sessions/support_session_001?limit=50&skip=0" \
-H "X-API-Key: $PAPR_MEMORY_API_KEY" \
-H "X-Client-Type: curl"

Look for these fields in the response:
- messages
- total_count
- summaries
- context_for_llm
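A small sketch of pulling those fields out of the JSON response (the flat response shape assumed here is inferred from the field names above; verify against a live response):

```python
def unpack_history(response: dict):
    """Extract the key fields of a session-history response.

    Field names come from the docs; .get() defaults guard against the
    exact response nesting differing in practice.
    """
    messages = response.get("messages", [])
    total = response.get("total_count", len(messages))
    summaries = response.get("summaries", [])
    context = response.get("context_for_llm", "")
    return messages, total, summaries, context
```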
Step 3: Get Compressed Context
curl -X GET "https://memory.papr.ai/v1/messages/sessions/support_session_001/compress" \
-H "X-API-Key: $PAPR_MEMORY_API_KEY" \
-H "X-Client-Type: curl"

Use the context_for_llm field directly in your model's system or context prompt.
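One common way to use the compressed context is to prepend it as a system message. A minimal sketch (the function name and prompt wording are illustrative; adapt to your model's chat format):

```python
def build_llm_messages(context_for_llm: str, user_turn: str) -> list:
    # Prepend the compressed session context as a system message so the
    # model sees prior history without the full transcript's token cost.
    return [
        {"role": "system",
         "content": f"Relevant conversation context:\n{context_for_llm}"},
        {"role": "user", "content": user_turn},
    ]
```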
Step 4: Search Long-Term Memory
curl -X POST "https://memory.papr.ai/v1/memory/search?max_memories=20&max_nodes=15&response_format=toon" \
-H "X-API-Key: $PAPR_MEMORY_API_KEY" \
-H "Content-Type: application/json" \
-H "X-Client-Type: curl" \
-d '{
"query": "What communication preferences has this user shared?",
"external_user_id": "user_123",
"enable_agentic_graph": true
}'

Python Example
import os
from papr_memory import Papr
client = Papr(x_api_key=os.environ.get("PAPR_MEMORY_API_KEY"))
client.messages.store(
sessionId="support_session_001",
role="user",
content="Please send me updates by email, weekly.",
external_user_id="user_123",
process_messages=True,
)
history = client.messages.get_history(session_id="support_session_001", limit=50)
compressed = client.messages.compress_session(session_id="support_session_001")
results = client.memory.search(
query="What are this user's notification preferences?",
external_user_id="user_123",
enable_agentic_graph=True,
)

Production Notes
- Keep process_messages=true unless you explicitly want storage-only behavior.
- Use one sessionId per conversation thread.
- For long threads, use compressed context to control token usage.
- Use response_format=toon when memory search output feeds directly into LLM prompts.
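For the long-thread note above, a rough way to decide when to switch from raw history to compressed context is a token estimate. A sketch (the ~4-characters-per-token ratio is a crude assumption; use your model's real tokenizer in production):

```python
def should_use_compressed_context(messages: list, token_budget: int = 2000) -> bool:
    # Crude estimate: ~4 characters per token (assumption; swap in your
    # model's tokenizer for accurate counts).
    est_tokens = sum(len(m.get("content", "")) for m in messages) // 4
    return est_tokens > token_budget
```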