Quickstart: Chat Memory
Build persistent chat memory using session-scoped messages, context compression, and memory search.
What You Will Build
- Store chat messages by session
- Retrieve compressed conversation context for LLM prompts
- Search long-term memory with graph-aware retrieval
Prerequisites
- `PAPR_MEMORY_API_KEY` configured in your environment
- A stable `sessionId` and `external_user_id` per conversation
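If you plan to follow along from a script rather than curl, the sketch below (Python; an assumption, since the quickstart itself only shows curl) reads the API key from the environment and fixes the two identifiers reused throughout the steps.

```python
import os

# The API key must be available before any request is made.
API_KEY = os.environ.get("PAPR_MEMORY_API_KEY")
if not API_KEY:
    raise RuntimeError("Set PAPR_MEMORY_API_KEY in your environment first")

# Keep these stable for the whole conversation and end user.
SESSION_ID = "session_support_001"   # one value per conversation
EXTERNAL_USER_ID = "user_123"        # one value per end user
```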
Minimal Setup
- Store one chat message with `process_messages=true`.
- Retrieve compressed context for the same session.
- Run a memory search before response generation.
1) Store a Message
```bash
curl -X POST https://memory.papr.ai/v1/messages \
-H "X-API-Key: $PAPR_MEMORY_API_KEY" \
-H "Content-Type: application/json" \
-H "X-Client-Type: curl" \
-d '{
"sessionId": "session_support_001",
"role": "user",
"content": "I prefer email notifications and weekly summaries.",
"external_user_id": "user_123",
"process_messages": true
}'
```
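The same write can be issued from application code. Below is a minimal sketch in Python using the `requests` library (an assumption; the quickstart only documents the HTTP endpoint), posting the same payload as the curl command above.

```python
import os

import requests  # assumed HTTP client; any client works

# Same payload as the curl command above, issued from application code.
resp = requests.post(
    "https://memory.papr.ai/v1/messages",
    headers={
        "X-API-Key": os.environ["PAPR_MEMORY_API_KEY"],
        "Content-Type": "application/json",
        "X-Client-Type": "python-quickstart",  # assumed label; the curl examples use "curl"
    },
    json={
        "sessionId": "session_support_001",
        "role": "user",
        "content": "I prefer email notifications and weekly summaries.",
        "external_user_id": "user_123",
        "process_messages": True,  # enable memory extraction for this message
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```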
2) Retrieve Session Context
```bash
curl -X GET "https://memory.papr.ai/v1/messages/sessions/session_support_001/compress" \
-H "X-API-Key: $PAPR_MEMORY_API_KEY" \
-H "X-Client-Type: curl"Use context_for_llm directly in your model prompt.
3) Search Memory Before Responding
curl -X POST "https://memory.papr.ai/v1/memory/search?max_memories=20&max_nodes=15" \
-H "X-API-Key: $PAPR_MEMORY_API_KEY" \
-H "Content-Type: application/json" \
-H "X-Client-Type: curl" \
-d '{
"query": "What communication preferences does this user have?",
"external_user_id": "user_123",
"enable_agentic_graph": true
}'
```
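Before generating a reply, the same search can be run from code. The sketch below (Python with `requests`, an assumption) posts the query with the parameters shown above; the shape of the search response is not documented here, so the result handling is only an illustration.

```python
import os

import requests  # assumed HTTP client

# Graph-aware search scoped to the same external user, run before generating a reply.
resp = requests.post(
    "https://memory.papr.ai/v1/memory/search",
    params={"max_memories": 20, "max_nodes": 15},
    headers={
        "X-API-Key": os.environ["PAPR_MEMORY_API_KEY"],
        "Content-Type": "application/json",
        "X-Client-Type": "python-quickstart",  # assumed label
    },
    json={
        "query": "What communication preferences does this user have?",
        "external_user_id": "user_123",
        "enable_agentic_graph": True,
    },
    timeout=30,
)
resp.raise_for_status()
results = resp.json()

# The response structure is not documented here; inspect it and feed the returned
# memories into your prompt alongside the compressed session context.
print(results)
```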
Recommended Defaults
- `process_messages`: true
- `enable_agentic_graph`: true
- `max_memories`: 15-20
- `max_nodes`: 10-15
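If it helps, these defaults can be captured once in your own code; a minimal sketch (Python, grouping assumed, not an SDK object) that the calls above could reuse:

```python
# Recommended defaults collected once; keys mirror the parameters above.
MEMORY_DEFAULTS = {
    "process_messages": True,      # extract long-term memory on write
    "enable_agentic_graph": True,  # graph-aware retrieval on search
    "max_memories": 20,            # 15-20 recommended
    "max_nodes": 15,               # 10-15 recommended
}
```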
Validation Checklist
- The stored message is visible in session history.
- Compression endpoint returns session summary fields.
- Search returns user preference context from prior turns.
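A rough way to script the last two checks, again as a Python `requests` sketch (an assumption): the `context_for_llm` field name is taken from step 2's note, and everything else about the response shapes should be verified against your actual payloads.

```python
import os

import requests  # assumed HTTP client

headers = {
    "X-API-Key": os.environ["PAPR_MEMORY_API_KEY"],
    "X-Client-Type": "python-quickstart",  # assumed label
}

# Compression check: the endpoint responds for the session and returns a
# non-empty compressed context (field name taken from step 2's note).
compress = requests.get(
    "https://memory.papr.ai/v1/messages/sessions/session_support_001/compress",
    headers=headers,
    timeout=30,
)
compress.raise_for_status()
assert compress.json().get("context_for_llm"), "no compressed context returned"

# Search check: a query scoped to the user returns a non-empty payload.
search = requests.post(
    "https://memory.papr.ai/v1/memory/search",
    params={"max_memories": 20, "max_nodes": 15},
    headers={**headers, "Content-Type": "application/json"},
    json={
        "query": "What communication preferences does this user have?",
        "external_user_id": "user_123",
        "enable_agentic_graph": True,
    },
    timeout=30,
)
search.raise_for_status()
assert search.json(), "empty search response"
print("validation checks passed")
```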
Troubleshooting
If context is missing, confirm that the same `sessionId` is used across writes and reads, and review Messages Management for details.