# Vercel AI SDK Integration Example

Use Papr as your memory layer in a Vercel AI SDK chat workflow.
## Pattern

- On user message, write to `/v1/messages`.
- Retrieve compressed session context and memory search results.
- Feed context into the model call.
- Write assistant output back to `/v1/messages`.
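The two write steps above can be sketched as plain request builders. This is a minimal sketch, not the SDK's API: the request body shape for `/v1/messages` (`session_id`, `role`, `content`) is an assumption for illustration.

```typescript
// Sketch of the per-turn writes to /v1/messages.
// The body shape here is assumed, not taken from the Papr API reference.
type Role = "user" | "assistant";

function buildMessageWrite(sessionId: string, role: Role, content: string) {
  return {
    url: "/v1/messages",
    method: "POST" as const,
    body: { session_id: sessionId, role, content },
  };
}

// One turn: persist the user message, then the assistant reply.
const userWrite = buildMessageWrite("sess_1", "user", "What did we decide yesterday?");
const assistantWrite = buildMessageWrite("sess_1", "assistant", "You chose option B.");
```

Both writes share the same `session_id`, which is what lets later turns retrieve this exchange as context.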
## Prerequisites

- `PAPR_MEMORY_API_KEY` in your environment
- Vercel AI SDK chat handler with access to `sessionId` and `externalUserId`
## Minimal Setup

- Call `getMemoryContext()` before model invocation.
- Append retrieved context to model input.
- Persist assistant output after generation.
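The "append retrieved context" step might look like the sketch below, which prepends retrieved material as a system message. The shapes of the compressed summary and memory results here are assumptions; adapt them to what the Papr responses actually return.

```typescript
// Sketch: fold retrieved context into the model's message list.
// The `compressed`/`memories` shapes are assumed for illustration.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

function withMemoryContext(
  prompt: string,
  context: { compressed: string; memories: string[] }
): ChatMessage[] {
  const contextBlock = [
    "Session summary:",
    context.compressed,
    "Relevant memories:",
    ...context.memories,
  ].join("\n");
  return [
    { role: "system", content: contextBlock },
    { role: "user", content: prompt },
  ];
}

const messages = withMemoryContext("What's next?", {
  compressed: "User is planning a product launch.",
  memories: ["Launch date is Friday."],
});
```

The resulting `messages` array can be passed as the model input in your Vercel AI SDK handler.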
## TypeScript Skeleton

```typescript
import Papr from "@papr/memory";

const papr = new Papr({ xAPIKey: process.env.PAPR_MEMORY_API_KEY });

export async function getMemoryContext(sessionId: string, externalUserId: string, prompt: string) {
  // Compressed context for the active session
  const compressed = await papr.messages.compressSession({ session_id: sessionId });
  // User-scoped memory search with agentic graph traversal enabled
  const memory = await papr.memory.search({
    query: prompt,
    external_user_id: externalUserId,
    enable_agentic_graph: true,
  });
  return { compressed, memory };
}
```

## Validation Checklist
- Session compression returns data for active sessions.
- Memory search returns user-scoped context for prompts.
- Follow-up turns retrieve prior assistant outputs.
## Troubleshooting

If context is not reused across turns, verify that `sessionId` and `externalUserId` are stable for the whole conversation rather than regenerated per request.
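One way this goes wrong is minting a new session ID on every request. A sketch of the fix, using an in-memory `Map` as a stand-in for whatever session store your app actually uses:

```typescript
// Sketch: derive the session ID once per conversation and reuse it.
// The Map is a placeholder for your real session store.
const sessionIds = new Map<string, string>();

function stableSessionId(conversationId: string): string {
  let id = sessionIds.get(conversationId);
  if (!id) {
    id = `sess_${conversationId}`;
    sessionIds.set(conversationId, id);
  }
  return id;
}
```

Repeated calls with the same conversation ID return the same session ID, so every turn reads and writes the same Papr session.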