Vercel AI SDK Integration Example

Use Papr as your memory layer in a Vercel AI SDK chat workflow.

Pattern

  1. On each user message, write it to /v1/messages.
  2. Retrieve the compressed session context and memory search results.
  3. Feed both into the model call.
  4. Write the assistant output back to /v1/messages.
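Steps 1 and 4 are plain writes to /v1/messages. A minimal sketch follows; the request body field names, header name, and base URL are assumptions here, not confirmed API details — check the Papr API reference for the actual shape.

```typescript
// Assumed request body shape for a /v1/messages write (field names are a guess).
interface MessageWrite {
  session_id: string;
  external_user_id: string;
  role: "user" | "assistant";
  content: string;
}

// Pure builder: maps the handler's ids and content onto the write body.
function buildMessageWrite(
  sessionId: string,
  externalUserId: string,
  role: "user" | "assistant",
  content: string,
): MessageWrite {
  return {
    session_id: sessionId,
    external_user_id: externalUserId,
    role,
    content,
  };
}

// POST to /v1/messages. PAPR_API_BASE and the X-API-Key header name are
// assumptions; substitute your deployment's actual base URL and auth header.
async function writeMessage(body: MessageWrite): Promise<void> {
  await fetch(`${process.env.PAPR_API_BASE}/v1/messages`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-API-Key": process.env.PAPR_MEMORY_API_KEY ?? "",
    },
    body: JSON.stringify(body),
  });
}
```

Call writeMessage once with role "user" at the start of the turn (step 1) and once with role "assistant" after generation (step 4).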

Prerequisites

  • PAPR_MEMORY_API_KEY in your environment
  • Vercel AI SDK chat handler with access to sessionId and externalUserId

Minimal Setup

  1. Call getMemoryContext() before model invocation.
  2. Append retrieved context to model input.
  3. Persist assistant output after generation.

TypeScript Skeleton

import Papr from "@papr/memory";

// API key read from the environment (see Prerequisites).
const papr = new Papr({ xAPIKey: process.env.PAPR_MEMORY_API_KEY });

// Fetch both context sources for a turn: a compressed recap of the current
// session and user-scoped memory search results for the prompt.
export async function getMemoryContext(
  sessionId: string,
  externalUserId: string,
  prompt: string,
) {
  // Compressed summary of prior turns in this session.
  const compressed = await papr.messages.compressSession({ session_id: sessionId });
  // Memory search scoped to this user, with agentic graph traversal enabled
  // for multi-hop retrieval.
  const memory = await papr.memory.search({
    query: prompt,
    external_user_id: externalUserId,
    enable_agentic_graph: true,
  });
  return { compressed, memory };
}
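The skeleton above covers context retrieval; the rest of the turn (feed context to the model, persist the output) can be sketched as below. To keep the sketch dependency-free, the model call and the persistence write are injected as functions — with the Vercel AI SDK, callModel would wrap generateText from the "ai" package. The summary/snippets shape of the retrieved context is an assumption about the Papr response types, not a confirmed schema.

```typescript
// Pure helper: fold the compressed session summary and memory snippets into
// one system string for the model.
export function composeSystemContext(summary: string, snippets: string[]): string {
  return [
    "Session context:",
    summary,
    "Relevant memories:",
    ...snippets.map((s) => `- ${s}`),
  ].join("\n");
}

// One full turn after retrieval. With the Vercel AI SDK, callModel might be:
//   (system, prompt) =>
//     generateText({ model: openai("gpt-4o"), system, prompt }).then((r) => r.text)
export async function completeTurn(
  prompt: string,
  context: { summary: string; snippets: string[] },
  callModel: (system: string, prompt: string) => Promise<string>,
  persist: (role: "assistant", content: string) => Promise<void>,
): Promise<string> {
  const system = composeSystemContext(context.summary, context.snippets);
  const text = await callModel(system, prompt);
  await persist("assistant", text); // step 4: write back to /v1/messages
  return text;
}
```

Injecting callModel and persist also makes the turn logic unit-testable without network access.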

Validation Checklist

  • Session compression returns data for active sessions.
  • Memory search returns user-scoped context for prompts.
  • Follow-up turns retrieve prior assistant outputs.

Troubleshooting

If context is not reused across turns, verify that sessionId and externalUserId are stable — the same values on every turn for a given conversation and user. New ids on each turn create fresh sessions, so compression and search find nothing to reuse.
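If your chat framework does not hand you a durable session id, one option is to derive it deterministically from a stable conversation identifier, so every turn in the same conversation maps to the same Papr session. A minimal sketch:

```typescript
import { createHash } from "node:crypto";

// Deterministic: the same conversation id always yields the same 32-character
// session id, and different conversations get different ids.
export function stableSessionId(conversationId: string): string {
  return createHash("sha256").update(conversationId).digest("hex").slice(0, 32);
}
```

The same approach works for externalUserId (e.g. hashing your app's user id) if you need to keep raw identifiers out of the memory layer.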