Chat Memory with OpenAI Integration
This tutorial demonstrates how to build a chat application that stores conversation history in Papr Memory and uses it to enhance OpenAI responses.
Production-Ready Reference Implementation
If you're looking for a complete, production-ready implementation of a chat application with memory capabilities, check out our open-source PaprChat project. PaprChat is built on Next.js 14 and demonstrates all the concepts covered in this tutorial in a full-featured application.
Key features of PaprChat:
- Long-term memory for personalized chat experiences powered by Papr
- Automatic storage and retrieval of relevant conversations
- RAG (Retrieval-Augmented Generation) capabilities
- Semantic search across conversation history
- Persistent memory storage in the cloud
- Built with Next.js 14 App Router
- AI SDK for unified LLM interactions
- Beautiful UI with shadcn/ui and Tailwind CSS
- Component primitives from Radix UI
- Production-ready with Neon Serverless Postgres, Vercel Blob storage, and Auth.js
Model Support: PaprChat ships with xAI grok-2-1212 as the default chat model, but supports multiple providers through the AI SDK including OpenAI, Anthropic, Cohere, and many more.
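For example, switching models through the AI SDK is a one-line change. The sketch below is illustrative only and is not PaprChat's actual configuration; the `generateText` helper and the `@ai-sdk/xai` / `@ai-sdk/openai` provider packages are from the public AI SDK:

```js
// Illustrative sketch — not PaprChat's actual config file.
import { generateText } from 'ai';
import { xai } from '@ai-sdk/xai';
import { openai } from '@ai-sdk/openai';

// PaprChat's default model...
const model = xai('grok-2-1212');
// ...can be swapped for another provider in one line, e.g.:
// const model = openai('gpt-4o');

const { text } = await generateText({ model, prompt: 'Hello!' });
console.log(text);
```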
Deployment: You can deploy your own instance of PaprChat with one click using the deploy button in the repository. The deployment process will set up:
- Your GitHub repository clone
- A new Vercel project
- Neon PostgreSQL database provisioning
- Vercel Blob Storage configuration
- Required environment variables setup
For detailed setup instructions and customization options, visit our deployment guide.
Prerequisites
Before you begin, you'll need:
- A Papr Memory API key
- An OpenAI API key
- Node.js installed
Implementation
1. Project Setup
Create a new project and install dependencies:
```bash
mkdir chat-memory
cd chat-memory
npm init -y
npm install express dotenv openai @papr/memory
```
Create a `.env` file:
```
PAPR_MEMORY_API_KEY=your_papr_api_key_here
OPENAI_API_KEY=your_openai_api_key_here
```
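Optionally, verify that both keys load before wiring up the app. A minimal sanity check (a sketch; the `check-env.js` filename is arbitrary):

```js
// check-env.js — fail fast if either API key is missing from .env
import dotenv from 'dotenv';
dotenv.config();

for (const key of ['PAPR_MEMORY_API_KEY', 'OPENAI_API_KEY']) {
  if (!process.env[key]) {
    console.error(`Missing ${key} in .env`);
    process.exit(1);
  }
}
console.log('Both API keys are set.');
```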
2. Creating the Chat Application
Create `app.js`:
```js
import { OpenAI } from 'openai';
import Papr from '@papr/memory';
import express from 'express';
import dotenv from 'dotenv';
dotenv.config();
const app = express();
app.use(express.json());
// Initialize the OpenAI and Papr Memory clients
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY
});
const paprClient = new Papr({
apiKey: process.env.PAPR_MEMORY_API_KEY
});
// Save chat message to Papr Memory
async function saveToMemory(message, role, conversationId) {
try {
const response = await paprClient.memory.add({
content: message,
type: 'text',
metadata: {
role,
conversationId,
createdAt: new Date().toISOString(),
external_user_id: "user_123", // Using external user ID for automatic user creation
emoji_tags: "💬,💭",
emotion_tags: role === 'user' ? "inquiring" : "helpful",
external_user_read_access: ["user_123"]
}
});
return response;
} catch (error) {
throw new Error(`Failed to save message: ${error.message}`);
}
}
// Search conversation history
async function searchMemory(query, conversationId) {
try {
const searchResponse = await paprClient.memory.search({
query: `Find our recent conversation history for conversation ${conversationId}, especially messages related to: "${query}". Include both the user's questions and the assistant's responses to provide full context.`,
external_user_id: "user_123", // Using external user ID for filtering
enable_agentic_graph: true,
rank_results: false,
max_memories: 20,
metadata: {
conversationId: conversationId
}
});
return searchResponse;
} catch (error) {
throw new Error(`Failed to search memory: ${error.message}`);
}
}
// Process a chat message
app.post('/chat', async (req, res) => {
try {
const { message, conversationId } = req.body;
if (!message || !conversationId) {
return res.status(400).json({ error: 'Missing required fields' });
}
// Save user message to memory
await saveToMemory(message, 'user', conversationId);
// Search for relevant conversation history
const memoryResults = await searchMemory(message, conversationId);
// Extract relevant context from memory
let context = "";
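// Note: the mapping below assumes the search response is shaped
// { data: { memories: [{ content, metadata }, ...] } };
// verify this against your installed @papr/memory version.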
if (memoryResults.data && memoryResults.data.memories) {
context = memoryResults.data.memories
.map(mem => {
const role = mem.metadata?.role || 'unknown';
return `${role}: ${mem.content}`;
})
.join('\n');
}
// Create messages for OpenAI, including context if available
const messages = [
{ role: "system", content: "You are a helpful assistant. Use the conversation history provided to give contextual responses." }
];
if (context) {
messages.push({
role: "system",
content: `Here is the relevant conversation history:\n${context}`
});
}
messages.push({ role: "user", content: message });
// Get response from OpenAI
const completion = await openai.chat.completions.create({
model: "gpt-3.5-turbo",
messages: messages,
});
const aiResponse = completion.choices[0].message.content;
// Save assistant response to memory
await saveToMemory(aiResponse, 'assistant', conversationId);
// Send response to client
res.json({
message: aiResponse,
conversationId
});
} catch (error) {
console.error('Error processing chat:', error);
res.status(500).json({ error: error.message });
}
});
const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
console.log(`Server running on port ${PORT}`);
});
```
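One optional refinement: saving the user message and searching history are independent calls, so they can run concurrently. A sketch of the change inside the `/chat` handler (behavior is otherwise identical; whether the search can see the message saved in the same request depends on Papr's indexing either way):

```js
// Run the save and the search concurrently instead of sequentially:
const [, memoryResults] = await Promise.all([
  saveToMemory(message, 'user', conversationId),
  searchMemory(message, conversationId),
]);
```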
3. Update package.json
Update your package.json to include the following; the `"type": "module"` field is required for the ES-module `import` syntax used in `app.js`:
```json
{
"name": "chat-memory",
"version": "1.0.0",
"description": "Chat app with Papr Memory and OpenAI",
"main": "app.js",
"type": "module",
"scripts": {
"start": "node app.js"
},
"dependencies": {
"@papr/memory": "^1.0.0",
"dotenv": "^16.3.1",
"express": "^4.18.2",
"openai": "^4.0.0"
}
}
```
Usage
- Start the server:
```bash
npm start
```
- Send a message:
```bash
curl -X POST http://localhost:3000/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": "What are the benefits of using a vector database?",
    "conversationId": "conv123"
  }'
```
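The server replies with the assistant's answer and the conversation ID, matching the `res.json` call in `app.js` (the answer text here is illustrative):

```json
{
  "message": "Vector databases store embeddings, which makes similarity search over unstructured data fast...",
  "conversationId": "conv123"
}
```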
- Continue the conversation with follow-up questions:
```bash
curl -X POST http://localhost:3000/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": "How does that compare to traditional databases?",
    "conversationId": "conv123"
  }'
```
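If you prefer scripting to curl, the same request can be sent from Node (assuming Node 18+, which ships with a global `fetch`; run it with `node client.js` inside this project so that `"type": "module"` enables top-level `await`):

```js
// client.js — send a chat message to the local server
const res = await fetch('http://localhost:3000/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    message: 'How does that compare to traditional databases?',
    conversationId: 'conv123',
  }),
});

const data = await res.json();
console.log(data.message);
```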
How It Works
User Message Flow:
- User sends a message
- Message is saved to Papr Memory
- App searches for relevant conversation history
- OpenAI receives the user message + context from memory
- OpenAI generates a response
- Response is saved to Papr Memory
- Response is sent back to the user
Memory Integration:
- Each message is stored with metadata (role, conversation ID, timestamp); see the example payload after this list
- Search queries include detailed context about what to look for
- Retrieved memories provide conversation context to OpenAI
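Concretely, each call to `saveToMemory` stores a payload shaped like this (the content and timestamp are illustrative):

```json
{
  "content": "What are the benefits of using a vector database?",
  "type": "text",
  "metadata": {
    "role": "user",
    "conversationId": "conv123",
    "createdAt": "2025-01-01T12:00:00.000Z",
    "external_user_id": "user_123",
    "emoji_tags": "💬,💭",
    "emotion_tags": "inquiring",
    "external_user_read_access": ["user_123"]
  }
}
```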
Benefits:
- OpenAI has access to past conversations
- Users can reference previous questions
- The assistant maintains context across multiple interactions
Comparing with PaprChat
While this tutorial provides a simple implementation to understand the core concepts, PaprChat offers a more sophisticated approach:
| Feature | This Tutorial | PaprChat |
|---|---|---|
| UI | None (API only) | Modern Next.js UI with shadcn/ui and Radix UI |
| Authentication | None | Auth.js integration |
| Database | None | Neon Serverless Postgres |
| File Handling | None | Vercel Blob Storage |
| AI Providers | OpenAI only | Multiple providers (xAI, OpenAI, Anthropic, Cohere) |
| Memory | Basic implementation | Advanced with RAG capabilities |
| Deployment | Manual | One-click Vercel deployment |
| Storage | Papr Memory only | Papr Memory plus Neon Postgres and Vercel Blob |
| Graph Support | Limited | Full agentic graph capabilities |
Next Steps
- Deploy PaprChat with one click for a complete solution
- Customize the chat interface in the `components/` directory
- Configure your preferred AI provider settings in `config/ai.ts`
- Explore the API references in our developer documentation
- Join our Discord community for support and discussions