# Chat Memory with OpenAI Integration
This tutorial demonstrates how to build a chat application that stores conversation history in Papr Memory and uses it to enhance OpenAI responses.
## Prerequisites
Before you begin, you'll need:
- A Papr Memory API key
- An OpenAI API key
- Node.js installed
## Implementation

### 1. Project Setup
Create a new project and install dependencies:
```bash
mkdir chat-memory
cd chat-memory
npm init -y
npm install express dotenv openai @papr/sdk
```
Create a `.env` file:

```
PAPR_API_KEY=your_papr_api_key_here
OPENAI_API_KEY=your_openai_api_key_here
```
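Since both clients fail in confusing ways when a key is absent, it can help to validate the environment at startup. A minimal sketch (the `requireEnv` helper is illustrative, not part of any SDK):

```javascript
// Fail fast at startup if a required environment variable is missing,
// instead of letting a client error out later with an opaque message.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example: validate both keys before constructing any clients.
// requireEnv('PAPR_API_KEY');
// requireEnv('OPENAI_API_KEY');
```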
### 2. Creating the Chat Application
Create `app.js`:
```javascript
import { OpenAI } from 'openai';
import { Papr } from '@papr/sdk';
import express from 'express';
import dotenv from 'dotenv';

dotenv.config();

const app = express();
app.use(express.json());

// Initialize the OpenAI and Papr clients
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

const paprClient = new Papr({
  apiKey: process.env.PAPR_API_KEY
});

// Save a chat message to Papr Memory
async function saveToMemory(message, role, conversationId) {
  try {
    const response = await paprClient.memory.add({
      content: message,
      type: 'text',
      metadata: {
        role,
        conversationId,
        timestamp: new Date().toISOString()
      }
    });
    return response;
  } catch (error) {
    throw new Error(`Failed to save message: ${error.message}`);
  }
}

// Search the conversation history
async function searchMemory(query, conversationId) {
  try {
    const searchResponse = await paprClient.memory.search({
      query: `Find our recent conversation history for conversation ${conversationId}, especially messages related to: "${query}". Include both the user's questions and the assistant's responses to provide full context.`
    });
    return searchResponse;
  } catch (error) {
    throw new Error(`Failed to search memory: ${error.message}`);
  }
}

// Process a chat message
app.post('/chat', async (req, res) => {
  try {
    const { message, conversationId } = req.body;
    if (!message || !conversationId) {
      return res.status(400).json({ error: 'Missing required fields' });
    }

    // Save the user message to memory
    await saveToMemory(message, 'user', conversationId);

    // Search for relevant conversation history
    const memoryResults = await searchMemory(message, conversationId);

    // Extract relevant context from memory
    let context = '';
    if (memoryResults.data && memoryResults.data.memories) {
      context = memoryResults.data.memories
        .map(mem => {
          const role = mem.metadata?.role || 'unknown';
          return `${role}: ${mem.content}`;
        })
        .join('\n');
    }

    // Build the messages for OpenAI, including context if available
    const messages = [
      { role: "system", content: "You are a helpful assistant. Use the conversation history provided to give contextual responses." }
    ];

    if (context) {
      messages.push({
        role: "system",
        content: `Here is the relevant conversation history:\n${context}`
      });
    }

    messages.push({ role: "user", content: message });

    // Get a response from OpenAI
    const completion = await openai.chat.completions.create({
      model: "gpt-3.5-turbo",
      messages: messages,
    });

    const aiResponse = completion.choices[0].message.content;

    // Save the assistant response to memory
    await saveToMemory(aiResponse, 'assistant', conversationId);

    // Send the response to the client
    res.json({
      message: aiResponse,
      conversationId
    });
  } catch (error) {
    console.error('Error processing chat:', error);
    res.status(500).json({ error: error.message });
  }
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});
```
### 3. Update package.json
Update your package.json to include:
```json
{
  "name": "chat-memory",
  "version": "1.0.0",
  "description": "Chat app with Papr Memory and OpenAI",
  "main": "app.js",
  "type": "module",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "@papr/sdk": "^1.0.0",
    "dotenv": "^16.3.1",
    "express": "^4.18.2",
    "openai": "^4.0.0"
  }
}
```
## Usage
- Start the server:

  ```bash
  npm start
  ```
- Send a message:

  ```bash
  curl -X POST http://localhost:3000/chat \
    -H "Content-Type: application/json" \
    -d '{
      "message": "What are the benefits of using a vector database?",
      "conversationId": "conv123"
    }'
  ```
- Continue the conversation with follow-up questions:

  ```bash
  curl -X POST http://localhost:3000/chat \
    -H "Content-Type: application/json" \
    -d '{
      "message": "How does that compare to traditional databases?",
      "conversationId": "conv123"
    }'
  ```
## How It Works
**User Message Flow:**

1. The user sends a message
2. The message is saved to Papr Memory
3. The app searches for relevant conversation history
4. OpenAI receives the user message plus context from memory
5. OpenAI generates a response
6. The response is saved to Papr Memory
7. The response is sent back to the user
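Steps 4 and 5 hinge on how the prompt is assembled. That part is pure logic with no API calls, so it can be sketched (and tested) in isolation; the `buildMessages` name is illustrative, mirroring what the `/chat` handler does inline:

```javascript
// Assemble the OpenAI chat message array from the retrieved context
// and the new user message. The context block is only included when
// the memory search actually returned something.
function buildMessages(context, userMessage) {
  const messages = [
    {
      role: 'system',
      content: 'You are a helpful assistant. Use the conversation history provided to give contextual responses.'
    }
  ];
  if (context) {
    messages.push({
      role: 'system',
      content: `Here is the relevant conversation history:\n${context}`
    });
  }
  messages.push({ role: 'user', content: userMessage });
  return messages;
}
```

With history present, the array has three entries (two system messages and the user turn); on a fresh conversation it has two.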
**Memory Integration:**
- Each message is stored with metadata (role, conversation ID, timestamp)
- Search queries include detailed context about what to look for
- Retrieved memories provide conversation context to OpenAI
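The "retrieved memories → context" step can also be isolated as a small pure function, assuming each memory carries `content` and the optional `metadata.role` stored above (a sketch of the logic the handler runs inline):

```javascript
// Format retrieved memories into a "role: content" transcript,
// one line per memory, falling back to "unknown" when a memory
// was stored without a role in its metadata.
function buildContext(memories) {
  return memories
    .map(mem => {
      const role = mem.metadata?.role || 'unknown';
      return `${role}: ${mem.content}`;
    })
    .join('\n');
}
```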
**Benefits:**
- OpenAI has access to past conversations
- Users can reference previous questions
- The assistant maintains context across multiple interactions
## Next Steps
- Add authentication
- Implement conversation management UI
- Add support for document references
- Enhance search with metadata filtering
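As a starting point for the metadata-filtering enhancement, retrieved memories could be narrowed to a single conversation on the client side before building context. This is only a sketch (the function name is illustrative); check whether the Papr search API accepts server-side metadata filters, which would avoid fetching irrelevant memories at all:

```javascript
// Keep only memories whose metadata ties them to the given conversation,
// dropping anything stored without a conversationId.
function filterByConversation(memories, conversationId) {
  return memories.filter(
    mem => mem.metadata?.conversationId === conversationId
  );
}
```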