Papr Products
Four products that work standalone or together to turn your data into intelligence.
Graph-Aware Vector Search
Domain-tuned search that understands context beyond semantic similarity.
Traditional vector search treats everything the same. Graph-aware search encodes structured dimensions—programming language, temporal context, domain expertise—directly into the embedding space.
When to use:
- Code search (filter by language, API patterns)
- Scientific claims (filter by methodology, evidence type)
- Custom domains (legal, medical, financial)
- Transform embeddings from any provider (OpenAI, Cohere, etc.)
- Improve existing search results with domain-aware reranking
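To build intuition for "encoding structured dimensions into the embedding space," here is a conceptual sketch only: appending dimension scores to a base semantic vector so that filters can act on those positions. Papr's actual transform is internal and almost certainly more sophisticated; the names below (`base_embedding`, `dimensions`) are illustrative, not the real implementation.

```python
# Conceptual sketch: a graph-aware vector as a semantic vector plus
# explicit structured dimensions (NOT Papr's actual transform).
base_embedding = [0.12, -0.53, 0.08]  # semantic vector (truncated for brevity)
dimensions = {
    "language_python": 0.97,   # how strongly the content matches Python
    "temporal_recent": 0.20,   # how recent the content is
}

# Appending the dimension scores makes them directly filterable.
graph_aware = base_embedding + list(dimensions.values())
```

A filter like `"language": 0.9` can then be read as a threshold on one of those structured positions rather than a pure cosine-similarity cutoff.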
Two integration approaches:
Approach 1: Papr Storage (Full Integration)
Use Papr's memory storage with graph-aware embeddings enabled.
When to use:
- Building new applications from scratch
- Want Papr to handle storage and search
- Need full memory management features
Endpoints:
- POST /v1/memory with enable_holographic=true
- POST /v1/memory/search with holographic_config
# Store with graph-aware embeddings
papr.memory.add(
content="def sort_list(items): return sorted(items)",
enable_holographic=True,
frequency_schema_id="cosqa", # Built-in schema
external_user_id="user_123"
)
# Search with domain filtering
results = papr.memory.search(
query="How do I sort a list?",
external_user_id="user_123",
holographic_config={
"enabled": True,
"frequency_schema_id": "cosqa",
"frequency_filters": {
"language": 0.9 # 90%+ language match
}
}
)
Approach 2: Plugin/Transform (Bring Your Own Embeddings)
Use /holographic endpoints with your existing embedding provider and vector database.
When to use:
- Already using OpenAI, Cohere, or another embedding provider
- Have existing vector database (Pinecone, Weaviate, etc.)
- Don't want to migrate data
- Just need graph-aware transformation or reranking
Endpoints:
- POST /v1/holographic/transform - Transform embeddings to graph-aware
- POST /v1/holographic/rerank - Rerank search results with domain scoring
# Option A: Transform embeddings before storing
openai_embedding = openai.embeddings.create(
input="def sort_list(items): return sorted(items)"
)
transformed = papr.holographic.transform(
embedding=openai_embedding.data[0].embedding,
frequency_schema_id="cosqa",
content="def sort_list(items): return sorted(items)" # For dimension extraction
)
# Store in your existing vector DB
pinecone.upsert(vectors=[transformed])
# Option B: Rerank existing search results
initial_results = your_search_system.search("How to sort a list?")
reranked = papr.holographic.rerank(
query="How to sort a list?",
results=[{"content": r.content, "score": r.score} for r in initial_results],
frequency_schema_id="cosqa"
)
Built-in schemas:
- cosqa - Code search (+5.5% NDCG@10 vs baseline)
- scifact - Scientific claims (+36% NDCG@10 vs baseline)
- general - Mixed content domains
Custom domains:
- Register with POST /v1/holographic/domains
- Define 14 frequency bands for your domain
- Examples: legal contracts, medical records, financial analysis
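For illustration, here is a minimal sketch of what a registration payload for POST /v1/holographic/domains might look like. Only the 14-band requirement comes from the docs above; the field names (`domain_id`, `description`, `frequency_bands`) and the example band names are assumptions about the request shape, not the confirmed API.

```python
# Hypothetical payload for POST /v1/holographic/domains (field names assumed).
# The API requires exactly 14 frequency bands per custom domain.
legal_domain = {
    "domain_id": "legal_contracts",           # assumed identifier field
    "description": "Contract clause search",
    "frequency_bands": [                      # 14 bands, per the requirement above
        "contract_type", "jurisdiction", "party_role", "clause_category",
        "obligation_type", "temporal_scope", "monetary_terms", "liability",
        "termination", "governing_law", "confidentiality", "indemnification",
        "dispute_resolution", "amendment_history",
    ],
}
```

Once registered, the domain's schema id would be passed as `frequency_schema_id`, the same way the built-in `cosqa` schema is used in the examples above.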
Get started:
Knowledge Graphs
Map entity relationships and patterns beyond vector similarity.
Extract entities and relationships from your data into a queryable graph. Find connections, run analytics, detect fraud, build recommendations.
When to use:
- Need to understand relationships, not just similarity
- Fraud detection (suspicious connection patterns)
- Recommendation systems (user-item relationships)
- Knowledge bases with structured ontologies
- Multi-hop reasoning and analytics
Key capabilities:
- Auto-extract entities from memories (memory_policy.mode=auto)
- Custom domain schemas (POST /v1/schemas)
- GraphQL queries for analytics and aggregations
- Multi-hop graph-aware search
- Relationship traversal and pattern matching
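The two ends of that workflow can be sketched as payload shapes: a memory stored with auto entity extraction, and a GraphQL query over the resulting graph. The `memory_policy` setting and GraphQL querying come from the capability list above; the graph schema in the query (`companies`, `acquired`) is hypothetical.

```python
# 1) Store a memory with automatic entity extraction enabled,
#    mirroring the memory_policy.mode=auto capability above.
memory_payload = {
    "content": "Acme Corp acquired Beta Inc in 2023",
    "memory_policy": {"mode": "auto"},  # auto-extract entities into the graph
}

# 2) Query the resulting graph with GraphQL for multi-hop analytics.
#    The schema here (companies, acquired) is illustrative only.
acquisitions_query = """
query Acquisitions {
  companies(where: { acquired: { year: { _eq: 2023 } } }) {
    name
    acquired { name }
  }
}
"""
```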
Get started:
Document Intelligence
Extract structure and meaning from PDFs, Word docs, and images.
Intelligent document parsing that understands tables, figures, hierarchies, and context. Works standalone or integrates with memory and graph products.
When to use:
- Document Q&A systems
- Contract analysis, legal documents
- Research paper extraction
- PDF/Word/image understanding
- Intelligent chunking and classification
Key capabilities:
- Upload documents: POST /v1/document
- Pluggable extraction providers (TensorLake, Reducto, Gemini Vision)
- Automatic metadata extraction and structure recognition
- Optional memory creation (integrate with vector/graph)
- WebSocket status updates for large documents
Integration options:
- Standalone: Extract structured data and handle it yourself
- + Vector Search: Automatic searchable memory from documents
- + Knowledge Graphs: Extract entities and relationships from documents
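The three integration options above could translate into upload configurations along these lines. The endpoint and `enable_graph` flag appear elsewhere in this page; the `create_memory` parameter name is an assumption about how standalone vs. memory-backed modes are selected.

```python
# Illustrative option sets for POST /v1/document (parameter names assumed,
# except enable_graph, which appears in the Document Q&A example below).
standalone = {"file": "contract.pdf", "create_memory": False}  # extract only
with_search = {"file": "contract.pdf", "create_memory": True}  # + vector search
with_graph = {
    "file": "contract.pdf",
    "create_memory": True,
    "enable_graph": True,  # also extract entity relationships
}
```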
Get started:
Chat Memory & Compression
Store conversations with automatic compression for long-running context.
Session-based conversation storage that automatically compresses long histories and extracts important facts into long-term memory.
When to use:
- Building chatbots or conversational AI
- Session-based conversation storage
- Automatic context compression for long chats
- Extract facts from conversations into permanent memory
Key capabilities:
- Store messages: POST /v1/messages
- Automatic compression every 15 messages
- Extract memories from conversations (process_messages=true)
- Session management and status tracking
- Retrieve conversation history (newest-first)
Works with:
- Vector Search: Cross-session recall of user preferences
- Knowledge Graphs: Entity extraction from conversations
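The "every 15 messages" compression cadence above can be sketched with a small helper. This function is illustrative, not part of the SDK; it just shows which message counts land on a compression boundary.

```python
# Illustrative helper (not an SDK function): the docs state compression
# runs every 15 messages, so boundaries fall on multiples of 15.
def triggers_compression(message_count: int, interval: int = 15) -> bool:
    """True when storing this message hits a compression boundary."""
    return message_count > 0 and message_count % interval == 0
```

For a 45-message session, compression would run after messages 15, 30, and 45, keeping the active context short while extracted facts persist in long-term memory.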
Get started:
Common use cases
Conversational AI with long-term memory
Products: Chat Memory + Graph-Aware Vector Search
Store conversations with automatic compression and search across sessions with domain-aware precision.
# Store current conversation with compression
client.messages.store(
content="I prefer dark mode and email notifications",
role="user",
session_id="conv_123",
process_messages=True # Extracts preferences as long-term memories
)
# Later, search across all sessions
prefs = client.memory.search(
query="What are this user's preferences?",
external_user_id="user_123"
)
Document Q&A with entity relationships
Products: Document Intelligence + Knowledge Graphs
Extract from PDFs and query with entity awareness and relationship mapping.
# Upload document (extracts structure and entities)
doc = client.document.upload(
file_path="contract.pdf",
enable_graph=True # Extract entity relationships
)
# Search with graph-aware context
results = client.memory.search(
query="What are Acme Corp's obligations?",
enable_agentic_graph=True
)
Code search with language filtering
Products: Graph-Aware Vector Search
Two approaches depending on your existing infrastructure:
Approach 1: Using Papr storage (/v1/memory with enable_holographic=true)
# Index code with graph-aware embeddings
client.memory.add(
content="def sort_list(items): return sorted(items)",
enable_holographic=True,
frequency_schema_id="cosqa"
)
# Search with language filtering
results = client.memory.search(
query="How do I sort a list?",
holographic_config={
"enabled": True,
"frequency_schema_id": "cosqa",
"frequency_filters": {
"language": 0.9 # 90%+ must be same language
}
}
)
Approach 2: Plugin with existing embeddings (/v1/holographic/transform)
# Transform your existing OpenAI embeddings
openai_emb = openai.embeddings.create(input="def sort_list(items): return sorted(items)")
transformed = client.holographic.transform(
embedding=openai_emb.data[0].embedding,
frequency_schema_id="cosqa",
content="def sort_list(items): return sorted(items)"
)
# Store in your existing vector DB (Pinecone, Weaviate, etc.)
your_vector_db.upsert(transformed)
Fraud detection with relationship mapping
Products: Knowledge Graphs
Detect suspicious patterns through entity connections and relationship analysis.
# Store transactions with entity extraction
client.memory.add(
content="User A transferred $10k to User B",
memory_policy={"mode": "auto"} # Extract entities
)
# Query suspicious patterns
suspicious = client.graphql.query("""
query SuspiciousPatterns {
users(where: {
transactions_aggregate: { count: { _gt: 10 } },
created_at: { _gte: "2024-01-01" }
}) {
id
transactions { amount, recipient }
}
}
""")
Decision flow
What are you building?
"Chatbot with memory"
→ Start with Chat Memory
→ Add Graph-Aware Search for cross-session recall
"Document Q&A"
→ Start with Document Intelligence
→ Add Knowledge Graphs for entity relationships
"Code search platform"
→ Start with Graph-Aware Search (cosqa schema)
"Fraud detection / Recommendations"
→ Start with Knowledge Graphs
"Transform existing embeddings"
→ Use Graph-Aware Search BYOE transform
Next steps:
Additional capabilities
AI Model Proxy - Unified API for OpenAI, Anthropic, Google with usage tracking
Sync & Portability - Local/cloud sync and OMO export/import