Overview & Products Page Updates - Graph-Aware Focus
Summary
Restructured the documentation to focus on graph-aware embeddings as the core value proposition, clearly documenting both integration approaches (full integration vs. plugin).
Changes Made
1. Navigation (redocly.yaml)
- ✅ Logo now links to https://papr.ai/landing
- ✅ Added "Products" link to navbar
- ✅ Added "Dashboard Login" link to navbar
- ✅ Removed pricing page
2. Overview Page (overview/index.md)
Structure: Clean, card-based layout similar to Stripe
Common use cases (8 cards):
- Document Q&A
- Add graph-aware search to your stack (moved up, strengthened messaging)
- Conversational AI with memory
- Code search
- Fraud detection (NEW)
- Recommendation systems (NEW)
- Knowledge management for teams
- Scientific claim verification
Products (simplified to 4 main products):
- Graph-Aware Vector Search
- Knowledge Graphs
- Document Intelligence
- Chat Memory & Compression
Other sections: Start building (4 cards), Deployment options (3 cards), More resources (6 cards)
Removed: "How it works" section (moved to guides)
3. Products Page (overview/products.md)
Completely restructured to focus on 4 main products with clear documentation of integration approaches.
Graph-Aware Vector Search
Two Integration Approaches Documented:
Approach 1: Papr Storage (Full Integration)
- Use when: Building new apps, want full memory management
- Endpoints: `POST /v1/memory` with `enable_holographic=true`
- Example: add/search with graph-aware embeddings enabled
```python
# Add
papr.memory.add(
    content="...",
    enable_holographic=True,
    frequency_schema_id="cosqa"
)

# Search
results = papr.memory.search(
    query="...",
    holographic_config={
        "enabled": True,
        "frequency_schema_id": "cosqa",
        "frequency_filters": {"language": 0.9}
    }
)
```
Approach 2: Plugin/Transform (BYOE)
- Use when: Existing embeddings (OpenAI, Cohere), don't want to migrate
- Endpoints:
  - `POST /v1/holographic/transform`: transform embeddings
  - `POST /v1/holographic/rerank`: rerank search results
- Two options documented:
- Option A: Transform embeddings before storing in your vector DB
- Option B: Rerank existing search results
```python
# Transform
transformed = papr.holographic.transform(
    embedding=openai_embedding,
    frequency_schema_id="cosqa",
    content="..."
)

# Rerank
reranked = papr.holographic.rerank(
    query="...",
    results=your_results,
    frequency_schema_id="cosqa"
)
```
Knowledge Graphs
- Focus on fraud detection, recommendations, relationship mapping
- Clear "When to use" and key capabilities
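Since this section has no snippet, here is a self-contained toy (not the product API) illustrating why multi-hop graph traversal matters for fraud detection: accounts that look unrelated turn out to be linked through shared devices.

```python
from collections import deque

# Toy in-memory graph illustrating the kind of multi-hop relationship query
# a knowledge graph enables for fraud detection. Illustrative only; the real
# product exposes this via its API, not a local data structure.
edges = {
    "account_a": ["device_1"],
    "device_1": ["account_a", "account_b"],
    "account_b": ["device_1", "device_2"],
    "device_2": ["account_b", "account_c"],
    "account_c": ["device_2"],
}

def related_accounts(start: str, max_hops: int = 2) -> set[str]:
    """Return accounts reachable from `start` within `max_hops` edges (BFS)."""
    seen, frontier, found = {start}, deque([(start, 0)]), set()
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for nbr in edges.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                if nbr.startswith("account_"):
                    found.add(nbr)
                frontier.append((nbr, depth + 1))
    return found
```

Within 2 hops, `account_a` only reaches `account_b` (via the shared device); widening the hop budget surfaces `account_c` as well.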
Document Intelligence
- Three integration options: standalone, + vector search, + knowledge graphs
- Pluggable extraction providers
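To make "pluggable extraction providers" concrete, a minimal sketch of the pattern; the `ExtractionProvider` interface and provider names here are illustrative assumptions, not the documented API:

```python
from typing import Protocol

class ExtractionProvider(Protocol):
    """Hypothetical plugin interface: anything that turns a raw document
    into structured fields can be swapped in."""
    def extract(self, document: str) -> dict: ...

class NaiveKeyValueProvider:
    """Toy provider: parses 'key: value' lines from plain text."""
    def extract(self, document: str) -> dict:
        fields = {}
        for line in document.splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                fields[key.strip()] = value.strip()
        return fields

def process(document: str, provider: ExtractionProvider) -> dict:
    # The pipeline depends only on the interface, so providers are swappable.
    return provider.extract(document)

fields = process("invoice_no: 42\ntotal: 19.99", NaiveKeyValueProvider())
```

The point of the pattern: the pipeline never changes when a provider does.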
Chat Memory & Compression
- Session-based storage with automatic compression
- Works with vector search for cross-session recall
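A toy sketch of the session-plus-compression idea (illustrative only; the compression strategy and names are assumptions, not the product's algorithm):

```python
# Toy sketch: once a session exceeds a message budget, older turns are
# collapsed into a summary stub so the session stays bounded.
MAX_TURNS = 4

def append_turn(session: list[str], turn: str) -> list[str]:
    session = session + [turn]
    if len(session) > MAX_TURNS:
        overflow = len(session) - MAX_TURNS + 1
        summary = f"[summary of {overflow} earlier turns]"
        session = [summary] + session[overflow:]
    return session

session: list[str] = []
for i in range(6):
    session = append_turn(session, f"turn {i}")
# session is now capped at MAX_TURNS entries, oldest turns summarized
```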
4. Use Case Examples Updated
Code search example now shows both approaches:
- Approach 1: using Papr storage with `enable_holographic=true`
- Approach 2: plugin with existing embeddings using `/v1/holographic/transform`
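For a quick side-by-side comparison, the two request shapes can be sketched as plain payloads; field names follow the examples above, while the sample values are illustrative:

```python
# Approach 1: full integration, everything goes through Papr storage.
approach_1 = {
    "endpoint": "POST /v1/memory",
    "body": {
        "content": "def parse_config(path): ...",  # sample snippet, hypothetical
        "enable_holographic": True,
        "frequency_schema_id": "cosqa",
    },
}

# Approach 2: plugin, keep your own vector DB and transform embeddings first.
approach_2 = {
    "endpoint": "POST /v1/holographic/transform",
    "body": {
        "embedding": [0.12, -0.08, 0.33],  # your existing OpenAI/Cohere vector
        "frequency_schema_id": "cosqa",
        "content": "def parse_config(path): ...",
    },
}
```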
Key Messaging
What Changed
- ❌ Before: "7 products" messaging (feature counting)
- ✅ After: Clear focus on 4 main products with graph-aware as differentiation
Integration Clarity
- ✅ Two clear paths: Full integration OR plugin approach
- ✅ When to use each: Documented decision criteria
- ✅ All endpoints documented: both `/v1/memory` and `/v1/holographic` approaches
- ✅ Code examples: show both paths side by side
Simplification
- Removed complex product matrices and dependency diagrams
- Removed "How it works" from overview (lives in guides)
- Focused on action-oriented content (what you can build)
- Card-based layout for scannability
Validation
- ✅ Dev server running at http://127.0.0.1:4000
- ✅ 0 markdoc errors
- ✅ All changes hot-reloaded and visible
- ✅ 467 total pages
- ⚠️ 83 pre-existing broken links (not introduced by these changes)
Next Steps (If Approved)
- Review in browser at http://127.0.0.1:4000
- Commit changes with descriptive message
- Consider updating graph-aware-embeddings.md guide to match this structure
- Update decision tree to reflect new product focus