Overview & Products Page Updates - Graph-Aware Focus

Summary

Restructured the documentation to focus on graph-aware embeddings as the core value proposition, with clear documentation of both integration approaches (full integration vs. plugin).


Changes Made

1. Navigation (redocly.yaml)

  • ✅ Logo now links to https://papr.ai/landing
  • ✅ Added "Products" link to navbar
  • ✅ Added "Dashboard Login" link to navbar
  • ✅ Removed pricing page

2. Overview Page (overview/index.md)

Structure: Clean, card-based layout similar to Stripe's documentation

Common use cases (8 cards):

  1. Document Q&A
  2. Add graph-aware search to your stack (moved up, strengthened messaging)
  3. Conversational AI with memory
  4. Code search
  5. Fraud detection (NEW)
  6. Recommendation systems (NEW)
  7. Knowledge management for teams
  8. Scientific claim verification

Products (simplified to 4 main products):

  1. Graph-Aware Vector Search
  2. Knowledge Graphs
  3. Document Intelligence
  4. Chat Memory & Compression

Other sections: Start building (4 cards), Deployment options (3 cards), More resources (6 cards)

Removed: "How it works" section (moved to guides)


3. Products Page (overview/products.md)

Completely restructured to focus on the 4 main products, with clear documentation of both integration approaches.

Two Integration Approaches Documented:

Approach 1: Papr Storage (Full Integration)

  • Use when: Building new apps, or when you want Papr to manage memory end-to-end
  • Endpoints: POST /v1/memory with enable_holographic=true
  • Example: Add/search with graph-aware embeddings enabled
# Add
papr.memory.add(
    content="...",
    enable_holographic=True,
    frequency_schema_id="cosqa"
)

# Search
results = papr.memory.search(
    query="...",
    holographic_config={
        "enabled": True,
        "frequency_schema_id": "cosqa",
        "frequency_filters": {"language": 0.9}
    }
)
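For teams not using the SDK, the calls above correspond to the documented POST /v1/memory endpoint. A minimal sketch of the equivalent request bodies (field names are taken from the SDK example; the exact wire format is an assumption):

```python
import json

def build_memory_add_payload(content: str) -> dict:
    # Mirrors the SDK add call: enable graph-aware (holographic)
    # embeddings and pick a frequency schema at write time.
    return {
        "content": content,
        "enable_holographic": True,
        "frequency_schema_id": "cosqa",
    }

def build_search_payload(query: str) -> dict:
    # Mirrors the SDK search call: holographic_config controls
    # graph-aware scoring at query time.
    return {
        "query": query,
        "holographic_config": {
            "enabled": True,
            "frequency_schema_id": "cosqa",
            "frequency_filters": {"language": 0.9},
        },
    }

add_body = build_memory_add_payload("def binary_search(arr, target): ...")
search_body = build_search_payload("binary search implementation")
print(json.dumps(add_body))
```

These bodies would then be sent to POST /v1/memory with your usual auth headers.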

Approach 2: Plugin/Transform (BYOE)

  • Use when: You already have embeddings (e.g. from OpenAI or Cohere) and don't want to migrate them
  • Endpoints:
    • POST /v1/holographic/transform - Transform embeddings
    • POST /v1/holographic/rerank - Rerank search results
  • Two options documented:
    • Option A: Transform embeddings before storing in your vector DB
    • Option B: Rerank existing search results
# Transform
transformed = papr.holographic.transform(
    embedding=openai_embedding,
    frequency_schema_id="cosqa",
    content="..."
)

# Rerank
reranked = papr.holographic.rerank(
    query="...",
    results=your_results,
    frequency_schema_id="cosqa"
)
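Option B slots in as a thin wrapper around an existing search pipeline. A minimal sketch, where `vector_db_search` stands in for your own vector DB query and `papr_rerank` for a call to POST /v1/holographic/rerank (both are placeholders, not real client functions):

```python
from typing import Callable

def graph_aware_search(
    query: str,
    vector_db_search: Callable[[str, int], list],
    papr_rerank: Callable[[str, list], list],
    top_k: int = 10,
    overfetch: int = 3,
) -> list:
    # Over-fetch candidates from the vector DB so the reranker has
    # room to promote graph-relevant results into the final top_k.
    candidates = vector_db_search(query, top_k * overfetch)
    # Rerank with graph-aware scoring, then trim to the requested size.
    return papr_rerank(query, candidates)[:top_k]
```

The key design choice is the over-fetch: reranking can only reorder what the first-stage retrieval returns, so fetching 2-3x the final result count gives graph-aware scoring something to work with.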

Knowledge Graphs

  • Focus on fraud detection, recommendations, relationship mapping
  • Clear "When to use" and key capabilities
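The fraud-detection use case hinges on multi-hop relationships that flat vector similarity cannot express. A toy, Papr-independent sketch of the kind of link analysis a knowledge graph enables (all data and thresholds here are illustrative only):

```python
from collections import defaultdict

# Toy edge list: (account, device) pairs as they might be extracted
# into a knowledge graph. A device shared by many accounts is a
# classic fraud-ring signal.
edges = [
    ("acct_1", "dev_A"), ("acct_2", "dev_A"), ("acct_3", "dev_A"),
    ("acct_4", "dev_B"), ("acct_5", "dev_C"),
]

# Group accounts by the device they share.
accounts_by_device = defaultdict(set)
for acct, dev in edges:
    accounts_by_device[dev].add(acct)

# Flag devices linked to 3+ accounts as candidate fraud rings.
rings = {dev: accts for dev, accts in accounts_by_device.items() if len(accts) >= 3}
print(rings)
```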

Document Intelligence

  • Three integration options: standalone, + vector search, + knowledge graphs
  • Pluggable extraction providers

Chat Memory & Compression

  • Session-based storage with automatic compression
  • Works with vector search for cross-session recall
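The compression behaviour can be pictured with a toy, Papr-independent sketch. The real service handles this server-side; the turn threshold and string-join "summarisation" below are stand-ins for illustration:

```python
class SessionMemory:
    # Toy model of session-based storage with compression: once a
    # session exceeds max_turns, older turns are collapsed into a
    # single summary entry, keeping recent context verbatim.
    def __init__(self, max_turns: int = 4):
        self.max_turns = max_turns
        self.turns = []
        self.summary = None

    def add(self, turn: str) -> None:
        self.turns.append(turn)
        if len(self.turns) > self.max_turns:
            old = self.turns[: -self.max_turns]
            head = (self.summary + "; ") if self.summary else ""
            # Stand-in for real summarisation of the evicted turns.
            self.summary = head + "; ".join(old)
            self.turns = self.turns[-self.max_turns:]

    def context(self):
        # Compressed summary first, then the verbatim recent turns.
        prefix = [f"[summary] {self.summary}"] if self.summary else []
        return prefix + self.turns

mem = SessionMemory(max_turns=2)
for t in ["hi", "order #12 late", "refund issued", "thanks"]:
    mem.add(t)
print(mem.context())
```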

4. Use Case Examples Updated

Code search example now shows both approaches:

  • Approach 1: Using Papr storage with enable_holographic=true
  • Approach 2: Plugin with existing embeddings using /v1/holographic/transform

Key Messaging

What Changed

  • Before: "7 products" messaging (feature counting)
  • After: Clear focus on 4 main products with graph-aware as differentiation

Integration Clarity

  • Two clear paths: Full integration OR plugin approach
  • When to use each: Documented decision criteria
  • All endpoints documented: Both /v1/memory and /v1/holographic approaches
  • Code examples: Show both paths side-by-side

Simplification

  • Removed complex product matrices and dependency diagrams
  • Removed "How it works" from overview (lives in guides)
  • Focused on action-oriented content (what you can build)
  • Card-based layout for scannability

Validation

  • ✅ Dev server running at http://127.0.0.1:4000
  • ✅ 0 markdoc errors
  • ✅ All changes hot-reloaded and visible
  • ✅ 467 total pages
  • ⚠️ 83 pre-existing broken links (not introduced by these changes)

Next Steps (If Approved)

  1. Review in browser at http://127.0.0.1:4000
  2. Commit changes with descriptive message
  3. Consider updating graph-aware-embeddings.md guide to match this structure
  4. Update decision tree to reflect new product focus