Homepage Messaging Update: "Turn Data Into Intelligence"

Current State

Headline: "The memory layer that turns AI agents from forgetful assistants into intelligent systems."

Problem:

  • Too technical ("memory layer")
  • Focuses on agent problems (hallucination, recall)
  • Not aligned with papr.ai/landing messaging

Proposed Changes

Hero Section

New Headline:

Turn Data Into Intelligence

New Subheadline:

Seven products that connect your conversations, documents, and knowledge—
so AI delivers insights, not just answers.

Why this works:

  • "Turn Data Into Intelligence" = outcome-focused, matches landing page
  • "Seven products" = clear offering
  • "Insights, not just answers" = differentiation

Opening Section (Replace "Why Papr?")

Section Title: "From Data to Intelligence"

Body:

Your data is everywhere—conversations in Slack, documents in Drive, code in GitHub, 
tickets in Linear. AI can't connect the dots because each source is a silo.

**Papr transforms scattered data into connected intelligence** through seven products 
that work standalone or together:

- 📊 **Vector Memory** - Semantic search over any content
- 💬 **Chat Memory** - Conversation storage and compression
- 📄 **Document Intelligence** - Extract structure from PDFs and docs
- 🔗 **Knowledge Graphs** - Map relationships, not just similarity
- 🎯 **Graph-Aware Search** - Domain-tuned retrieval (code, science, etc.)
- 🤖 **AI Model Proxy** - Unified multi-model API
- 🔄 **Sync & Portability** - Local/cloud memory sync

[Explore products →](./products.md) | [Quick start →](../quickstart/index.md)

Products Section (New, After Opening)

<Section id="products">
  <SectionTitle>Seven Products, One Platform</SectionTitle>
  <SectionSubtitle>
    Use standalone or combine them. Start simple, add intelligence as you need it.
  </SectionSubtitle>
  
  <Cards columns={3}>
    <Card
      title="Vector Memory"
      badge="Core"
      description="Semantic search and RAG foundation. Store memories, retrieve with natural language."
      link="/overview/products#vector-memory"
    />
    <Card
      title="Chat Memory"
      badge="Core"
      description="Conversation storage with automatic compression. Session management built-in."
      link="/overview/products#chat-memory"
    />
    <Card
      title="Document Intelligence"
      badge="Core"
      description="Extract structured information from PDFs, contracts, research papers."
      link="/overview/products#document-intelligence"
    />
    <Card
      title="Knowledge Graphs"
      badge="Addon"
      description="Map relationships between entities. Query with GraphQL. Build domain ontologies."
      link="/overview/products#knowledge-graphs"
    />
    <Card
      title="Graph-Aware Search"
      badge="Addon"
      description="Domain-tuned embeddings. Filter by language, topic, evidence type. +36% accuracy for science."
      link="/overview/products#graph-aware-search"
    />
    <Card
      title="AI Model Proxy"
      badge="Core"
      description="Call OpenAI, Anthropic, Google through one API. Track costs across providers."
      link="/overview/products#ai-model-proxy"
    />
    <Card
      title="Sync & Portability"
      badge="Feature"
      description="Keep local and cloud memory in sync. Your data stays portable."
      link="/overview/products#sync--portability"
    />
  </Cards>
  
  <CallToAction>
    <Button href="/overview/products">Compare all products →</Button>
  </CallToAction>
</Section>

Intelligence Outcomes Section (New, Before "What You Can Build")

<Section id="outcomes" backgroundColor="light">
  <SectionTitle>From Fragments to Intelligence</SectionTitle>
  
  <Grid columns={2}>
    <Feature
      icon="🔍"
      title="Search that understands"
      description="Not just keywords—semantic search across conversations, docs, and code. 91%+ accuracy on Stanford's STaRK benchmark."
    />
    <Feature
      icon="🔗"
      title="Connected context"
      description="Code → ticket → conversation → decision. Your knowledge becomes one connected story, not isolated pieces."
    />
    <Feature
      icon="⚡"
      title="Predictive intelligence"
      description="Anticipates what users need before they ask. Pre-caches context for <150ms retrieval (when cached)."
    />
    <Feature
      icon="🎯"
      title="Domain-aware precision"
      description="Filter by programming language, scientific field, or custom dimensions. Search returns what you actually need."
    />
    <Feature
      icon="📊"
      title="Analytics and insights"
      description="Query relationships with GraphQL. Find patterns across your data. Answer 'why' not just 'what'."
    />
    <Feature
      icon="🔒"
      title="Enterprise-ready security"
      description="Built-in ACLs, namespace isolation, compliance controls. Data never leaks across users."
    />
  </Grid>
</Section>

How It Works Section (Simplified)

Title: "Data In, Intelligence Out"

Simplified messaging:

Three ways to send data → One intelligence layer → Two ways to query

**Input:**
1. Documents (PDFs, Word docs)
2. Conversations (chat messages, sessions)
3. Direct memories (explicit data)

**Intelligence Layer:**
- Vector embeddings (semantic understanding)
- Knowledge graphs (relationship mapping)
- Predictive caching (anticipate needs)
- Domain tuning (specialized search)

**Output:**
1. Natural language search (ask questions, get insights)
2. GraphQL analytics (structured queries, aggregations)

[See architecture →](../concepts/architecture.md)

Social Proof Section (New, After Products)

<Section id="proof">
  <SectionTitle>Trusted by AI Teams</SectionTitle>
  
  <Stats>
    <Stat
      number="#1"
      label="Stanford STaRK Benchmark"
      description="91%+ retrieval accuracy"
    />
    <Stat
      number="<150ms"
      label="Response time"
      description="With predictive caching"
    />
    <Stat
      number="7"
      label="Products"
      description="Standalone or combined"
    />
  </Stats>
  
  <Testimonial
    quote="Papr turned our scattered documentation into an intelligent knowledge base. 
           Our support team now surfaces answers 3x faster."
    author="Engineering Lead"
    company="Enterprise SaaS"
  />
</Section>

Updated "Start Here" Section

Replace "Evaluate Fit / Start Building" with:

## Start Here

### 1. Choose Your Path

**Building something specific?**
- [Document Q&A](../tutorials/document-qa.md) - Extract + search PDFs
- [Conversational AI](../tutorials/chat-history.md) - Chat with memory
- [Code Search](../guides/graph-aware-embeddings.md) - Find code by intent
- [Knowledge Management](../tutorials/enterprise-saas.md) - Multi-tenant intelligence

**Just exploring?**
- [Products Overview](./products.md) - See all seven products
- [Quick Start (5 min)](../quickstart/index.md) - Ship a prototype
- [Decision Tree](./decision-tree.md) - Which products do you need?

### 2. Integrate

- [TypeScript SDK](../sdks/typescript.md)
- [Python SDK](../sdks/python.md)
- [REST API Reference](../apis/index.yaml)
- [LangChain / LlamaIndex](../examples/index.md)

Complete Updated overview/index.md

# Turn Data Into Intelligence

**Seven products that connect your conversations, documents, and knowledge—so AI delivers insights, not just answers.**

> **TL;DR:** Store with `POST /v1/memory`. Search with `POST /v1/memory/search`.  
> Ranked **[#1 on Stanford's STaRK benchmark](https://huggingface.co/spaces/snap-stanford/stark-leaderboard)** 
> with **91%+ accuracy** and **<150ms retrieval** (when cached).

---

## From Data to Intelligence

Your data is everywhere—conversations in Slack, documents in Drive, code in GitHub, tickets in Linear. AI can't connect the dots because each source is a silo.

**Papr transforms scattered data into connected intelligence** through seven products that work standalone or together:

- 📊 **Vector Memory** - Semantic search over any content
- 💬 **Chat Memory** - Conversation storage and compression
- 📄 **Document Intelligence** - Extract structure from PDFs and docs
- 🔗 **Knowledge Graphs** - Map relationships, not just similarity
- 🎯 **Graph-Aware Search** - Domain-tuned retrieval (code, science, etc.)
- 🤖 **AI Model Proxy** - Unified multi-model API
- 🔄 **Sync & Portability** - Local/cloud memory sync

[Explore all products →](./products.md) | [Quick start (5 min) →](../quickstart/index.md)

---

## Intelligence Outcomes

### Search that understands
Not just keywords—semantic search across conversations, docs, and code. **91%+ accuracy** on Stanford's STaRK benchmark.

### Connected context
Code → ticket → conversation → decision. Your knowledge becomes **one connected story**, not isolated pieces.

### Predictive intelligence
Anticipates what users need before they ask. Pre-caches context for **<150ms retrieval** (when cached).

### Domain-aware precision
Filter by programming language, scientific field, or custom dimensions. Search returns **what you actually need**.

### Analytics and insights
Query relationships with GraphQL. Find patterns across your data. Answer **"why"** not just **"what"**.
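
As a sketch of what such a relationship query could look like (the type and field names below are hypothetical, not Papr's actual schema; check the GraphQL reference for the real one):

```graphql
# Hypothetical schema: find the decisions linked to a ticket,
# plus the conversations that led to them.
query DecisionsForTicket($ticketId: ID!) {
  ticket(id: $ticketId) {
    title
    decisions {
      summary
      madeAt
      sourceConversations {
        channel
        excerpt
      }
    }
  }
}
```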

### Enterprise-ready security
Built-in ACLs, namespace isolation, compliance controls. **Data never leaks** across users.

---

## Seven Products, One Platform

Use standalone or combine them. Start simple, add intelligence as you need it.

### Core Products

**[Vector Memory](./products.md#vector-memory)** - Semantic search and RAG foundation  
**[Chat Memory](./products.md#chat-memory)** - Conversation storage with compression  
**[Document Intelligence](./products.md#document-intelligence)** - Extract from PDFs and docs  
**[AI Model Proxy](./products.md#ai-model-proxy)** - Unified multi-model API

### Add Intelligence

**[Knowledge Graphs](./products.md#knowledge-graphs)** - Map entity relationships (addon)  
**[Graph-Aware Search](./products.md#graph-aware-search)** - Domain-tuned retrieval (addon)  
**[Sync & Portability](./products.md#sync--portability)** - Local/cloud sync (feature)

[Compare all products →](./products.md)

---

## Data In, Intelligence Out

**Three ways to send data:**
1. Documents (PDFs, Word docs) - `POST /v1/document`
2. Conversations (chat messages) - `POST /v1/messages`
3. Direct memories (explicit data) - `POST /v1/memory`

**Intelligence layer transforms it:**
- Vector embeddings (semantic understanding)
- Knowledge graphs (relationship mapping)
- Predictive caching (anticipate needs)
- Domain tuning (specialized search)

**Two ways to query:**
1. Natural language search - `POST /v1/memory/search`
2. GraphQL analytics - `POST /v1/graphql`

[See architecture →](../concepts/architecture.md) | [API reference →](../apis/index.yaml)
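
The store/search round trip above can be sketched as request builders. The endpoint paths come from this page; the payload field names (`content`, `query`, `metadata`), the base URL, and the Bearer auth header are assumptions to verify against the API reference:

```python
# Sketch of the store/search flow described above, composed but not sent.
# Endpoint paths are from this page; field names ("content", "query",
# "metadata"), the base URL, and the auth header are assumptions.
from typing import Any

BASE_URL = "https://api.papr.ai"  # hypothetical base URL


def build_request(method: str, path: str, api_key: str,
                  payload: dict[str, Any]) -> dict[str, Any]:
    """Compose a request description without sending it."""
    return {
        "method": method,
        "url": f"{BASE_URL}{path}",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "json": payload,
    }


# 1. Store a memory (POST /v1/memory)
store = build_request("POST", "/v1/memory", "sk-demo", {
    "content": "Design review: we chose GraphQL for analytics queries.",
    "metadata": {"source": "slack", "channel": "#eng"},
})

# 2. Search it back (POST /v1/memory/search)
search = build_request("POST", "/v1/memory/search", "sk-demo", {
    "query": "Why did we pick GraphQL?",
})
```

Any HTTP client (`requests`, `httpx`, `fetch`) can send these dicts as-is; only the payload schema needs to be swapped for the real one.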

---

## What You Can Build

[Personal AI Assistant](../tutorials/chat-history.md) - Store/retrieve conversations  
[Document Q&A](../tutorials/document-qa.md) - Intelligent document chat  
[Customer Support](../tutorials/customer-experience.md) - Answer FAQs, resolve tickets  
[Enterprise Knowledge](../tutorials/enterprise-saas.md) - Multi-tenant intelligence  
[Code Search](../guides/graph-aware-embeddings.md) - Find code by natural language  
[Domain Ontologies](../guides/custom-schemas.md) - Custom knowledge graphs  
[Graph Analytics](../guides/graphql-analysis.md) - Query insights with GraphQL

---

## Start Here

### 1. Choose Your Path

**Building something specific?**
- [Document Q&A](../tutorials/document-qa.md) - Extract + search PDFs
- [Conversational AI](../tutorials/chat-history.md) - Chat with memory
- [Code Search](../guides/graph-aware-embeddings.md) - Find code by intent
- [Knowledge Management](../tutorials/enterprise-saas.md) - Multi-tenant intelligence

**Just exploring?**
- [Products Overview](./products.md) - See all seven products
- [Quick Start (5 min)](../quickstart/index.md) - Ship a prototype
- [Decision Tree](./decision-tree.md) - Which products do you need?

### 2. Integrate

- [TypeScript SDK](../sdks/typescript.md) | [Python SDK](../sdks/python.md)
- [REST API Reference](../apis/index.yaml)
- [LangChain / LlamaIndex Examples](../examples/index.md)

---

## Deployment Options

**Papr Cloud** - Managed service, 5-minute setup  
[Get started →](../quickstart/index.md) | [Learn more →](../deployment/cloud.md)

**Hybrid Cloud** - Your infrastructure, we manage it  
[Enterprise →](../deployment/hybrid.md) | [Talk to sales →](https://calendly.com/amirkabbara/30min)

**Self-Hosted** - Open source, full control  
[Setup →](../deployment/self-hosted.md) | [GitHub →](https://github.com/Papr-ai/memory-opensource)

All deployment options use **identical APIs**. Code written for one works with all three.

[Compare deployments →](../deployment/index.md)

---

## Why Teams Choose Papr

**Instead of building it yourself:**
- Basic RAG pipelines (70-80% accuracy)
- Manual fusion of keyword + vector search
- Custom knowledge graphs from scratch
- Glue code across fragmented data sources

**You get:**
- 91%+ retrieval accuracy (#1 benchmark)
- Hybrid search built-in
- Predictive intelligence layer
- Connected context across sources
- Seven products, one API
- Open source + enterprise

[See detailed comparison →](./why-papr.md)

Implementation Priority

Phase 1: Messaging Only (15 min)

  1. Update overview/index.md with new messaging
  2. Test rendering

Phase 2: React Components (1-2 hours)

  1. Update index.page.tsx hero section
  2. Add Products section component
  3. Add Intelligence Outcomes section
  4. Update "How It Works" to be simpler

Phase 3: Polish (30 min)

  1. Add stats/social proof if available
  2. Update homepage diagram
  3. Cross-link products page

Key Messaging Shifts

| Old | New |
| --- | --- |
| "Memory layer" | "Turn data into intelligence" |
| "Forgetful assistants" | "Scattered data → connected intelligence" |
| Focus on problems (hallucination) | Focus on outcomes (insights, precision) |
| Technical (memory, RAG) | Outcome-driven (intelligence, analytics) |
| One offering (memory API) | Seven products (clear options) |
| Feature-first | Outcome-first, then products |