Papr Memory API

The universal memory layer that lets AI apps remember, reason, and grow overnight.

TL;DR: Store with POST /v1/memory. Retrieve with POST /v1/memory/search.
Our purpose-built vector+graph architecture gives you multi-hop reasoning that delivers 46% higher accuracy than OpenAI.

Key Features

  • Real-time content ingestion – text, chat, PDFs via API, connectors, or UI.
  • Multi-hop reasoning – not just vector and graph bolted together; the system is designed from the ground up for multi-hop reasoning with scalable graph traversal.
  • Dynamic indexing and re-ranking – memories are dynamically indexed over time and re-ranked for the best results.
  • Granular ACL & sharing – user-level filters executed inside the engine, with the ability to set permissions and share memories between users and agents.
  • Unified API – two simple endpoints handle all your memory needs.

How It Works

Two main operations: POST /v1/memory adds memories; POST /v1/memory/search retrieves them with semantic and relational recall plus re-ranking.

Under the hood, Papr optimally chunks, stores, and connects every memory dynamically. During retrieval, Papr searches the vector store for fast semantic similarity and the graph DB for relationship-based queries, then re-ranks the combined results.
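The retrieve-then-re-rank step can be sketched as a simple score fusion: blend semantic similarity from the vector store with relationship relevance from the graph DB, then sort. This is a toy illustration under assumed inputs — Papr's actual re-ranker is more sophisticated.

```python
def fuse_and_rerank(semantic_hits, graph_hits, alpha=0.7):
    """Toy score fusion: semantic similarity weighted by alpha,
    graph relevance by (1 - alpha). Returns memory IDs, best first."""
    scores = {}
    for mem_id, sim in semantic_hits:
        scores[mem_id] = alpha * sim
    for mem_id, rel in graph_hits:
        scores[mem_id] = scores.get(mem_id, 0.0) + (1 - alpha) * rel
    return sorted(scores, key=scores.get, reverse=True)

semantic = [("m1", 0.92), ("m2", 0.55)]  # (memory_id, cosine similarity)
graph    = [("m2", 1.0), ("m3", 0.8)]    # (memory_id, relationship relevance)
print(fuse_and_rerank(semantic, graph))  # → ['m2', 'm1', 'm3']
```

Note how `m2` wins despite a middling semantic score: the graph signal surfaces relationally relevant memories that pure vector similarity would rank lower.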


Diagram: the Application calls the Memory API to store, retrieve, or search. The Memory API fans out to the Vector Store (semantic search), the Graph DB (relationship queries), and the Doc Processor (document processing), then returns rich context and relationships to the Application.

What You Can Build

Next Steps