# Capability Matrix
This page maps what you are trying to build to the exact Papr API capabilities you should use.
## How to Use This Matrix
- Start from your job-to-be-done.
- Implement the required endpoints first.
- Add optional capabilities for quality, governance, and scale.
## Core Capability Mapping
| Job to Be Done | Required Endpoints | Recommended Options | Key Fields to Set |
|---|---|---|---|
| Store and retrieve long-term memory | POST /v1/memory, POST /v1/memory/search | response_format=toon, enable_agentic_graph=true | external_user_id, memory_policy.mode |
| Build chat memory across sessions | POST /v1/messages, GET /v1/messages/sessions/{session_id} | GET /compress, POST /process | sessionId, process_messages, external_user_id |
| Build document memory and retrieval | POST /v1/document, GET /v1/document/status/{upload_id} | DELETE /v1/document/{upload_id} for cancel | organization_id, namespace_id, memory_policy |
| Import structured data into graph | POST /v1/memory or /v1/memory/batch | Custom schema + GraphQL | memory_policy.mode=manual, nodes, relationships |
| Define domain ontology | POST /v1/schemas, GET /v1/schemas/{schema_id} | Apply schema to add/search requests | schema_id, schema memory_policy defaults |
| Run graph analytics | POST /v1/graphql | Combine with schema-scoped retrieval | GraphQL query + variables |
| Enforce tenant and access boundaries | Any write + search endpoint | OMO import/export for portability | organization_id, namespace_id, memory_policy.acl |
| Export/import portable memory | POST /v1/omo/export, POST /v1/omo/import | GET /v1/omo/export.json for file export | Memory IDs, target scope |
| Keep local/edge memory in sync | POST /v1/sync/tiers, GET /v1/sync/delta | include_embeddings=true only when needed | cursor, workspace_id, limit |
## Capability Details
### 1) Memory Lifecycle
- Write: `POST /v1/memory`, `POST /v1/memory/batch`
- Read/search: `POST /v1/memory/search`, `GET /v1/memory/{memory_id}`
- Update/delete: `PUT /v1/memory/{memory_id}`, `DELETE /v1/memory/{memory_id}`
Use this when you want explicit control over what enters memory.
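As a sketch of the lifecycle above, a minimal write body might look like the following. Only the endpoint names and the fields listed on this page come from the matrix; the `content` field and the example values are assumptions for illustration.

```python
import json

# Hypothetical POST /v1/memory body. "content" is an assumed field name;
# external_user_id and memory_policy.mode are the fields this page names.
add_memory_body = {
    "content": "User prefers dark mode and concise answers.",
    "external_user_id": "user-123",     # stable ID from your own system
    "memory_policy": {"mode": "auto"},  # let the graph be auto-extracted
}

# Send this as JSON, then keep the returned memory_id for the
# GET / PUT / DELETE lifecycle calls listed above.
print(json.dumps(add_memory_body, indent=2))
```

Keeping the returned `memory_id` is what makes the update/delete half of the lifecycle usable later.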
### 2) Graph Control with `memory_policy`
Use `memory_policy` as the single control surface for:
- graph generation mode (`auto`, `manual`)
- extraction constraints (`node_constraints`, `edge_constraints`, `link_to`)
- governance (`consent`, `risk`, `acl`)
For structured ingest from Postgres or business systems, start with `mode: "manual"`.
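A manual-mode policy for structured ingest could be sketched like this. The field names (`mode`, `node_constraints`, `edge_constraints`, `link_to`, `consent`, `risk`, `acl`) come from this page; the value shapes and the example node/edge labels are assumptions.

```python
# Illustrative memory_policy for structured import. "Customer", "PLACED",
# and the shapes of the governance values are placeholders, not the
# documented schema.
memory_policy = {
    "mode": "manual",                   # you supply the graph explicitly
    "node_constraints": ["Customer", "Order"],
    "edge_constraints": ["PLACED"],
    "link_to": [],                      # existing node IDs to attach to
    "consent": {"granted": True},       # governance placeholders
    "risk": "low",
    "acl": {"read": ["team:support"]},
}
```

With `mode: "manual"`, extraction does not guess at structure: rows from Postgres map one-to-one onto the nodes and relationships you declare.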
### 3) Messages Memory
The Messages API gives you:
- sessioned chat storage,
- optional memory extraction from chat (`process_messages`),
- compression for LLM context (`context_for_llm`).
This is the fastest path for chat agents that need continuity.
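A minimal `POST /v1/messages` body following the points above might look like this. The shape of the `messages` array is an assumption; `sessionId`, `process_messages`, and `external_user_id` are the fields the matrix names.

```python
# Hypothetical sessioned-chat write. Set process_messages to False for
# storage-only mode (no memory extraction).
messages_body = {
    "sessionId": "sess-42",
    "external_user_id": "user-123",
    "process_messages": True,  # extract memories from this chat
    "messages": [
        {"role": "user", "content": "My flight is on Friday."},
        {"role": "assistant", "content": "Noted, Friday it is."},
    ],
}
```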
### 4) Agentic Retrieval
For high-quality memory retrieval:
- set `enable_agentic_graph: true`,
- set `max_memories` and `max_nodes` high enough for coverage,
- use `response_format=toon` to reduce LLM token cost.
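The settings above can be collected into one search body sketch. The `query` field name is an assumption; the remaining fields are the ones this page recommends.

```python
# Illustrative POST /v1/memory/search body with agentic retrieval on.
search_body = {
    "query": "What are the user's travel plans?",
    "external_user_id": "user-123",
    "enable_agentic_graph": True,  # graph-aware, not vector-only search
    "max_memories": 20,            # high enough for coverage
    "max_nodes": 15,
    "response_format": "toon",     # compact format to cut LLM token cost
}
```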
### 5) Governance and Multi-Tenant Controls
Scope at the request level:
- `organization_id`
- `namespace_id`

Control access at the policy level:
- `memory_policy.acl`
- `memory_policy.consent`
- `memory_policy.risk`
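One way to keep request-level scope consistent is a small helper that stamps the tenant fields onto every body before it is sent; the helper itself is hypothetical, but `organization_id` and `namespace_id` are the scope fields listed above.

```python
# Hypothetical helper: attach tenant scope to any request body so
# multi-tenant writes and searches never omit the scope fields.
def scope_request(body: dict, organization_id: str, namespace_id: str) -> dict:
    scoped = dict(body)  # copy; leave the caller's dict untouched
    scoped["organization_id"] = organization_id
    scoped["namespace_id"] = namespace_id
    return scoped

scoped = scope_request({"query": "invoices"}, "org-1", "ns-analytics")
```

Routing every outbound body through one such function makes missing-scope bugs a single-point fix rather than a per-call audit.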
## Defaults and Pitfalls
### Recommended Defaults
- Search: `enable_agentic_graph: true`, `max_memories: 15-20`, `max_nodes: 10-15`
- Messages: `process_messages: true`, unless you intentionally want storage-only mode
- Memory policy: `mode: auto` for unstructured content; `mode: manual` for structured data import
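The defaults above can be captured as one settings sketch; where exactly these fields sit in real request bodies is an assumption.

```python
# Recommended starting values from this page, as reusable constants.
SEARCH_DEFAULTS = {
    "enable_agentic_graph": True,
    "max_memories": 15,  # 15-20 recommended
    "max_nodes": 10,     # 10-15 recommended
}
MESSAGE_DEFAULTS = {"process_messages": True}

def policy_mode(structured: bool) -> str:
    """Pick the memory_policy mode: manual for structured imports,
    auto for unstructured content."""
    return "manual" if structured else "auto"
```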
### Common Pitfalls
- Using deprecated fields (`graph_generation`, `user_id`, `rank_results`) in new integrations.
- Treating search as vector-only and leaving `enable_agentic_graph` disabled.
- Missing tenant scope fields in multi-tenant apps.
- Not defining ACL in sensitive deployments.
## Next Steps
- Choose your implementation path in Golden Paths.
- If you are integrating with an AI coding agent, start with Agent Cookbook.