
Capability Matrix

This page maps what you are trying to build to the exact Papr API capabilities you should use.

How to Use This Matrix

  1. Start from your job-to-be-done.
  2. Implement the required endpoints first.
  3. Add optional capabilities for quality, governance, and scale.

Core Capability Mapping

| Job to Be Done | Required Endpoints | Recommended Options | Key Fields to Set |
| --- | --- | --- | --- |
| Store and retrieve long-term memory | POST /v1/memory, POST /v1/memory/search | response_format=toon, enable_agentic_graph=true | external_user_id, memory_policy.mode |
| Build chat memory across sessions | POST /v1/messages, GET /v1/messages/sessions/{session_id} | GET /compress, POST /process | sessionId, process_messages, external_user_id |
| Build document memory and retrieval | POST /v1/document, GET /v1/document/status/{upload_id} | DELETE /v1/document/{upload_id} for cancel | organization_id, namespace_id, memory_policy |
| Import structured data into graph | POST /v1/memory or /v1/memory/batch | Custom schema + GraphQL | memory_policy.mode=manual, nodes, relationships |
| Define domain ontology | POST /v1/schemas, GET /v1/schemas/{schema_id} | Apply schema to add/search requests | schema_id, schema memory_policy defaults |
| Run graph analytics | POST /v1/graphql | Combine with schema-scoped retrieval | GraphQL query + variables |
| Enforce tenant and access boundaries | Any write + search endpoint | OMO import/export for portability | organization_id, namespace_id, memory_policy.acl |
| Export/import portable memory | POST /v1/omo/export, POST /v1/omo/import | GET /v1/omo/export.json for file export | Memory IDs, target scope |
| Keep local/edge memory in sync | POST /v1/sync/tiers, GET /v1/sync/delta | include_embeddings=true only when needed | cursor, workspace_id, limit |
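
The sync row is the only capability on this page that follows a cursor-based polling pattern, so a minimal sketch is shown below. The cursor, workspace_id, limit, and include_embeddings parameters come from the matrix; the base URL, auth header, and response shape are assumptions for illustration.

```python
import os
import requests

BASE_URL = os.environ.get("PAPR_BASE_URL", "https://api.example.com")  # assumed host
HEADERS = {"Authorization": f"Bearer {os.environ['PAPR_API_KEY']}"}    # assumed auth scheme

def pull_sync_delta(workspace_id: str, cursor: str | None = None, limit: int = 100) -> dict:
    """Fetch one page of memory changes since the given cursor (response shape assumed)."""
    params = {
        "workspace_id": workspace_id,
        "limit": limit,
        "include_embeddings": False,  # only enable when the local store needs vectors
    }
    if cursor:
        params["cursor"] = cursor
    resp = requests.get(f"{BASE_URL}/v1/sync/delta", headers=HEADERS, params=params)
    resp.raise_for_status()
    return resp.json()
```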

Capability Details

1) Memory Lifecycle

  • Write: POST /v1/memory, POST /v1/memory/batch
  • Read/search: POST /v1/memory/search, GET /v1/memory/{memory_id}
  • Update/delete: PUT /v1/memory/{memory_id}, DELETE /v1/memory/{memory_id}

Use this when you want explicit control over what enters memory.
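
A minimal write-then-search sketch against these endpoints, assuming a bearer-token auth header and a placeholder base URL; external_user_id and memory_policy come from this guide, while the content and query field names and the response shape are assumptions.

```python
import os
import requests

BASE_URL = os.environ.get("PAPR_BASE_URL", "https://api.example.com")  # assumed host
HEADERS = {"Authorization": f"Bearer {os.environ['PAPR_API_KEY']}"}    # assumed auth scheme

# Write a single memory item (field names other than external_user_id/memory_policy are assumed).
add_resp = requests.post(
    f"{BASE_URL}/v1/memory",
    headers=HEADERS,
    json={
        "content": "Customer prefers email follow-ups over phone calls.",
        "external_user_id": "user-123",
        "memory_policy": {"mode": "auto"},
    },
)
add_resp.raise_for_status()

# Search the same user's memory.
search_resp = requests.post(
    f"{BASE_URL}/v1/memory/search",
    headers=HEADERS,
    json={
        "query": "How does this customer like to be contacted?",
        "external_user_id": "user-123",
    },
)
search_resp.raise_for_status()
print(search_resp.json())
```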

2) Graph Control with memory_policy

Use memory_policy as the single control surface for:

  • graph generation mode (auto, manual)
  • extraction constraints (node_constraints, edge_constraints, link_to)
  • governance (consent, risk, acl)

For structured ingest from Postgres or business systems, start with mode: "manual".
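
As an illustration, a manual-mode import of one business record might look like the sketch below. The memory_policy keys and the nodes/relationships fields are the ones named on this page; their value shapes, and every other field name, are assumptions.

```python
# Hypothetical payload importing one Postgres row with an explicit graph (shapes assumed).
payload = {
    "external_user_id": "user-123",
    "memory_policy": {
        "mode": "manual",  # skip automatic extraction; use the nodes/relationships given below
        "acl": {"read": ["org:acme"]},  # governance keys (acl/consent/risk); value shape assumed
    },
    "nodes": [
        {"id": "order-8841", "type": "Order", "properties": {"total": 129.0}},
        {"id": "company-acme", "type": "Company", "properties": {"name": "Acme Corp"}},
    ],
    "relationships": [
        {"source": "order-8841", "target": "company-acme", "type": "PLACED_BY"},
    ],
}
# POST this to /v1/memory, or to /v1/memory/batch for bulk import.
```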

3) Messages Memory

The Messages API gives you:

  • sessioned chat storage,
  • optional memory extraction from chat (process_messages),
  • compression for LLM context (context_for_llm).

This is the fastest path for chat agents that need continuity.
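
A sketch of storing one chat turn with extraction enabled is shown below. sessionId, process_messages, and external_user_id come from this page; the message array shape, base URL, and auth header are assumptions.

```python
import os
import requests

BASE_URL = os.environ.get("PAPR_BASE_URL", "https://api.example.com")  # assumed host
HEADERS = {"Authorization": f"Bearer {os.environ['PAPR_API_KEY']}"}    # assumed auth scheme

resp = requests.post(
    f"{BASE_URL}/v1/messages",
    headers=HEADERS,
    json={
        "sessionId": "session-42",
        "external_user_id": "user-123",
        "process_messages": True,  # also extract long-term memory from this exchange
        "messages": [  # message shape assumed
            {"role": "user", "content": "I moved to Berlin last month."},
            {"role": "assistant", "content": "Noted. I'll remember you're based in Berlin."},
        ],
    },
)
resp.raise_for_status()

# Later, fetch the session history to rebuild context for the next turn.
history = requests.get(f"{BASE_URL}/v1/messages/sessions/session-42", headers=HEADERS)
history.raise_for_status()
```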

4) Agentic Retrieval

For high-quality memory retrieval:

  • set enable_agentic_graph: true,
  • set max_memories and max_nodes high enough for coverage,
  • use response_format=toon to reduce LLM token cost.
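
A retrieval request following these settings might look like the sketch below. enable_agentic_graph, max_memories, max_nodes, and response_format are the parameters named above; the query field name, base URL, auth header, and whether response_format travels as a query parameter or a body field are assumptions.

```python
import os
import requests

BASE_URL = os.environ.get("PAPR_BASE_URL", "https://api.example.com")  # assumed host
HEADERS = {"Authorization": f"Bearer {os.environ['PAPR_API_KEY']}"}    # assumed auth scheme

resp = requests.post(
    f"{BASE_URL}/v1/memory/search",
    headers=HEADERS,
    params={"response_format": "toon"},  # compact format to cut LLM token cost
    json={
        "query": "What does this user prefer for meeting times?",  # field name assumed
        "external_user_id": "user-123",
        "enable_agentic_graph": True,  # graph-aware retrieval rather than vector-only
        "max_memories": 20,
        "max_nodes": 15,
    },
)
resp.raise_for_status()
print(resp.text)
```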

5) Governance and Multi-Tenant Controls

Scope at request level:

  • organization_id
  • namespace_id

Control access at policy level:

  • memory_policy.acl
  • memory_policy.consent
  • memory_policy.risk
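
A minimal sketch combining request-level scope with policy-level controls; organization_id, namespace_id, and the memory_policy governance keys come from this page, while the ACL/consent/risk value shapes and every other field name are assumptions.

```python
# Hypothetical tenant-scoped write (field names beyond those listed above are assumed).
payload = {
    "content": "Contract renewal discussed with the customer.",  # assumed field name
    "organization_id": "org_acme",   # tenant scope at the request level
    "namespace_id": "ns_support",    # namespace scope at the request level
    "external_user_id": "user-123",
    "memory_policy": {
        "acl": {"read": ["team:support"], "write": ["role:admin"]},  # shape assumed
        "consent": {"source": "user_opt_in"},                        # shape assumed
        "risk": {"level": "low"},                                    # shape assumed
    },
}
# POST this to /v1/memory; send the same organization_id/namespace_id on search requests.
```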

Defaults and Pitfalls

  • Search:
    • enable_agentic_graph: true
    • max_memories: 15-20
    • max_nodes: 10-15
  • Messages:
    • process_messages: true unless you intentionally want storage-only mode
  • Memory policy:
    • mode: auto for unstructured content
    • mode: manual for structured data import
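
These defaults can be captured once and reused across requests. The parameter names below come from this page; packaging them this way is simply a suggestion.

```python
# Suggested starting defaults from this guide, packaged for reuse across requests.
SEARCH_DEFAULTS = {"enable_agentic_graph": True, "max_memories": 20, "max_nodes": 15}
MESSAGE_DEFAULTS = {"process_messages": True}

def memory_policy_for(structured: bool) -> dict:
    """Pick the graph-generation mode: manual for structured imports, auto otherwise."""
    return {"mode": "manual" if structured else "auto"}
```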

Common Pitfalls

  • Using deprecated fields (graph_generation, user_id, rank_results) in new integrations.
  • Treating search as vector-only and leaving enable_agentic_graph disabled.
  • Missing tenant scope fields in multi-tenant apps.
  • Not defining ACL in sensitive deployments.

Next Steps