OpenClaw Integration

Use Papr as the memory layer for OpenClaw agents so chat, documents, and structured events remain connected across runs.

What You Build

  • A Papr-backed memory adapter for OpenClaw workflows
  • User-scoped memory retrieval before each task
  • Post-task writeback for new facts and agent learnings

Prerequisites

  • Papr API key in PAPR_MEMORY_API_KEY
  • OpenClaw runtime with configurable memory adapter
  • Stable external_user_id from your application identity

Integration Pattern

  1. Pre-task retrieval: query Papr with the current goal.
  2. Execution: run OpenClaw task using retrieved context.
  3. Post-task writeback: store outcomes, decisions, and extracted entities (steps 1-3 are sketched after this list).
  4. Optional analytics: run GraphQL queries for relationship-level insights.
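
The loop below is a minimal sketch of steps 1-3. It assumes the PaprOpenClawMemoryAdapter and OpenClawContext defined in the Python Adapter Skeleton section, and it takes the task runner as a callable because the exact OpenClaw entry point depends on your runtime; run_openclaw_task and its keyword arguments are stand-ins, not a documented OpenClaw API.

def run_task_with_memory(
    goal: str,
    ctx: OpenClawContext,
    adapter: PaprOpenClawMemoryAdapter,
    run_openclaw_task,  # hypothetical stand-in for your runtime's task entry point
):
    # 1. Pre-task retrieval: pull user-scoped memories related to the goal.
    retrieved = adapter.pre_task_context(goal, ctx)

    # 2. Execution: hand the retrieved context to the OpenClaw task.
    result = run_openclaw_task(goal=goal, memory_context=retrieved)

    # 3. Post-task writeback: persist the outcome so future runs can reuse it.
    adapter.post_task_learning(summary=str(result), ctx=ctx)
    return result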

Minimal Setup

  1. Configure a Papr adapter in your OpenClaw runtime.
  2. Call search before each task to preload memory context.
  3. Call add after each task to persist outcomes.
  4. Validate retrieval quality with a repeated test scenario.

Python Adapter Skeleton

import os
from dataclasses import dataclass
from papr_memory import Papr


@dataclass
class OpenClawContext:
    external_user_id: str
    organization_id: str
    namespace_id: str


class PaprOpenClawMemoryAdapter:
    def __init__(self) -> None:
        # Reads the API key from PAPR_MEMORY_API_KEY (see Prerequisites).
        self.client = Papr(x_api_key=os.environ.get("PAPR_MEMORY_API_KEY"))

    def pre_task_context(self, goal: str, ctx: OpenClawContext):
        # Step 1: retrieve user-scoped memories related to the current goal.
        return self.client.memory.search(
            query=goal,
            external_user_id=ctx.external_user_id,
            organization_id=ctx.organization_id,
            namespace_id=ctx.namespace_id,
            enable_agentic_graph=True,
            max_memories=20,
            max_nodes=15,
        )

    def post_task_learning(self, summary: str, ctx: OpenClawContext):
        # Step 3: persist outcomes and learnings under the same scope.
        return self.client.memory.add(
            content=summary,
            external_user_id=ctx.external_user_id,
            organization_id=ctx.organization_id,
            namespace_id=ctx.namespace_id,
            metadata={"role": "assistant", "category": "learning"},
            memory_policy={"mode": "auto"},
        )
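
Search responses usually need to be flattened into prompt text before they reach the task. The helper below is a sketch that assumes the response exposes a list of memory items with a content field (for example response.data.memories); check the papr_memory response model and adjust the attribute names to match it.

def to_prompt_context(search_response, limit: int = 10) -> str:
    # Assumption: the search response carries a `data.memories` list whose
    # items expose `content`; adjust to the actual papr_memory response model.
    data = getattr(search_response, "data", None)
    memories = list(getattr(data, "memories", None) or [])
    snippets = [getattr(m, "content", "") for m in memories[:limit]]
    return "\n".join(s for s in snippets if s)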

Production Settings

  • Use one namespace per environment (dev, staging, prod)
  • Keep write payloads short and factual to improve retrieval precision
  • Store task IDs in metadata for reproducible debugging (see the sketch after this list)
  • Keep enable_agentic_graph=True for multi-hop retrieval paths
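
As a sketch of the first and third points, the helpers below derive one namespace per environment and attach a task ID to every write. The APP_ENV variable, the namespace naming scheme, and the task_id metadata key are conventions assumed for this guide, not values required by Papr.

ENV = os.environ.get("APP_ENV", "dev")  # assumed convention: dev | staging | prod

def scoped_context(external_user_id: str, organization_id: str) -> OpenClawContext:
    # One namespace per environment keeps dev, staging, and prod memories separate.
    return OpenClawContext(
        external_user_id=external_user_id,
        organization_id=organization_id,
        namespace_id=f"openclaw-{ENV}",
    )

def learning_metadata(task_id: str) -> dict:
    # A task ID in metadata makes a specific write easy to find when debugging.
    return {"role": "assistant", "category": "learning", "task_id": task_id}

If you adopt the metadata helper, extend post_task_learning to accept a metadata argument instead of the hard-coded dictionary in the skeleton above.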

Validation Checklist

  • Retrieval returns relevant context before task execution
  • Post-task writes produce new searchable memory within expected latency (round-trip check sketched after this list)
  • Cross-run sessions preserve agent learning for the same user
  • Tenant boundaries are respected across namespaces
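
A round-trip check is the simplest way to exercise the second and third items. The sketch below assumes writes become searchable after a short indexing delay; the 10-second wait is an assumption to tune, not a documented guarantee.

import time
import uuid

def check_round_trip(adapter: PaprOpenClawMemoryAdapter, ctx: OpenClawContext) -> bool:
    # Write a uniquely tagged fact, then confirm search returns it for the same scope.
    marker = f"validation-{uuid.uuid4().hex[:8]}"
    adapter.post_task_learning(f"Round-trip check marker {marker}", ctx)

    time.sleep(10)  # assumed indexing delay; tune to the latency you observe
    results = adapter.pre_task_context(marker, ctx)
    return marker in str(results)  # crude containment check, independent of response shape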

Troubleshooting

If retrieval is weak or writes do not appear, confirm the scope fields on each payload (external_user_id, organization_id, namespace_id) and consult the Error Playbook.
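
Before reaching for the Error Playbook, a quick guard like the one below (a sketch, not part of the SDK) confirms that every scope field on the context is populated, since empty scope fields are a common cause of weak retrieval and missing writes.

def assert_scope(ctx: OpenClawContext) -> None:
    # Raise early if any scope field that Papr calls are keyed on is empty.
    missing = [
        name
        for name, value in (
            ("external_user_id", ctx.external_user_id),
            ("organization_id", ctx.organization_id),
            ("namespace_id", ctx.namespace_id),
        )
        if not value
    ]
    if missing:
        raise ValueError(f"OpenClawContext is missing scope fields: {missing}")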