Batch Ingestion

Ingest large memory datasets efficiently with batch APIs and idempotent patterns.

What You Will Build

  • A batch ingestion pipeline
  • Batch-level scoping and policy defaults
  • A retry-safe write strategy

Step 1: Prepare Batch Payload

payload = {
    # Scope: every memory in the batch is written for this user,
    # organization, and namespace.
    "external_user_id": "data_pipeline_001",
    "organization_id": "org_demo",
    "namespace_id": "ns_prod",
    # How many memories are written per segment.
    "batch_size": 10,
    # Policy defaults applied to each memory in the batch.
    "memory_policy": {
        "mode": "auto",
        "consent": "terms",
        "risk": "none"
    },
    # The memories to ingest; each inherits the scope and policy above.
    "memories": [
        {"content": "Ticket 1001: SSO timeout for enterprise customer", "type": "text"},
        {"content": "Ticket 1002: Billing export mismatch in monthly report", "type": "text"}
    ]
}
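
If you plan to dedupe retried writes (see Reliability Practices below), each memory can also carry its source record ID. A minimal sketch, assuming the memory schema accepts a metadata object; the "metadata" field and "source_record_id" key are illustrative, not confirmed API:

memories_with_source_ids = [
    {
        "content": "Ticket 1001: SSO timeout for enterprise customer",
        "type": "text",
        # Hypothetical metadata shape — confirm the field name in your SDK.
        "metadata": {"source_record_id": "ticket_1001"},
    },
    {
        "content": "Ticket 1002: Billing export mismatch in monthly report",
        "type": "text",
        "metadata": {"source_record_id": "ticket_1002"},
    },
]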

Step 2: Execute Batch Write
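
The write below assumes a configured client object. A hypothetical setup, where MemoryClient and MEMORY_API_KEY are stand-ins for your SDK's actual entry point and credential:

import os

# Hypothetical import — replace with your SDK's real client class.
from memory_sdk import MemoryClient

client = MemoryClient(api_key=os.environ["MEMORY_API_KEY"])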

# Execute the batch write; scope and policy defaults come from the payload.
result = client.memory.add_batch(**payload)
print(result)
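
For datasets too large for a single request, you can split the memory list into fixed-size segments so a failure affects only one segment, not the whole dataset. A sketch; the segment size and keyword passing are illustrative:

def ingest_in_segments(client, memories, scope, segment_size=100):
    # Write `memories` in fixed-size segments so a failure is isolated
    # to a single segment rather than the whole dataset.
    results = []
    for start in range(0, len(memories), segment_size):
        segment = memories[start:start + segment_size]
        results.append(client.memory.add_batch(memories=segment, **scope))
    return results

Here, scope would carry the external_user_id, organization_id, namespace_id, and memory_policy keys from the Step 1 payload.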

Step 3: Verify Retrieval

# Search as the same scoped user to confirm the batch landed.
verification = client.memory.search(
    query="Recent enterprise support issues",
    external_user_id="data_pipeline_001",  # must match the batch scope
    enable_agentic_graph=True
)
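
The response model varies by SDK, so inspect it before relying on specific fields. A hypothetical check; the data.memories path is an assumption:

# Hypothetical response handling — confirm field names against your
# SDK's search response model.
for memory in verification.data.memories:
    print(memory.content)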

Reliability Practices

  • Keep source record IDs in metadata so retried writes can be deduplicated (see the Step 1 sketch above).
  • Retry only the failed batch segments, never the whole batch; a sketch follows this list.
  • Use webhook callbacks to confirm long-running batches when your deployment supports them.
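
A minimal retry loop combining the first two practices, assuming memories carry source record IDs as in Step 1; the backoff policy and broad exception handling are illustrative:

import time

def add_batch_with_retry(client, segments, scope, attempts=3):
    # Retry only the segments that failed; memories carrying
    # source_record_id metadata let downstream consumers dedupe
    # any segment that was written twice.
    pending = list(segments)
    for attempt in range(attempts):
        failed = []
        for segment in pending:
            try:
                client.memory.add_batch(memories=segment, **scope)
            except Exception:
                failed.append(segment)  # re-queue only this segment
        if not failed:
            return
        pending = failed
        time.sleep(2 ** attempt)  # back off before the next pass
    raise RuntimeError(f"{len(pending)} segment(s) failed after {attempts} attempts")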

Next Steps