Portability and Sync
Papr supports memory portability through OMO (Open Memory Object) and intelligent synchronization through predictive Sync APIs.
Use Cases
- Edge/Local AI: Cache predictive context locally for <150ms retrieval without network calls
- Offline-First Apps: Sync hot memories to device for offline operation
- Migration: Export entire memory graph for portability
- Multi-Environment: Keep edge, cloud, and local caches in sync
OMO Portability
Use the OMO (Open Memory Object) standard for migration and interoperability:
```bash
# Export entire memory graph
curl -X POST "https://memory.papr.ai/v1/omo/export" \
  -H "X-API-Key: $PAPR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "memory_ids": [],
    "workspace_id": "ws_001",
    "format": "json",
    "include_graph": true
  }'
```
```bash
# Import OMO data
curl -X POST "https://memory.papr.ai/v1/omo/import" \
  -H "X-API-Key: $PAPR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "omo_data": { ... },
    "merge_strategy": "upsert"
  }'
```
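For scripted migrations, the same two endpoints can be driven from Python. A minimal sketch using the requests library; the shape of the export response body is an assumption here, so adapt the omo_data extraction to whatever your export actually returns:

```python
import os

import requests

BASE_URL = "https://memory.papr.ai/v1"
HEADERS = {
    "X-API-Key": os.environ["PAPR_API_KEY"],
    "Content-Type": "application/json",
}

# Export the full memory graph from the source workspace
export_resp = requests.post(
    f"{BASE_URL}/omo/export",
    headers=HEADERS,
    json={
        "memory_ids": [],
        "workspace_id": "ws_001",
        "format": "json",
        "include_graph": True,
    },
)
export_resp.raise_for_status()
omo_data = export_resp.json()  # assumption: the body is the OMO payload itself

# Import into the destination, upserting on ID collisions
import_resp = requests.post(
    f"{BASE_URL}/omo/import",
    headers=HEADERS,
    json={"omo_data": omo_data, "merge_strategy": "upsert"},
)
import_resp.raise_for_status()
```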
Sync APIs

Tiered Sync: Predictive + Hot Memories
The /v1/sync/tiers endpoint returns two types of context:
Tier 0 (Predictive): Context you'll likely need next, ranked by:
- 60% semantic similarity to goals/OKRs
- 30% transition probability (multi-step predictions)
- 20% access frequency (log-normalized)
Tier 1 (Hot): Recently accessed memories, citation-ranked by retrieval logs
curl -X POST "https://memory.papr.ai/v1/sync/tiers" \
-H "X-API-Key: $PAPR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"max_tier0": 300,
"max_tier1": 1000,
"include_embeddings": true,
"embedding_format": "int8",
"workspace_id": "ws_001"
}'Response:
{
"code": 200,
"status": "success",
"tier0": [
{
"id": "mem_abc123",
"content": "Customer onboarding flow requires...",
"type": "text",
"predicted_importance": 0.87,
"behavioral_score": 0.42,
"topics": ["onboarding", "customer-experience"],
"embedding": [...]
}
],
"tier1": [
{
"id": "mem_def456",
"content": "Bug fix for login timeout issue",
"type": "text",
"relevance_score": 0.92,
"citation_count": 15,
"last_retrieved": "2024-02-10T15:30:00Z"
}
],
"transitions": [
{
"from": "mem_abc123",
"to": "mem_xyz789",
"probability": 0.73,
"reason": "frequent_co_retrieval"
}
],
"next_cursor": null,
"has_more": false
}
```

Key Fields:
- predicted_importance: Combined score (vector + transition + hotness)
- behavioral_score: Just the behavioral components (transition + hotness). Use this to add fresh query similarity: 0.6 * current_similarity + behavioral_score
- embedding: Optional vector (int8 or float32) for local similarity search
- transitions: Predicted context relationships for prefetching
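These fields support local re-ranking at query time. A minimal sketch of the blend above, assuming the int8 embeddings dequantize with a symmetric 1/127 scale (the actual quantization scheme isn't specified here) and that query_vec comes from your own embedding model:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity with a small epsilon to avoid division by zero."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def rerank(query_vec: np.ndarray, cached: list[dict]) -> list[dict]:
    """Order cached Tier 0 items by 0.6 * fresh query similarity + behavioral_score."""
    def score(mem: dict) -> float:
        emb = np.asarray(mem["embedding"], dtype=np.float32) / 127.0  # dequantize int8 (assumed scale)
        return 0.6 * cosine(query_vec, emb) + mem["behavioral_score"]
    return sorted(cached, key=score, reverse=True)
```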
Delta Sync: Incremental Updates
Poll for changes since last sync:
curl -X GET "https://memory.papr.ai/v1/sync/delta?cursor=$CURSOR&limit=200&workspace_id=ws_001" \
-H "X-API-Key: $PAPR_API_KEY"Response:
{
"code": 200,
"status": "success",
"upserts": [
{
"id": "mem_new123",
"content": "...",
"updated_at": "2024-02-11T10:00:00Z"
}
],
"deletes": ["mem_old456"],
"next_cursor": "cursor_xyz",
"has_more": true
}
```

Sync Workflow
1. Initial Warm Start
```bash
# Fetch predictive tiers
response=$(curl -s -X POST "https://memory.papr.ai/v1/sync/tiers" \
  -H "X-API-Key: $PAPR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "max_tier0": 300,
    "max_tier1": 1000,
    "include_embeddings": true
  }')

# Store cursor for delta sync
cursor=$(echo "$response" | jq -r '.next_cursor')

# Cache locally (-c emits each memory as a single compact JSON line)
echo "$response" | jq -c '.tier0[], .tier1[]' | while read -r memory; do
  # Store in local cache (Redis, SQLite, etc.)
  cache_store "$memory"
done
```

2. Periodic Delta Poll
```bash
# Poll every 5 minutes
while true; do
  delta=$(curl -s -X GET "https://memory.papr.ai/v1/sync/delta?cursor=$cursor&limit=200" \
    -H "X-API-Key: $PAPR_API_KEY")

  # Apply upserts (-c keeps each object on one line)
  echo "$delta" | jq -c '.upserts[]' | while read -r memory; do
    cache_upsert "$memory"
  done

  # Apply deletes
  echo "$delta" | jq -r '.deletes[]' | while read -r id; do
    cache_delete "$id"
  done

  # Update cursor; if has_more is true, a production loop could poll again immediately
  cursor=$(echo "$delta" | jq -r '.next_cursor')
  sleep 300
done
```
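The cursor is the only state the delta loop needs, so persist it if the process can restart; otherwise you fall back to a full warm start. A minimal Python sketch with a hypothetical file name:

```python
import json
import os

CURSOR_PATH = "papr_sync_cursor.json"  # hypothetical location

def load_cursor() -> str | None:
    """Return the last persisted cursor, or None to trigger a fresh warm start."""
    if os.path.exists(CURSOR_PATH):
        with open(CURSOR_PATH) as f:
            return json.load(f).get("cursor")
    return None

def save_cursor(cursor: str) -> None:
    """Write the cursor atomically so a crash mid-write cannot corrupt it."""
    tmp_path = CURSOR_PATH + ".tmp"
    with open(tmp_path, "w") as f:
        json.dump({"cursor": cursor}, f)
    os.replace(tmp_path, CURSOR_PATH)
```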
Python SDK

The Python SDK simplifies sync operations:
```python
import time

from papr_memory import Papr

client = Papr(x_api_key="your-api-key")

# Initial sync
sync_response = client.sync.tiers(
    max_tier0=300,
    max_tier1=1000,
    include_embeddings=True,
    embedding_format="int8",
)

# Cache tier0 (predictive) and tier1 (hot)
for memory in sync_response.tier0 + sync_response.tier1:
    local_cache.store(memory.id, memory)

# Store cursor for delta sync
cursor = sync_response.next_cursor

# Delta sync loop
while True:
    delta_response = client.sync.delta(cursor=cursor, limit=200)

    # Apply changes
    for memory in delta_response.upserts:
        local_cache.upsert(memory.id, memory)
    for memory_id in delta_response.deletes:
        local_cache.delete(memory_id)

    cursor = delta_response.next_cursor
    time.sleep(300)  # Poll every 5 minutes
```
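The local_cache object above is a placeholder, not part of the SDK; any key-value store works. A minimal SQLite-backed sketch with the interface the examples assume (store, upsert, delete, batch_upsert) — define it before running the loop above:

```python
import json
import sqlite3

class LocalCache:
    """Tiny durable cache exposing the interface the sync examples assume."""

    def __init__(self, path: str = "papr_cache.db"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memories (id TEXT PRIMARY KEY, body TEXT)"
        )

    def _to_json(self, memory) -> str:
        # Assumes SDK models are pydantic-style with model_dump(); plain dicts pass through
        return json.dumps(
            memory.model_dump() if hasattr(memory, "model_dump") else memory
        )

    def upsert(self, memory_id: str, memory) -> None:
        self.db.execute(
            "INSERT INTO memories (id, body) VALUES (?, ?) "
            "ON CONFLICT(id) DO UPDATE SET body = excluded.body",
            (memory_id, self._to_json(memory)),
        )
        self.db.commit()

    store = upsert  # initial store and upsert behave identically here

    def batch_upsert(self, memories) -> None:
        self.db.executemany(
            "INSERT INTO memories (id, body) VALUES (?, ?) "
            "ON CONFLICT(id) DO UPDATE SET body = excluded.body",
            [(m.id, self._to_json(m)) for m in memories],
        )
        self.db.commit()

    def delete(self, memory_id: str) -> None:
        self.db.execute("DELETE FROM memories WHERE id = ?", (memory_id,))
        self.db.commit()

local_cache = LocalCache()
```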
Performance Tips

Use INT8 Embeddings: 4x smaller than float32, faster network transfer

```python
client.sync.tiers(embedding_format="int8")
```

Limit Embedding Count: Only embed the most important items

```python
client.sync.tiers(
    max_tier0=300,
    embed_limit=100,  # Only top 100 get embeddings
)
```

Cache Transitions: Prefetch predicted next-context

```python
for transition in sync_response.transitions:
    if transition.probability > 0.7:
        prefetch(transition.to)
```

Batch Delta Updates: Process deltas in batches

```python
if len(delta_response.upserts) > 50:
    local_cache.batch_upsert(delta_response.upserts)
```
Why Predictive Sync Works
The Tier 0 ranking combines three signals (sketched in code after this list):
- Vector Similarity (60%): Semantic relevance to your goals
- Transition Probability (30%): Predicted next-context from 30 days of retrieval logs
- Normalized Hotness (20%): Log-normalized access frequency
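As a sketch, the blend looks like the following; the inputs are assumed to be normalized to [0, 1] upstream, and the exact normalization is internal to Papr:

```python
def tier0_score(similarity: float, transition_prob: float, hotness: float) -> float:
    """Illustrative Tier 0 blend: 60% semantic, 30% transition, 20% hotness."""
    return 0.6 * similarity + 0.3 * transition_prob + 0.2 * hotness
```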
This creates a positive feedback loop: more usage → better predictions → higher cache hit rates → lower latency.
See Predictive Memory for details on the scoring algorithm.
Related
- Predictive Memory - How predictive ranking works
- Capability Matrix - All PAPR capabilities
- Python SDK - Full SDK documentation
- Integrations - Framework integration patterns