# Install

```
openclaw skills install cc-soul
```

Zero-vector AI memory engine with self-learning. LOCOMO 76.2% (4th place). 15 original algorithms, open source (MIT).

A zero-vector AI memory engine that learns and improves from every conversation — no vectors, no embeddings, no GPU.
LOCOMO (Long-term Conversational Memory) is the standard benchmark for AI memory systems:
| Type | Accuracy |
|---|---|
| open_domain | 89.4% |
| single_hop | 84.8% |
| multi_hop | 65.7% |
| temporal_reasoning | 62.5% |
| adversarial | 56.5% |
| TOTAL | 76.2% (4th place globally) |
The only symbolic (non-vector) system in the top 5. All systems above use vector databases + LLM; cc-soul uses pure algorithmic recall.
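To make "pure algorithmic recall" concrete, here is a toy vector-free retriever that scores memories by word overlap with the query. This is an illustration only — cc-soul's actual 15 algorithms are not published in this section, so the scoring below (a Jaccard-style overlap) is an assumption chosen for clarity:

```typescript
// Toy symbolic recall: no embeddings, no GPU, just word-overlap scoring.
// NOT cc-soul's real algorithm; a minimal sketch of the vector-free idea.
interface Memory {
  content: string;
}

function tokenize(text: string): Set<string> {
  return new Set(text.toLowerCase().split(/\W+/).filter(Boolean));
}

// Rank memories by shared-word overlap with the query (Jaccard-like score).
function recall(query: string, memories: Memory[], topN = 5): Memory[] {
  const q = tokenize(query);
  return memories
    .map((m) => {
      const t = tokenize(m.content);
      let shared = 0;
      for (const w of q) if (t.has(w)) shared++;
      return { m, score: shared / (q.size + t.size - shared || 1) };
    })
    .filter((x) => x.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, topN)
    .map((x) => x.m);
}
```

The point of the sketch: recall quality comes from how the association graph is built and scored, not from dense vectors.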
Or via npm:

```
npm install @cc-soul/openclaw
# API auto-starts at localhost:18800
```

Verify:

```
curl http://localhost:18800/health
```

If auto-start didn't work, start manually:

```
node ~/.openclaw/plugins/cc-soul/cc-soul/soul-api.js
# or: node node_modules/@cc-soul/openclaw/cc-soul/soul-api.js
# custom port: SOUL_PORT=9900 node ~/.openclaw/plugins/cc-soul/cc-soul/soul-api.js
```

Requires Node.js 20+.
Base URL: `http://localhost:18800` (configurable via the `SOUL_PORT` env var)
Store a memory:

```
curl -X POST http://localhost:18800/memories \
  -H "Content-Type: application/json" \
  -d '{"content": "Alice prefers Python over Java", "user_id": "alice"}'
```

Response: `{"stored": true, "facts_extracted": 2}`
Search:

```
curl -X POST http://localhost:18800/search \
  -H "Content-Type: application/json" \
  -d '{"query": "programming language preference", "user_id": "alice", "top_n": 5}'
```

Response:

```
{
  "memories": [{"content": "Alice prefers Python over Java", "scope": "fact", "confidence": 0.85}],
  "facts": [{"predicate": "prefers", "object": "Python", "confidence": 0.9}],
  "fact_summary": "Prefers Python over Java"
}
```
Health check:

```
curl http://localhost:18800/health
```

Response: `{"status": "ok", "version": "3.2.2", "memoryCount": 1234, "factCount": 567}`
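For programmatic use, the three curl calls above translate directly into a small TypeScript client. The payload shapes mirror the examples; the base URL constant is an assumption (adjust it if you changed `SOUL_PORT`):

```typescript
// Minimal client sketch for the cc-soul HTTP API shown above.
const SOUL_BASE = "http://localhost:18800"; // adjust if SOUL_PORT was changed

// Body for POST /memories, matching the curl example.
function memoryPayload(content: string, userId: string): string {
  return JSON.stringify({ content, user_id: userId });
}

// Body for POST /search, matching the curl example.
function searchPayload(query: string, userId: string, topN = 5): string {
  return JSON.stringify({ query, user_id: userId, top_n: topN });
}

async function post(path: string, body: string): Promise<unknown> {
  const res = await fetch(`${SOUL_BASE}${path}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body,
  });
  return res.json();
}

// Usage (requires a running cc-soul API):
// await post("/memories", memoryPayload("Alice prefers Python over Java", "alice"));
// await post("/search", searchPayload("programming language preference", "alice"));
```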
For optional LLM enhancement, create `~/.cc-soul/data/ai_config.json`:

```
{
  "backend": "openai-compatible",
  "api_base": "https://api.deepseek.com/v1",
  "api_key": "your-key-here",
  "api_model": "deepseek-chat"
}
```
Without an LLM, core recall works locally in under 30 ms. With an LLM, cc-soul adds query rewriting and result reranking. Users configure their own API key; cc-soul never provides or manages LLM credentials.
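The "LLM is optional" behavior implies a graceful-degradation pattern when loading the config. A sketch of that pattern, assuming the field names from the example config; `loadAiConfig` is a hypothetical helper, not cc-soul's actual loader:

```typescript
// Sketch: only enable LLM features when a complete config file exists;
// otherwise fall back to pure local recall. Hypothetical helper.
import { existsSync, readFileSync } from "node:fs";

interface AiConfig {
  backend: string;
  api_base: string;
  api_key: string;
  api_model: string;
}

function loadAiConfig(path: string): AiConfig | null {
  if (!existsSync(path)) return null; // no file: recall still works locally
  try {
    const cfg = JSON.parse(readFileSync(path, "utf8"));
    // Enable LLM features only when every required field is present.
    return cfg.api_base && cfg.api_key && cfg.api_model ? cfg : null;
  } catch {
    return null; // malformed config: degrade gracefully, don't crash recall
  }
}
```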
cc-soul builds a word association network from conversations. The more you talk, the smarter recall becomes.
Every memory has a real-time activation score computed from multiple signals.

Memories consolidate through a three-level hierarchy:

```
L1: Raw memories (thousands)
      → every 6h →
L2: Topic nodes (~80, with hit/miss scoring)
      → every 12h →
L3: Mental model (identity / style / facts / dynamics)
```
Topic nodes that score low (miss > hit) are automatically retired. High-scoring nodes promote to core memory.
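The retire/promote rule for L2 topic nodes can be sketched as a simple scoring function. The promotion threshold below is an assumption for illustration; only the miss > hit retirement rule comes from the text above:

```typescript
// Illustrative L2 topic-node lifecycle: retire when misses outnumber hits,
// promote consistently useful nodes to core memory. Threshold is assumed.
interface TopicNode {
  topic: string;
  hits: number;   // times this node contributed to a successful recall
  misses: number; // times it was activated but unhelpful
}

type Fate = "retire" | "promote" | "keep";

function consolidate(node: TopicNode, promoteAt = 10): Fate {
  if (node.misses > node.hits) return "retire"; // rule from the text above
  if (node.hits >= promoteAt) return "promote"; // graduates to core memory
  return "keep";
}
```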
Different questions need different retrieval strategies.

cc-soul tracks user emotion in real time across five dimensions.
cc-soul dynamically blends personas based on conversation context:
| Persona | Triggers |
|---|---|
| Engineer | Technical questions, code, debugging |
| Friend | Casual chat, personal topics |
| Mentor | Career advice, growth discussions |
| Analyst | Comparisons, data-driven decisions |
| Comforter | Stress, frustration, emotional messages |
| Strategist | Planning, long-term decisions |
| Explorer | Brainstorming, open-ended questions |
| Executor | Task execution, step-by-step guides |
| Teacher | Explanations, learning requests |
| Devil's Advocate | When user needs pushback |
| Socratic | When user says "help me understand" (帮我理解) / "guide me" |
No manual switching needed — persona adapts automatically based on what you're saying.
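A toy keyword-trigger version of this selection logic is sketched below. Real persona blending in cc-soul uses richer conversational context; the trigger words here are assumptions loosely derived from the table above:

```typescript
// Toy persona selection by trigger keywords. NOT cc-soul's blending logic;
// trigger lists are illustrative assumptions based on the persona table.
const PERSONA_TRIGGERS: Record<string, string[]> = {
  Engineer: ["bug", "code", "debug", "error"],
  Comforter: ["stressed", "frustrated", "tired"],
  Teacher: ["explain", "learn", "understand"],
  Strategist: ["plan", "roadmap", "long-term"],
};

function pickPersona(message: string): string {
  const text = message.toLowerCase();
  for (const [persona, words] of Object.entries(PERSONA_TRIGGERS)) {
    if (words.some((w) => text.includes(w))) return persona;
  }
  return "Friend"; // default register for casual chat
}
```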
cc-soul improves itself from every interaction:
The system gets measurably better over time: Hit@3 improves from 30% to 67.5% over 1,200 messages.
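For reference, Hit@3 is the fraction of queries whose expected memory appears among the top 3 recalled results. A self-contained definition (the function name is mine, not from cc-soul's codebase):

```typescript
// Hit@3: share of queries where the expected item is in the top 3 results.
function hitAt3(results: string[][], expected: string[]): number {
  let hits = 0;
  for (let i = 0; i < expected.length; i++) {
    if (results[i].slice(0, 3).includes(expected[i])) hits++;
  }
  return hits / expected.length;
}
```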
| Metric | Value |
|---|---|
| Recall latency (p50) | 127ms |
| Storage | 5.7 MB (vs 49.2 MB for vectors — 8.6x smaller) |
| External API calls | 0 (pure algorithm) |
| LLM dependency | Optional (recall works without LLM) |
- Endpoints: `POST /memories`, `POST /search`, `GET /health`
- Data: `~/.cc-soul/data/` (SQLite)

cc-soul is fully open source under the MIT license. All source code (TypeScript) is included in this package and on GitHub.
If you have security concerns, read the source. Every line is open.