Install

```shell
openclaw skills install openviking-setup
```

OpenViking is an open-source context database designed specifically for AI agents, bringing filesystem-based memory management with tiered context loading and self-evolving memory. This skill guides you through installation and configuration.
```shell
# Python package
pip install --upgrade openviking

# CLI tool
curl -fsSL https://raw.githubusercontent.com/volcengine/OpenViking/main/crates/ov_cli/install.sh | bash
```
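Before continuing, it can help to confirm both pieces are on your PATH. The check below assumes the CLI binary is named `ov` (inferred from the `ov_cli` crate name, so verify against the installer's output):

```shell
python -c "import openviking" 2>/dev/null && echo "python package: ok" || echo "python package: missing"
command -v ov >/dev/null 2>&1 && echo "ov CLI: ok" || echo "ov CLI: missing"
```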
Create `~/.openviking/ov.conf`:

```json
{
  "storage": {
    "workspace": "/home/your-name/openviking_workspace"
  },
  "log": {
    "level": "INFO",
    "output": "stdout"
  },
  "embedding": {
    "dense": {
      "api_base": "https://api.openai.com/v1",
      "api_key": "your-openai-api-key",
      "provider": "openai",
      "dimension": 1536,
      "model": "text-embedding-3-small"
    },
    "max_concurrent": 10
  },
  "vlm": {
    "api_base": "https://api.openai.com/v1",
    "api_key": "your-openai-api-key",
    "provider": "openai",
    "model": "gpt-4o",
    "max_concurrent": 100
  }
}
```
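A quick sanity check can catch config typos before the first run. The helper below is a hypothetical sketch (not part of OpenViking): it parses an `ov.conf` document with the standard `json` module and reports any missing top-level sections from the schema shown above.

```python
import json

# Top-level sections expected in ov.conf, per the example configuration above.
REQUIRED_SECTIONS = ("storage", "log", "embedding", "vlm")

def check_conf(text: str) -> list[str]:
    """Return the list of required top-level sections missing from the config."""
    conf = json.loads(text)
    return [key for key in REQUIRED_SECTIONS if key not in conf]

sample = '''
{
  "storage": {"workspace": "/tmp/openviking_workspace"},
  "log": {"level": "INFO", "output": "stdout"},
  "embedding": {"dense": {"provider": "openai"}, "max_concurrent": 10},
  "vlm": {"provider": "openai", "model": "gpt-4o"}
}
'''
print(check_conf(sample))  # an empty list means all sections are present
```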
OpenViking supports multiple VLM providers:
| Provider | Model Example | Notes |
|---|---|---|
| openai | gpt-4o | Official OpenAI API |
| volcengine | doubao-seed-2-0-pro | Volcengine Doubao |
| litellm | claude-3-5-sonnet | Unified access (Anthropic, DeepSeek, Gemini, etc.) |
For LiteLLM (recommended for flexibility):
```json
{
  "vlm": {
    "provider": "litellm",
    "model": "claude-3-5-sonnet-20241022",
    "api_key": "your-anthropic-key"
  }
}
```
For Ollama (local models):
```json
{
  "vlm": {
    "provider": "litellm",
    "model": "ollama/llama3.1",
    "api_base": "http://localhost:11434"
  }
}
```
OpenViking has a native OpenClaw plugin for seamless integration:
```shell
# Install OpenClaw plugin
pip install openviking-openclaw

# Or from source
git clone https://github.com/volcengine/OpenViking
cd OpenViking/plugins/openclaw
pip install -e .
```
Add to your OpenClaw config:
```yaml
# ~/.openclaw/config.yaml
memory:
  provider: openviking
  config:
    workspace: ~/.openviking/workspace
    tiers:
      l0:
        max_tokens: 4000
        auto_flush: true
      l1:
        max_tokens: 16000
        compression: true
      l2:
        max_tokens: 100000
        archive: true
```
| Tier | Purpose | Token Budget | Behavior |
|---|---|---|---|
| L0 | Active working memory | 4K tokens | Always loaded, fast access |
| L1 | Frequently accessed | 16K tokens | Compressed, on-demand |
| L2 | Archive/cold storage | 100K+ tokens | Semantic search only |
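To make the tier semantics concrete, here is a toy simulation of the budgets in the table above. This is a hypothetical illustration, not the OpenViking implementation: new memories land in L0, and when a tier exceeds its token budget, its oldest entries cascade down to the next tier.

```python
from dataclasses import dataclass, field

@dataclass
class Tier:
    max_tokens: int
    items: list = field(default_factory=list)  # (content, token_count) pairs

    @property
    def tokens(self) -> int:
        return sum(t for _, t in self.items)

class TieredMemory:
    """Toy model of L0/L1/L2 budgets: overflow cascades to the next tier."""

    def __init__(self):
        self.tiers = {"l0": Tier(4000), "l1": Tier(16000), "l2": Tier(100000)}

    def add(self, content: str, tokens: int) -> None:
        self.tiers["l0"].items.append((content, tokens))
        self._spill("l0", "l1")
        self._spill("l1", "l2")

    def _spill(self, src: str, dst: str) -> None:
        s, d = self.tiers[src], self.tiers[dst]
        while s.tokens > s.max_tokens and s.items:
            d.items.append(s.items.pop(0))  # evict oldest entry first

mem = TieredMemory()
for i in range(5):
    mem.add(f"note-{i}", 1500)  # 7500 tokens total exceeds the 4000-token L0 budget
print(mem.tiers["l0"].tokens, mem.tiers["l1"].tokens)  # → 3000 4500
```

The real system compresses L1 entries and archives L2 entries rather than copying them verbatim; only the budget-and-spill flow is modeled here.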
```
~/.openviking/
├── ov.conf                  # Configuration
└── workspace/
    ├── memories/
    │   ├── sessions/        # L0: Active session memory
    │   ├── compressed/      # L1: Compressed memories
    │   └── archive/         # L2: Long-term storage
    ├── resources/           # Files, documents, assets
    └── skills/              # Skill-specific context
```
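If you want to pre-create this layout yourself (OpenViking may also create it on first run; that is an assumption), a stdlib sketch:

```python
from pathlib import Path
import tempfile

def create_workspace(root: Path) -> None:
    """Create the workspace directory layout shown above under `root`."""
    for sub in ("memories/sessions", "memories/compressed", "memories/archive",
                "resources", "skills"):
        (root / "workspace" / sub).mkdir(parents=True, exist_ok=True)

# Demo in a temporary directory instead of ~/.openviking
root = Path(tempfile.mkdtemp())
create_workspace(root)
print(sorted(p.name for p in (root / "workspace").iterdir()))  # → ['memories', 'resources', 'skills']
```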
```python
from openviking import MemoryStore

store = MemoryStore()

# Add to L0
store.add_memory(
    content="User prefers Portuguese language responses",
    metadata={"tier": "l0", "category": "preference"},
)

# Add resource
store.add_resource(
    path="project_spec.md",
    content=open("project_spec.md").read(),
)

# Semantic search across all tiers
results = store.search(
    query="user preferences",
    tiers=["l0", "l1", "l2"],
    limit=10,
)

# Directory-based retrieval (more precise)
results = store.retrieve(
    path="memories/sessions/2026-03-16/",
    recursive=True,
)

# Trigger manual compaction
store.compact()

# View compaction status
status = store.status()
print(f"L0: {status.l0_tokens}/{status.l0_max}")
print(f"L1: {status.l1_tokens}/{status.l1_max}")
```
"No module named 'openviking'"
pip install --user openviking"Embedding model not found"
ov.conf has correct provider and model"L0 overflow"
l0.max_tokens in configstore.compact()"Slow retrieval from L2"
After setup, your agent gains tiered context loading (L0/L1/L2), semantic search across its memories, and persistent, self-evolving memory. The more your agent works, the more context it retains, without token bloat.