Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

OpenViking Setup

Set up OpenViking context database for OpenClaw agents. OpenViking is an open-source context database designed specifically for AI agents with filesystem-bas...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 current installs · 0 all-time installs
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill claims to 'set up OpenViking' and the included scripts do perform installation, configuration file creation, workspace setup, and health checks — so capability matches purpose. However, the registry metadata declares no required environment variables or credentials even though both SKILL.md and scripts clearly require API keys (OpenAI/Anthropic/Volcengine) and optional provider config. The mismatch between declared requirements (none) and actual runtime needs is an incoherence and should be flagged.
Instruction Scope
Instructions and scripts operate on user home files (~/.openviking/ov.conf and ~/.openclaw/config.yaml), create workspace directories, and prompt for or read API keys from environment variables. That scope is appropriate for a setup tool, but the SKILL.md and setup script also instruct/perform network installs and recommend adding secrets into a plaintext JSON config file in the user's home — both expand the trust surface and should be considered sensitive actions.
Install Mechanism
There is no formal install spec in registry metadata, but both SKILL.md and scripts instruct running 'pip install openviking' and a curl | bash command that fetches and executes a script from raw.githubusercontent.com. Fetching and piping remote install scripts to shell is a high-risk pattern (executes remote code). While GitHub raw is a common host, executing its contents without review is risky. The scripts also call subprocess.run(..., shell=True), which increases command-injection exposure if inputs were untrusted.
Credentials
The skill metadata lists no required env vars, yet setup.py and the README prompt for provider selection and expect API keys (OPENAI_API_KEY, ANTHROPIC_API_KEY, VOLCENGINE_API_KEY, OPENVIKING_PROVIDER, OPENVIKING_WORKSPACE). Requiring API keys for embedding/VLM access is reasonable for the stated purpose, but the omission from declared requirements is an incoherence. Also, secrets are written directly into ~/.openviking/ov.conf (plaintext JSON), which is expected for local config but has implications for file permissions and leakage.
Persistence & Privilege
The skill does not request 'always: true' or otherwise force inclusion. It creates configuration under the user's home directory and workspace directories and prints instructions for adding OpenViking to the OpenClaw config but does not modify other skill or system-wide configurations. This level of persistence is typical and proportional for a setup tool.
What to consider before installing
This skill appears to implement an OpenViking installer and config generator, but there are three things to watch before running it:

  1. API keys: the package will ask for and store API keys (OpenAI/Anthropic/Volcengine) even though the registry metadata didn't declare them. Confirm you're comfortable providing those keys.
  2. Remote install script: it suggests running a curl | bash installer from raw.githubusercontent.com. Inspect that script (open the URL in a browser) before executing, or prefer installing only from PyPI or cloned source you review.
  3. Plaintext secrets: the setup writes your API keys into ~/.openviking/ov.conf in plaintext. Ensure file permissions are restrictive (chmod 600) or use a secrets manager if available.

Recommended precautions: review the remote install script contents and the included Python scripts, run the setup in an isolated environment (container/VM) if unsure, set minimal-scope API keys, and back up your OpenClaw config before changing it. If you want to proceed but reduce risk, skip the curl | bash step, install openviking only from a vetted source, and manually create the config with appropriately secured credentials.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
context · database · integration · latest · memory · openclaw

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

OpenViking Setup for OpenClaw

OpenViking brings filesystem-based memory management to AI agents with tiered context loading and self-evolving memory. This skill guides you through installation and configuration.

What OpenViking Provides

  • Filesystem paradigm: Unified context management (memories, resources, skills)
  • Tiered loading (L0/L1/L2): Load only what's needed, save tokens
  • Self-evolving memory: Gets smarter with use
  • OpenClaw plugin: Native integration available

Prerequisites

  • Python 3.10+
  • Go 1.22+ (for AGFS components)
  • GCC 9+ or Clang 11+ (for core extensions)
  • VLM model access (for image/content understanding)
  • Embedding model access (for vectorization)
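The prerequisites above can be verified before installing. A minimal sketch (the `check_prerequisites` helper is illustrative, not part of OpenViking; it only checks that the tools are on PATH, not their exact versions):

```python
import shutil
import sys

def check_prerequisites():
    """Report whether the documented toolchain appears to be present."""
    return {
        # Python 3.10+ is required by the openviking package
        "python": sys.version_info >= (3, 10),
        # Go is needed for the AGFS components
        "go": shutil.which("go") is not None,
        # GCC or Clang is needed to build the core extensions
        "cc": any(shutil.which(c) is not None for c in ("gcc", "clang")),
    }

if __name__ == "__main__":
    for tool, ok in check_prerequisites().items():
        print(f"{tool}: {'ok' if ok else 'MISSING'}")
```

VLM and embedding access are credential checks rather than toolchain checks, so they are better verified by the health-check step of the setup itself.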

Quick Start

Step 1: Install OpenViking

# Python package
pip install openviking --upgrade --force-reinstall

# CLI tool
curl -fsSL https://raw.githubusercontent.com/volcengine/OpenViking/main/crates/ov_cli/install.sh | bash

Step 2: Create Configuration

Create ~/.openviking/ov.conf:

{
  "storage": {
    "workspace": "/home/your-name/openviking_workspace"
  },
  "log": {
    "level": "INFO",
    "output": "stdout"
  },
  "embedding": {
    "dense": {
      "api_base": "https://api.openai.com/v1",
      "api_key": "your-openai-api-key",
      "provider": "openai",
      "dimension": 1536,
      "model": "text-embedding-3-small"
    },
    "max_concurrent": 10
  },
  "vlm": {
    "api_base": "https://api.openai.com/v1",
    "api_key": "your-openai-api-key",
    "provider": "openai",
    "model": "gpt-4o",
    "max_concurrent": 100
  }
}
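Since the security scan notes that API keys end up in this file in plaintext, it is worth generating the file programmatically, pulling the key from the environment and restricting permissions to the owner. A minimal sketch, assuming the key is supplied via the OPENAI_API_KEY environment variable (the `write_config` helper is illustrative, not part of the OpenViking installer):

```python
import json
import os
from pathlib import Path

def write_config(path=Path.home() / ".openviking" / "ov.conf"):
    """Write ov.conf with the API key taken from the environment,
    then restrict the file to the owner (chmod 600)."""
    key = os.environ.get("OPENAI_API_KEY", "")
    config = {
        "storage": {"workspace": str(Path.home() / "openviking_workspace")},
        "log": {"level": "INFO", "output": "stdout"},
        "embedding": {
            "dense": {
                "api_base": "https://api.openai.com/v1",
                "api_key": key,
                "provider": "openai",
                "dimension": 1536,
                "model": "text-embedding-3-small",
            },
            "max_concurrent": 10,
        },
        "vlm": {
            "api_base": "https://api.openai.com/v1",
            "api_key": key,
            "provider": "openai",
            "model": "gpt-4o",
            "max_concurrent": 100,
        },
    }
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(config, indent=2))
    path.chmod(0o600)  # plaintext key: readable by the owner only
    return path
```

This avoids ever pasting the key into an editor, and the chmod addresses the file-permission concern raised in the scan.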

Step 3: Configure Provider

OpenViking supports multiple VLM providers:

Provider     Model Example          Notes
openai       gpt-4o                 Official OpenAI API
volcengine   doubao-seed-2-0-pro    Volcengine Doubao
litellm      claude-3-5-sonnet      Unified access (Anthropic, DeepSeek, Gemini, etc.)

For LiteLLM (recommended for flexibility):

{
  "vlm": {
    "provider": "litellm",
    "model": "claude-3-5-sonnet-20241022",
    "api_key": "your-anthropic-key"
  }
}

For Ollama (local models):

{
  "vlm": {
    "provider": "litellm",
    "model": "ollama/llama3.1",
    "api_base": "http://localhost:11434"
  }
}

OpenClaw Integration

Plugin Installation

OpenViking has a native OpenClaw plugin for seamless integration:

# Install OpenClaw plugin
pip install openviking-openclaw

# Or from source
git clone https://github.com/volcengine/OpenViking
cd OpenViking/plugins/openclaw
pip install -e .

Configuration for OpenClaw

Add to your OpenClaw config:

# ~/.openclaw/config.yaml
memory:
  provider: openviking
  config:
    workspace: ~/.openviking/workspace
    tiers:
      l0:
        max_tokens: 4000
        auto_flush: true
      l1:
        max_tokens: 16000
        compression: true
      l2:
        max_tokens: 100000
        archive: true
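A quick sanity check on these tier budgets is that they should grow from hot to cold storage (L0 < L1 < L2). The `validate_tiers` helper below is an assumption for illustration, not part of the plugin:

```python
def validate_tiers(tiers):
    """Check that tier token budgets increase from L0 (hot) to L2 (cold)."""
    budgets = [tiers[t]["max_tokens"] for t in ("l0", "l1", "l2")]
    if not (budgets[0] < budgets[1] < budgets[2]):
        raise ValueError(f"tier budgets should increase L0<L1<L2, got {budgets}")
    return budgets

# Mirrors the YAML configuration values
tiers = {
    "l0": {"max_tokens": 4000, "auto_flush": True},
    "l1": {"max_tokens": 16000, "compression": True},
    "l2": {"max_tokens": 100000, "archive": True},
}
print(validate_tiers(tiers))  # prints [4000, 16000, 100000]
```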

Memory Tiers Explained

Tier   Purpose                  Token Budget    Behavior
L0     Active working memory    4K tokens       Always loaded, fast access
L1     Frequently accessed      16K tokens      Compressed, on-demand
L2     Archive/cold storage     100K+ tokens    Semantic search only

How Tiers Work

  1. New context goes to L0
  2. L0 fills → oldest items compressed to L1
  3. L1 fills → oldest items archived to L2
  4. Retrieval searches all tiers, returns relevant context
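The promotion flow above can be sketched as a toy model. Item counts stand in for token budgets here; this is a conceptual illustration, not the real OpenViking implementation:

```python
from collections import deque

class TieredMemory:
    """Toy model of the promotion flow: new items enter L0; when a tier's
    budget is exceeded, its oldest item cascades down (L0 -> L1 -> L2)."""

    def __init__(self, budgets=(4, 16, 100)):      # item counts, not tokens
        self.budgets = budgets
        self.tiers = [deque(), deque(), deque()]   # L0, L1, L2

    def add(self, item):
        self.tiers[0].append(item)      # 1. new context goes to L0
        for level in (0, 1):            # 2-3. overflow cascades downward
            while len(self.tiers[level]) > self.budgets[level]:
                oldest = self.tiers[level].popleft()
                self.tiers[level + 1].append(oldest)

    def search(self, predicate):
        # 4. retrieval scans all tiers, hottest first
        return [x for tier in self.tiers for x in tier if predicate(x)]
```

With small budgets the cascade is easy to observe: after seven `add` calls on `TieredMemory(budgets=(2, 3, 10))`, the two newest items sit in L0, the next three in L1, and the two oldest in L2.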

Directory Structure

~/.openviking/
├── ov.conf                 # Configuration
└── workspace/
    ├── memories/
    │   ├── sessions/       # L0: Active session memory
    │   ├── compressed/     # L1: Compressed memories
    │   └── archive/        # L2: Long-term storage
    ├── resources/          # Files, documents, assets
    └── skills/             # Skill-specific context
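To pre-create this layout by hand, a small sketch (`create_workspace` is illustrative; the real installer may lay these directories out itself):

```python
from pathlib import Path

def create_workspace(root):
    """Create the OpenViking workspace directory skeleton under root."""
    root = Path(root)
    for sub in (
        "memories/sessions",    # L0: active session memory
        "memories/compressed",  # L1: compressed memories
        "memories/archive",     # L2: long-term storage
        "resources",            # files, documents, assets
        "skills",               # skill-specific context
    ):
        (root / sub).mkdir(parents=True, exist_ok=True)
    return root
```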

Usage Patterns

Adding Memory

from openviking import MemoryStore

store = MemoryStore()

# Add to L0
store.add_memory(
    content="User prefers Portuguese language responses",
    metadata={"tier": "l0", "category": "preference"}
)

# Add resource (use a context manager so the file handle is closed)
with open("project_spec.md") as f:
    store.add_resource(
        path="project_spec.md",
        content=f.read()
    )

Retrieving Context

# Semantic search across all tiers
results = store.search(
    query="user preferences",
    tiers=["l0", "l1", "l2"],
    limit=10
)

# Directory-based retrieval (more precise)
results = store.retrieve(
    path="memories/sessions/2026-03-16/",
    recursive=True
)

Compaction

# Trigger manual compaction
store.compact()

# View compaction status
status = store.status()
print(f"L0: {status.l0_tokens}/{status.l0_max}")
print(f"L1: {status.l1_tokens}/{status.l1_max}")

Best Practices

Memory Hygiene

  1. Categorize entries: Use metadata tags for better retrieval
  2. Flush L0 regularly: Let compaction run, don't hoard
  3. Use directory structure: Organize by project/topic
  4. Review L2 periodically: Archive stale memories

Token Efficiency

  1. Let OpenViking manage tiers automatically
  2. Use semantic search for L2 (don't load entire archive)
  3. Compress verbose content before adding to L1
  4. Keep L0 under 50% capacity for best performance

OpenClaw Workflow

  1. Session starts → OpenViking loads L0
  2. Conversation proceeds → context auto-promoted to L1/L2
  3. Long gaps → L2 provides relevant historical context
  4. Sessions compound → agent gets smarter over time

Troubleshooting

Common Issues

"No module named 'openviking'"

  • Ensure Python 3.10+ is active
  • Try pip install --user openviking

"Embedding model not found"

  • Check ov.conf has correct provider and model
  • Verify API key is valid

"L0 overflow"

  • Reduce l0.max_tokens in config
  • Manually call store.compact()

"Slow retrieval from L2"

  • Consider pre-loading frequently accessed resources to L1
  • Use directory-based retrieval for better precision


What Gets Better

After setup, your agent gains:

  1. Persistent memory across sessions
  2. Smarter retrieval with semantic + directory search
  3. Token efficiency with tiered loading
  4. Self-improvement as context accumulates
  5. Observable context with retrieval trajectories

The more your agent works, the more context it retains—without token bloat.

Files

4 total