Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Nirvana Skill

v1.0.0

Privacy-preserving context stripper for OpenClaw. Strip SOUL/USER/MEMORY before cloud API calls. Assumes you have your own local LLM. Saves 85%+ tokens, prot...

by Shiva&G (@shivaclaw)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for shivaclaw/project-nirvana-skill.

Prompt Preview: Install & Setup
Install the skill "Nirvana Skill" (shivaclaw/project-nirvana-skill) from ClawHub.
Skill page: https://clawhub.ai/shivaclaw/project-nirvana-skill
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install shivaclaw/project-nirvana-skill

ClawHub CLI

Package manager switcher

npx clawhub@latest install project-nirvana-skill

Security Scan

Capability signals

  • Crypto: Can make purchases

These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.

VirusTotal: Benign

OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name and description (privacy-preserving context stripper using a local LLM) align with the SKILL.md content: it describes local-first routing, stripping SOUL/USER/MEMORY, and cloud fallback. However, the skill is instruction-only (no code, no declared repo/homepage), yet the README shows CLI install/config commands referencing a repo ('shivaclaw/nirvana-local') and edits to user config. The lack of provenance (source unknown, homepage none) is notable, though not impossible for this purpose.
Instruction Scope
The runtime instructions tell the agent/operator to strip system prompts and user/memory files and to edit ~/.openclaw/workspace/openclaw.json; they also promise logging/auditing and local caching. Because this is instruction-only, the skill relies on natural-language directions to modify system prompt handling and to read/write local files (SOUL.md, USER.md, MEMORY.md, chat history). That gives broad discretion to the agent and is a high-impact operation (manipulating what is or isn't sent to the cloud). The pre-scan flagged a 'system-prompt-override' pattern — changing or instructing changes to system prompts is exactly the sensitive behavior the scanner warned about.
Install Mechanism
There is no install spec and no code files; this is instruction-only, which minimizes direct install-time risk (nothing is downloaded or executed by the skill package itself). The SKILL.md includes example CLI commands for users, but the registry metadata shows no verified source or install artifact to fetch automatically.
Credentials
Declared requirements list no env vars or config paths, but the instructions explicitly reference and instruct modification of a user config file (~/.openclaw/workspace/openclaw.json) and local files (SOUL.md, USER.md, MEMORY.md, chat history). That discrepancy (no required config paths declared but SKILL.md expects access to them) and the implicit need to read/write sensitive local files is disproportionate to an unverified, instruction-only skill without provenance.
Persistence & Privilege
The skill does not request 'always: true' and uses default autonomous invocation settings, which is normal. It instructs enabling the plugin in the user's OpenClaw config and to cache/cloud-fallback behavior — those are reasonable for a plugin of this type but should be performed only after the user verifies the implementation. There is no evidence the skill attempts to modify other skills or global agent settings beyond its own plugin entry.
Scan Findings in Context
[system-prompt-override] expected: The skill's explicit goal is to change what system prompt/context is forwarded to cloud APIs, so detection of system-prompt related patterns is expected. However, system-prompt override is a high-risk action because it can be used maliciously to alter agent behavior; since this skill is instruction-only and has no verifiable source code, the pattern increases suspicion.
What to consider before installing
This skill claims to improve privacy by stripping agent identity and user memory before cloud calls, which is a sensitive operation done by language instructions rather than verified code. Before installing or enabling it:

  1. Ask for the source repository or a signed package and review the implementation, so you can confirm the stripping is performed locally and cannot be subverted.
  2. Do not enable the plugin system-wide until you can inspect how it reads/writes SOUL.md, USER.md, and MEMORY.md, and where audit logs are stored (ensure logs do not re-include private data and are stored locally/encrypted).
  3. Test with non-sensitive queries in an isolated environment to confirm cloud calls contain only sanitized queries.
  4. Prefer a version with explicit code you can audit, or an official release on a trusted registry; avoid relying solely on natural-language instructions to perform privileged operations.

If the publisher cannot provide verifiable source or implementation details, treat the skill as high risk and do not enable it for sensitive data.
SKILL.md:59: Prompt-injection style instruction pattern detected.
About static analysis
These patterns were detected by automated regex scanning. They may be normal for skills that integrate with external APIs. Check the VirusTotal and OpenClaw results above for context-aware analysis.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97d6zwefs64zy9fyg8tkqr5r185e0pp
13 downloads
0 stars
1 version
Updated 4h ago
v1.0.0
MIT-0

Nirvana Local: Privacy-Preserving Context Stripper

For teams that already have a local inference engine. Just strip the private data before cloud API calls. Nothing else.


What This Is

Nirvana Local is a lightweight skill for OpenClaw agents that already have a local LLM inference engine (Ollama, Llamafile, vLLM, LM Studio, etc.).

It does one thing: strips private context before cloud API calls.

  • ✅ Removes SOUL.md (agent identity)
  • ✅ Removes USER.md (user personal data)
  • ✅ Removes MEMORY.md (agent memories)
  • ✅ Removes chat history (your actual questions)
  • ✅ Leaves only the task at hand
  • ✅ Logs every boundary crossing (audit trail)
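
The stripping behavior described above can be sketched as a small filter over the agent's context bundle. This is a hypothetical illustration, not the skill's actual implementation (the package ships no code); the `ContextBundle` and `strip_for_cloud` names are invented for the sketch.

```python
# Illustrative sketch of the context-stripping step described above.
# Not the skill's real code; all names here are assumptions.
from dataclasses import dataclass, field

@dataclass
class ContextBundle:
    query: str
    soul: str = ""                 # SOUL.md (agent identity)
    user: str = ""                 # USER.md (user personal data)
    memory: str = ""               # MEMORY.md (agent memories)
    chat_history: list = field(default_factory=list)

def strip_for_cloud(ctx: ContextBundle, audit: list) -> dict:
    """Return only the sanitized query; record what was removed."""
    removed = [name for name, value in [
        ("SOUL.md", ctx.soul), ("USER.md", ctx.user),
        ("MEMORY.md", ctx.memory), ("chat history", ctx.chat_history),
    ] if value]
    audit.append({"stripped": removed, "sent": ctx.query})
    return {"query": ctx.query}

audit_log = []
ctx = ContextBundle(query="How do I build synbio?",
                    soul="I am Claw...", user="Name: Alice",
                    chat_history=["earlier question"])
payload = strip_for_cloud(ctx, audit_log)
print(payload)                     # only the bare query survives
print(audit_log[0]["stripped"])    # what the audit trail records
```

The key design point: the cloud payload is rebuilt from scratch rather than redacted in place, so nothing private can leak through by omission.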

Why You Need This

Today: Privacy Leak

When you ask your agent a question, the full system prompt goes to cloud:

Your question: "How do I build synbio?"

System prompt sent to OpenAI:
- Agent identity (SOUL.md)
- Your personal info (USER.md)
- Agent memories (MEMORY.md)
- Full chat history
- Total overhead: 2,000–5,000 extra tokens

With Nirvana Local: Protected

Your question: "How do I build synbio?"

Query to local LLM first:
- Try local inference (free)
- If local fails, ask the cloud

Cloud API call (if needed):
- Original question: [STRIPPED]
- System prompt: [STRIPPED]
- Sanitized query: "How do I build synbio?" (no context)
- Cloud never sees: SOUL, USER, MEMORY, chat history
- Cost: $0.01–$0.03 (no context overhead)

Installation

Prerequisites

  • OpenClaw 2026.3.24+
  • Your own local LLM running at any endpoint (Ollama, vLLM, LM Studio, etc.)

Setup (3 minutes)

# 1. Install skill
clawhub install shivaclaw/nirvana-local

# 2. Configure your local LLM endpoint
openclaw nirvana-local configure \
  --local-endpoint http://localhost:11434 \
  --local-model qwen2.5:7b

# 3. Verify
openclaw nirvana-local status

# Output:
# ✅ Local LLM: qwen2.5:7b @ localhost:11434
# ✅ Privacy audit: enabled
# ✅ Context stripper: active

How It Works

Routing Decision Logic

Agent receives your question
    ↓
Try local LLM first
(qwen2.5:7b, Mistral, Llama, whatever you have)
    ↓
┌─────────────────────────────────────────┐
│ Success?                                 │
└─────────────────────────────────────────┘
   ↙ YES (80%)           ↘ NO (20%)
   
Return local answer    Ask cloud for help
                             ↓
                    Strip private context
                    (SOUL, USER, MEMORY)
                             ↓
                    Sanitized query to cloud
                    "How do I build synbio?"
                    (no personal data)
                             ↓
                    Cache response locally
                    Agent learns
                             ↓
                    Return integrated answer
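
The routing diagram above reduces to a short local-first fallback loop. A minimal sketch under assumed interfaces (a local model that returns None on failure; the function names and cache shape are invented for illustration):

```python
# Hypothetical sketch of the local-first routing shown above; the real skill
# is instruction-only, so nothing here is its actual code.
def route(query: str, local_llm, cloud_llm, cache: dict) -> str:
    if query in cache:                 # cached cloud answers are reused
        return cache[query]
    answer = local_llm(query)          # try local inference first (free)
    if answer is not None:             # "YES" branch: local success
        return answer
    sanitized = query                  # "NO" branch: strip all context,
    answer = cloud_llm(sanitized)      # send only the bare query
    cache[query] = answer              # cache so the agent "learns"
    return answer

# Toy stand-ins: local model knows one topic, cloud answers anything.
local = lambda q: "local answer" if "salary" in q else None
cloud = lambda q: "cloud answer"
cache = {}
print(route("salary ranges?", local, cloud, cache))           # handled locally
print(route("How do I build synbio?", local, cloud, cache))   # cloud fallback
```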

What Gets Stripped

Always Removed:

  • SOUL.md (agent identity)
  • USER.md (personal data)
  • MEMORY.md (agent memories)
  • Chat history (your actual questions)
  • Session context (private workstreams)

What the Cloud Gets:

  • Sanitized query only
  • Task-specific information
  • Audit trail (transparent logging)

Configuration

Basic Setup

Edit ~/.openclaw/workspace/openclaw.json:

{
  "plugins": {
    "nirvana-local": {
      "enabled": true,
      "local_llm": {
        "endpoint": "http://localhost:11434",
        "model": "qwen2.5:7b",
        "timeout_ms": 180000,
        "api_format": "openai-compatible"
      },
      "privacy": {
        "strip_soul": true,
        "strip_user": true,
        "strip_memory": true,
        "strip_chat_history": true,
        "audit_logging": true
      },
      "routing": {
        "local_threshold": 0.75,
        "max_local_context_tokens": 8000,
        "cloud_fallback": true
      }
    }
  }
}

Custom API Format

If your local LLM uses a different API, set api_format to "custom" and pick a handler ("llamafile", "vllm", "lm-studio", etc.):

{
  "plugins": {
    "nirvana-local": {
      "local_llm": {
        "endpoint": "http://your-server:5000",
        "model": "your-model",
        "api_format": "custom",
        "custom_api_handler": "llamafile"
      }
    }
  }
}

Privacy Audit Trail

View What Gets Stripped

# See every boundary crossing
openclaw nirvana-local audit-log --tail 20

# Output:
# [2026-04-24 14:23:45] LOCAL HANDLING
# Question: "What's my salary range for synbio roles?"
# Handled by: qwen2.5:7b locally
# Private data: None exposed
# Cost: $0

# [2026-04-24 14:25:12] CLOUD FALLBACK (WITH STRIPPING)
# Original question: [STRIPPED]
# Sanitized query sent: "What are typical salary ranges in synthetic biology?"
# Private data stripped: SOUL.md, USER.md, chat history
# Cost: $0.02

Transparency

Every cloud API call is logged with:

  • What was stripped
  • What was sent
  • What was cached
  • Cost incurred
  • Privacy boundary verified

Supported Local LLMs

| Provider | Endpoint | API Format |
|---|---|---|
| Ollama | http://localhost:11434 | openai-compatible |
| Llamafile | http://localhost:8000 | openai-compatible |
| vLLM | http://localhost:8000 | openai-compatible |
| LM Studio | http://localhost:1234 | openai-compatible |
| Text Generation WebUI | http://localhost:5000 | custom |
| GPT4All | http://localhost:4891 | custom |
| LocalAI | http://localhost:8080 | openai-compatible |
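
For the openai-compatible rows above, a quick sanity check is to probe the standard GET /v1/models route those servers expose. This helper only builds the probe URL (the function name is invented; actually requesting it requires the server to be running):

```python
# Builds the standard models-listing URL for an openai-compatible server.
# Probing it (e.g. with urllib) requires the local server to be running,
# so this sketch only constructs the URL.
def models_url(endpoint: str) -> str:
    return endpoint.rstrip("/") + "/v1/models"

# Endpoints from the table above:
for ep in ["http://localhost:11434",    # Ollama
           "http://localhost:8000",     # Llamafile / vLLM
           "http://localhost:1234"]:    # LM Studio
    print(models_url(ep))
```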

Philosophy

You own the learning. The cloud provides intelligence.

Without Nirvana Local:

  • Cloud provider learns from your private data every time you ask a question
  • You train their next model
  • Your personal information becomes their training corpus
  • You pay for the privilege

With Nirvana Local:

  • Your local agent learns from cloud responses
  • Your private data never leaves your system
  • Cloud provider learns nothing about you
  • You own all the knowledge

Cost Savings

Example: 10 Questions/Day

Today (without Nirvana Local):

  • 2,000 tokens/question (full context sent)
  • 20,000 tokens/day
  • $0.60/day (OpenAI GPT-4)
  • $18/month
  • Privacy: Compromised

With Nirvana Local:

  • 80% local (free, private)
  • 20% cloud (sanitized, no context overhead)
  • 300 tokens/question average
  • 3,000 tokens/day
  • $0.09/day
  • $2.70/month
  • Privacy: Protected

Savings: $15.30/month + 100% privacy protection
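
The arithmetic above is internally consistent if we assume the implied rate of $0.03 per 1K tokens (the README never states the rate explicitly); a quick recomputation:

```python
# Recomputing the example numbers above, under the ASSUMED rate of
# $0.03 per 1K tokens (not stated explicitly in the README).
rate = 0.03 / 1000                       # dollars per token

# Without Nirvana Local: 10 questions/day, 2,000 tokens each
before_daily = 10 * 2000 * rate          # $0.60/day
before_monthly = before_daily * 30       # $18.00/month

# With Nirvana Local: 300 tokens/question average (80% local, 20% cloud)
after_daily = 10 * 300 * rate            # $0.09/day
after_monthly = after_daily * 30         # $2.70/month

savings = before_monthly - after_monthly
print(f"${savings:.2f}/month saved")     # $15.30/month
```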


When to Use

✅ Perfect For

  • Agents with local LLMs already running
  • Privacy-critical deployments (code, healthcare, legal, finance)
  • Cost-conscious teams (85% savings)
  • Air-gapped environments (local + selective cloud)

⚠️ When to Use Full Plugin

  • Need automated Ollama + model setup
  • No local LLM currently available
  • Want out-of-box simplicity

Support


License

MIT-0 — Free to use, modify, and redistribute. No attribution required.


Your privacy is yours to keep. Nirvana Local makes it happen.
