Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

rocky-know-how

v3.0.1

Learning knowledge skill v3.0.0 — Full Auto-Closed-Loop experience system. 4-event Hook integration (bootstrap/compaction/reset), LLM dual-judgment, triple-l...

0 stars · 209 downloads · 1 current · 1 all-time
by Rocky.Tian @rockytian-top

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for rockytian-top/rocky-know-how.

Prompt preview: Install & Setup
Install the skill "rocky-know-how" (rockytian-top/rocky-know-how) from ClawHub.
Skill page: https://clawhub.ai/rockytian-top/rocky-know-how
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: bash
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install rockytian-top/rocky-know-how

ClawHub CLI

Package manager switcher

npx clawhub@latest install rocky-know-how
Security Scan
Capability signals
Requires OAuth token · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Suspicious (view report)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill's code and scripts match the described purpose: hooks for before/after compaction, LLM-based dual judgment, and local triple-layer storage. However, the handler reads the agent's OpenClaw configuration and auth-profiles to locate provider base URLs and tokens (resolveProviderInfo), which is not declared in the registry metadata. Reading agent-config/auth files is plausible for a skill that calls LLMs, but it is a capability that should be explicitly documented.
Instruction Scope
SKILL.md and handler.js explicitly inject reminders into the systemPrompt on agent:bootstrap and the pre-scan flagged 'system-prompt-override'. Modifying the system prompt is a powerful action (can alter model behavior, persist instructions, or influence subsequent decisions). The runtime also automatically runs scripts that read/write ~/.openclaw/.learnings, spawn curl to LLM endpoints, and run background auto-review that may autonomously append/write experiences. Those behaviors go beyond passive helpers and grant broad influence over agent behavior and persistent state.
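
The injection pattern described above can be sketched as follows. This is an illustration only: the hook signature (`event.systemPrompt`, a returned mutation) is an assumed shape, not the skill's actual interface.

```javascript
// Illustrative sketch only; the real hook API in handler.js may differ.
// A bootstrap hook that appends a reminder to the agent's system prompt.
function onAgentBootstrap(event) {
  const reminder =
    '\n[experience] Check ~/.openclaw/.learnings before retrying failed steps.';
  // Whatever is appended here persists for the whole session and steers every
  // subsequent model call, which is why scanners flag system-prompt overrides.
  return { ...event, systemPrompt: (event.systemPrompt || '') + reminder };
}
```

Even a benign-looking append like this has session-wide effect, so it deserves the same review as executable code.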
Install Mechanism
The install spec performs no network downloads or third-party package installs; the skill is delivered as code plus scripts. That reduces supply-chain risk. The code runs local shell scripts and node code (handler.js) and executes curl via execSync, which is expected for this functionality.
Credentials
The registry declares no required env vars, yet handler.js reads ~/.openclaw/openclaw.json and ~/.openclaw/agents/.../auth-profiles.json to obtain provider configuration and OAuth tokens. It will call provider endpoints using those tokens. While using the agent's configured LLM provider is functionally reasonable, the skill does this without declaring it requires access to those files or credentials. The code also passes API keys/tokens to curl on the command line (process-visible), which is an implementation risk for credential exposure.
Persistence & Privilege
always:false and user-invocable:true are reasonable, but the skill registers hook handlers that run on agent lifecycle events and will autonomously call LLMs and write to ~/.openclaw/.learnings/drafts/pending/experiences.md. The combination of (a) the ability to inject into systemPrompt at bootstrap and (b) autonomous background processing (auto-review, promote/demote) increases the blast radius if you do not trust the code. The skill does not modify other skills' configs, but it does create and manage persistent data in the user's OpenClaw home.
Scan Findings in Context
[system-prompt-override] expected: The skill explicitly injects reminders into the systemPrompt on agent:bootstrap, which matches the detection. That capability is functionally plausible for an 'experience reminder' feature, but it's high-risk because system-prompt modifications can change agent behavior and be used for prompt-injection attacks.
What to consider before installing
This skill implements an automated, persistent learning loop that (a) inspects your agent configuration to find model providers and tokens, (b) calls those LLM endpoints on its own, (c) writes and archives content under ~/.openclaw/.learnings, and (d) injects text into the agent's system prompt at bootstrap. These are coherent with an automated knowledge-capture tool, but they are sensitive capabilities. Before installing:

1. Review handler.js's resolveProviderInfo (it reads ~/.openclaw/openclaw.json and auth-profiles.json) and confirm you are comfortable letting the skill use your agent's provider tokens.
2. To limit risk, disable autonomous invocation (platform option) or run the skill in a disposable/sandbox workspace first.
3. Audit the provider.baseUrl entries in your openclaw config to ensure the skill won't contact unexpected endpoints; the code uses curl and sends your provider token in an Authorization header (note: the token appears on the command line during execution).
4. If you do not want systemPrompt changes, do not enable the bootstrap hook, or modify the hook to only log rather than inject.
5. Consider running the scripts manually as a non-production user, inspect the files produced under ~/.openclaw/.learnings, and back up your existing OpenClaw config before enabling the skill.

If you cannot audit or sandbox this code, treat it as high-risk and avoid installing it on sensitive or production agents.
hooks/openclaw/handler.js:203
Shell command execution detected (child_process).
Patterns worth reviewing
These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

📚 Clawdis
OS: macOS · Linux
Bins: bash
Latest: vk978xpgkjdez1zg2x3nygy8z0h85egec
209 downloads
0 stars
38 versions
Updated 3h ago
v3.0.1
MIT-0
macOS, Linux

rocky-know-how v3.0.0

Experience & Knowledge Auto-Learning System for OpenClaw Agents

Let your AI agent learn from failures, record automatically after successes, and close the experience loop.


🎯 When to Use

| Scenario | Action |
| --- | --- |
| Failed 2+ times | `bash scripts/search.sh "keyword"` |
| Solved after failures | `bash scripts/record.sh "problem" "pitfall" "solution" "prevention" "tags" "area"` |
| Auto on compaction | Fully automatic — Hook extracts context → LLM judges → writes |
| Auto on reset | Fully automatic — Hook saves pending context |
| Tag used 3x in 7 days | `bash scripts/promote.sh` |
| Tag unused for 30 days | `bash scripts/demote.sh` |
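
The promote/demote thresholds in the table above can be sketched as follows. This is assumed logic for illustration; the real rules live in promote.sh and demote.sh.

```javascript
// Sketch of the promotion/demotion thresholds (assumed shapes, not the
// actual shell-script implementation).
const DAY_MS = 24 * 60 * 60 * 1000;

// HOT promotion: a tag used at least 3 times within the last 7 days.
function shouldPromote(useTimestamps, now = Date.now()) {
  const recent = useTimestamps.filter((t) => now - t <= 7 * DAY_MS);
  return recent.length >= 3;
}

// Demotion: a tag untouched for 30 or more days drops out of the HOT layer.
function shouldDemote(lastUsedAt, now = Date.now()) {
  return now - lastUsedAt >= 30 * DAY_MS;
}
```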

🏗️ Architecture

Triple-Layer Storage

~/.openclaw/.learnings/
├── experiences.md    ← Main data (v1 compatible, all experiences)
├── memory.md         ← HOT layer (≤100 lines, always loaded)
├── domains/          ← WARM layer (by area: infra, code, wx...)
│   ├── infra.md
│   ├── code.md
│   └── global.md
├── projects/         ← WARM layer (by project)
├── archive/          ← COLD layer (90+ days)
├── drafts/           ← Auto-generated drafts (LLM judged)
├── pending/          ← Session context (before processing)
└── corrections.md    ← Correction log (auto-dedup)

4-Event Hook Integration

| Event | Trigger | Function |
| --- | --- | --- |
| `agent:bootstrap` | Agent startup | Inject experience reminder into systemPrompt |
| `before_compaction` | Before context compression | Extract task/tools/errors → save to pending/ + auto-search |
| `after_compaction` | After compression | Core: LLM dual-judge → draft → create/append → archive |
| `before_reset` | Before session reset | Save context as pending (fallback) |

🔄 Full Auto-Closed-Loop

Agent Session (in conversation)
    │
    ▼
before_compaction (handler.js:978)
    ├─ extractContextFromMessages() → task, tools, errors
    ├─ savePendingLearnings() → pending/*.json
    └─ autoSearch() → inject related experiences
    │
    ▼
after_compaction (handler.js:1014)
    ├─ resolveProviderInfo() → get LLM provider (with OAuth support)
    │
    ├─ processPendingItem() (handler.js:678)
    │   │
    │   ├─ No provider → Keyword fallback
    │   │   ├─ Similar found → append-record.sh
    │   │   └─ No similar → record.sh
    │   │
    │   └─ Has provider → LLM Dual-Judgment
    │       ├─ callLLMJudge() → worth=true?
    │       │   └─ Yes → writeDraftWithJudge()
    │       └─ decideCreateOrAppend()
    │           ├─ append → append-record.sh
    │           └─ create → record.sh
    │
    ├─ runAutoReview() → background audit
    ├─ Archive pending → pending/archive/
    └─ Cleanup temp files
    │
    ▼
before_reset (handler.js:1095)
    └─ savePendingLearnings() → fallback save
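
The degradation chain in the diagram (LLM → keyword → write) can be sketched as below. Function names mirror the diagram, but the bodies and the synchronous shape are illustrative stand-ins, not the real handler.js implementation.

```javascript
// Sketch of processPendingItem's fallback chain (assumed, simplified).
function processPendingItem(item, provider, deps) {
  if (provider) {
    try {
      const judge = deps.callLLMJudge(item);        // judgment 1: worth saving?
      if (!judge.worth) return { action: 'skip' };
      return { action: deps.decideCreateOrAppend(item) }; // judgment 2
    } catch (err) {
      // LLM failed: fall through to the keyword fallback below.
    }
  }
  // Keyword fallback: a crude title match decides append vs create; either
  // way the item is written, so data is never lost.
  const task = (item.task || '').toLowerCase();
  const similar = deps.existingTitles.some(
    (t) => task && t.toLowerCase().includes(task)
  );
  return { action: similar ? 'append' : 'create' };
}
```

The key property is that every path ends in a write decision; only an explicit "not worth saving" verdict skips the item.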

🛡️ Safety & Security

| Mechanism | Implementation |
| --- | --- |
| Regex injection prevention | `escape_grep()` sed escaping |
| Path traversal filtering | `replace(/[^a-zA-Z0-9_-]/g, '')` |
| Concurrent write lock | `.write_lock/` directory atomic lock |
| Tag dedup on promotion | record.sh dedup + promote.sh threshold |
| Graceful degradation | LLM → keyword → write fallback chain |

📊 Scripts Reference

| Script | Lines | Function |
| --- | --- | --- |
| handler.js | 1,110 | Core hook handler (4 events, LLM integration) |
| search.sh | 539 | Search experiences (keyword / preview / all) |
| record.sh | 476 | Write new experience (with dedup & lock) |
| demote.sh | 371 | Demote HOT tags to WARM |
| compact.sh | 348 | Compress layers when exceeding limits |
| clean.sh | 247 | Remove test/invalid entries |
| vectors.sh | 232 | Vector search via LM Studio embeddings |
| promote.sh | 185 | Promote WARM tags to HOT (≥3x/7 days) |
| import.sh | 172 | Import experiences from other sources |
| archive.sh | 167 | Archive old experiences to COLD |
| install.sh | 161 | Install skill to workspace |
| stats.sh | 153 | Show statistics dashboard |
| auto-review.sh | 136 | Auto-review pending drafts |
| append-record.sh | 100 | Append solution to existing experience |
| summarize-drafts.sh | 80 | Summarize and process drafts |
| update-record.sh | 77 | Update existing experience |
| common.sh | 41 | Shared utility functions |
| uninstall.sh | 37 | Remove skill |
| **Total** | **4,632** | |

✅ Verified Testing

Models Tested

| Model | Provider | Forward Test | Reverse Test | Status |
| --- | --- | --- | --- | --- |
| deepseek-v4 | deepseek (api-key) | ✅ Pass | ✅ Pass (144/150) | Verified |
| glm-5.1 | zai (api-key) | ✅ Pass | ✅ Pass (146/150) | Verified |
| MiniMax-M2.7-highspeed | minimax-portal (OAuth) | ✅ Pass | ✅ Pass (146/150) | Verified |

Test Coverage

| Test | Result |
| --- | --- |
| agent:bootstrap → systemPrompt injection | ✅ 12→952 chars |
| before_compaction → pending save | ✅ task/tools/errors extracted |
| after_compaction → LLM dual-judge → write | ✅ EXP auto-created |
| before_reset → fallback save | ✅ pending saved |
| record.sh write + search | ✅ Write & find |
| auto-review.sh process draft | ✅ Draft → archive |
| compact.sh dry-run | ✅ All layers healthy |
| promote.sh tag promotion | ✅ Threshold check |
| stats.sh dashboard | ✅ Full panel |
| 5 safety mechanisms | ✅ All present |

🚀 Installation

# Clone
git clone https://gitee.com/rocky_tian/skill.git
cd skill/rocky-know-how

# Install
bash scripts/install.sh

# Verify
bash scripts/stats.sh

📈 Key Advantages

  1. Zero-config auto-learning — Hook events automatically capture experience, no manual trigger needed
  2. LLM dual-judgment — First judges if worth saving, then decides create vs append
  3. Triple degradation — LLM → keyword → write, never loses data
  4. Multi-provider — Supports OpenAI, Anthropic, OAuth providers (zai/stepfun/minimax)
  5. Production proven — 45+ real experiences, 2.6MB data, stable operation
  6. Safety first — 5 security mechanisms (regex injection, path traversal, write lock, etc.)

Version: 3.0.0 | Tested: 2026-04-24 | License: MIT
