Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

YiHui CONTEXT MODE

v1.0.0

context-mode is an MCP server that saves 98% of your context window by sandboxing tool outputs. It routes large file reads, shell outputs, and web fetches th...

by 辉哥 (@1yihui)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for 1yihui/yihui-context-mode.

Prompt Preview: Install & Setup
Install the skill "YiHui CONTEXT MODE" (1yihui/yihui-context-mode) from ClawHub.
Skill page: https://clawhub.ai/1yihui/yihui-context-mode
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install yihui-context-mode

ClawHub CLI


npx clawhub@latest install yihui-context-mode
Security Scan

VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name, description and SKILL.md tools (ctx_execute, ctx_index, ctx_fetch_and_index, etc.) are coherent for an MCP context-management server. The OpenClaw integration command (registering an MCP with a package) is consistent with the claimed functionality.
Instruction Scope
SKILL.md stays on-topic (how to use context-mode tools, when to call them, terse output style). However, the runtime install instruction tells the agent to run 'openclaw mcp set ... npx -y context-mode' and to restart the gateway, which modifies agent configuration and executes remotely fetched code. The instructions do not request unrelated files or secrets, but they do direct privileged operations (registering an MCP, restarting the gateway).
Install Mechanism
There is no formal install spec in the registry; instead SKILL.md instructs using npx to fetch and run 'context-mode' from the npm registry at runtime and does not pin a version. Using npx executes upstream package code dynamically (moderate-to-high risk). No verification, checksum, or trusted release host is enforced.
Credentials
The skill declares no required environment variables, binaries, or credentials, and the instructions do not ask for secrets. Network fetches and indexing are part of the described functionality and justify network access, but no credentials are requested.
Persistence & Privilege
always:false and model invocation are normal. The skill's recommended install will register an MCP command and restart the OpenClaw gateway, which results in persistent configuration changes to the agent runtime — expected for an MCP server but a privileged action that will run code supplied by the npm package.
What to consider before installing
This skill appears to do what it says, but it directs the agent to download and execute an unpinned npm package (via npx) and to modify/restart your OpenClaw gateway. That means arbitrary remote code will run on the host if you follow the install step. Before installing: review the npm package and its GitHub source (verify the repo, maintainers, and recent commits), prefer a pinned version or checksum, consider running it in an isolated environment or container, and ensure you’re comfortable with the agent restarting and registering an MCP service on your system.
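If you decide to proceed, one mitigation for the unpinned-npx concern is to pin the npm package version in the MCP registration. A sketch, assuming the published npm version matches the v1.0.0 shown on this page (the `@1.0.0` tag is an assumption; confirm what actually exists on npm before relying on it):

```shell
# List the versions actually published to npm before pinning one.
npm view context-mode versions

# Register the MCP with a pinned version so npx does not silently
# pull a newer release on each start. "context-mode@1.0.0" is an
# assumed tag based on the version listed on this page.
openclaw mcp set context-mode '{"command":"npx","args":["-y","context-mode@1.0.0"]}'
openclaw gateway restart
```

Pinning does not remove the need to review the package source, but it ensures the code you audited is the code that runs.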

Like a lobster shell, security has layers — review code before you run it.

latest: vk97ff2fdx9qbgdrsrc01jf8yyn85q34s
40 downloads
0 stars
1 version
Updated 11h ago
v1.0.0
MIT-0

Context Mode

An MCP server that solves the context window problem in AI coding agents. It provides:

  1. Context Saving — sandbox tools keep raw data out of context window
  2. Session Continuity — SQLite + FTS5 for event tracking
  3. Think in Code — program analysis instead of reading files
  4. Output Compression — terse output format reducing tokens 65-75%
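The SQLite + FTS5 mechanism behind session continuity and BM25 search can be sketched with the `sqlite3` CLI. This is an illustration of how FTS5 full-text indexing and `bm25()` ranking work in general, not context-mode's actual schema; the `events` table and `body` column are hypothetical names:

```shell
# Hypothetical FTS5 + BM25 sketch; requires a sqlite3 build with FTS5
# enabled (the default in most distributions).
result=$(sqlite3 :memory: <<'SQL'
CREATE VIRTUAL TABLE events USING fts5(body);
INSERT INTO events(body) VALUES ('read src/index.ts, 142 lines');
INSERT INTO events(body) VALUES ('fetched docs page about fts5 indexing');
-- bm25() returns lower scores for better matches, so sort ascending.
SELECT body FROM events WHERE events MATCH 'fts5' ORDER BY bm25(events);
SQL
)
echo "$result"
```

Only the second row matches the `fts5` query; the agent receives the ranked hit rather than the full event log, which is where the token savings come from.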

Available Tools

| Tool | When to Use | Token Savings |
| --- | --- | --- |
| ctx_batch_execute | Run multiple commands + auto-search results | 90%+ vs raw exec |
| ctx_execute | Single script execution (JS/Python/Shell) | 90%+ vs raw exec |
| ctx_execute_file | Run code from a file, return only result | high |
| ctx_index | Index docs/knowledge into searchable FTS5 | |
| ctx_search | Search indexed content with BM25 | fast recall |
| ctx_fetch_and_index | Fetch URL + index into knowledge base | 90%+ vs raw web fetch |

Decision Rules

Use ctx_batch_execute instead of multiple exec/read calls when:

  • Analyzing multiple files at once
  • Counting/grepping across many files
  • Need command output + search results together

Use ctx_execute instead of reading files when:

  • User asks "how many lines/funcs/classes in X"
  • Need to compute something, not just read it

Use ctx_fetch_and_index instead of web_fetch when:

  • Researching a topic across multiple pages
  • Full raw content won't fit in context

Output Format

Terse. Drop filler, pleasantries, hedging.

❌ "So I ran a command to check the files and found that there are..."
✅ "Checked. 3 TypeScript files: src/index.ts (142 lines), src/cli.ts (89 lines)."

OpenClaw Integration

Install via MCP:

openclaw mcp set context-mode '{"command":"npx","args":["-y","context-mode"]}'
openclaw gateway restart
