Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Docs Feeder

v1.0.0

Automatically fetches comprehensive project documentation from built-in registries or URLs to assist AI agents in debugging and learning.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for zerone0x/docs-feeder.

Prompt Preview: Install & Setup
Install the skill "Docs Feeder" (zerone0x/docs-feeder) from ClawHub.
Skill page: https://clawhub.ai/zerone0x/docs-feeder
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install docs-feeder

ClawHub CLI


npx clawhub@latest install docs-feeder
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description (fetch project docs) align with the code's main behavior (fetching /llms*.txt, with a fallback to the GitHub README). However, the bundled registry includes a 'local' entry (/usr/lib/node_modules/clawdbot/docs), and the code supports reading arbitrary local paths defined in the registry. Reading local files is not obviously necessary for a general 'docs feeder' unless the user explicitly configures local docs; bundling such a path in the registry is unexpected and broadens the skill's capability.
Instruction Scope
SKILL.md documents registry/local entries and usage, but runtime instructions and the code will: fetch any URL you pass (or guess patterns), follow redirects, and read local filesystem paths listed in docs-registry.json. That means an agent invoking this skill can request internal URLs (e.g., 169.254.169.254 or intranet hosts) or cause the skill to read local files if a registry entry points at them — both are outside the narrow notion of 'public documentation fetching' and can expose sensitive data.
Install Mechanism
This is an instruction-only skill with bundled scripts (no install spec). Nothing is downloaded at install time and no external installers are invoked. The risk surface comes from the scripts themselves, not from install-time downloads.
Credentials
The skill requests no environment variables or credentials (proportionate). However, it can access local paths (via registry.local) and arbitrary network endpoints provided by the user/agent — this is an implicit capability that doesn't require credentials but may access sensitive system metadata or internal services.
Persistence & Privilege
The skill's always flag is false, and the skill does not modify other skills or system-wide agent settings. It writes only when the user passes --save (to /tmp) or if the registry contains local paths; otherwise it outputs results to stdout. Autonomous invocation is allowed by default (the platform default), so factor this in alongside the other concerns.
What to consider before installing
This skill will fetch documentation from arbitrary URLs and can read local file paths if they appear in docs-registry.json. Before installing or invoking it:

  1. Inspect and remove any 'local' entries in docs-registry.json (for example the included /usr/lib/... path) so the skill cannot read host files you don't expect.
  2. Treat it as untrusted network code: avoid letting it run autonomously in environments with access to internal networks or cloud instance metadata (it will try any URL you pass or guess common patterns).
  3. If you must use it, require explicit user invocation only and run it in a sandboxed agent executor with restricted network access.
  4. Do not pass internal IPs/hostnames or sensitive internal URLs as arguments.

If you want to be safer, prefer fetching docs manually (or whitelist specific domains) and avoid the automatic URL-guessing behavior.
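The "whitelist specific domains" advice above can be sketched as a small pre-fetch guard. This is a hypothetical helper, not part of the skill; the ALLOWED_HOSTS entries are illustrative:

```javascript
// Hypothetical guard: reject any URL that is not HTTPS on an
// explicitly allowlisted host, before any fetch is attempted.
const ALLOWED_HOSTS = new Set(["docs.anthropic.com", "nextjs.org"]);

function isAllowed(rawUrl) {
  let url;
  try {
    url = new URL(rawUrl);
  } catch {
    return false; // not a parseable URL at all
  }
  if (url.protocol !== "https:") return false;        // require TLS
  if (!ALLOWED_HOSTS.has(url.hostname)) return false; // strict allowlist
  return true;
}
```

A strict allowlist like this also blocks cloud metadata endpoints (e.g. http://169.254.169.254), since they are neither HTTPS nor on the list.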

Like a lobster shell, security has layers — review code before you run it.

latest: vk97csxrsgr27sq6nkv5rh5rmy181evkq
702 downloads
0 stars
1 version
Updated 18h ago
v1.0.0
MIT-0

Docs Feeder

Auto-fetch project documentation and feed it to your AI agent for debugging and learning.

Triggers

  • docs feed <project>
  • fetch docs <URL>

How It Works

  1. Registry Lookup — 50+ built-in projects (React, Next.js, Hono, Prisma, Anthropic, etc.)
  2. Fetch Priority:
    • /llms-full.txt → Full LLM-friendly docs
    • /llms.txt → Compact version
    • GitHub README → Fallback
  3. Smart Discovery — Unknown projects try common patterns (docs.xxx.com, xxx.dev)
  4. Size Warning — Alerts when docs exceed 500KB
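The fetch priority above can be sketched as follows. This is a hypothetical helper, not the actual fetch-docs.js code; the raw.githubusercontent.com URL shape is an assumption about how the README fallback might be built:

```javascript
// Build candidate URLs in the priority order described above:
// /llms-full.txt, then /llms.txt, then the GitHub README fallback.
function candidateUrls(baseUrl, githubRepo) {
  const base = baseUrl.replace(/\/$/, ""); // drop a trailing slash
  const candidates = [
    `${base}/llms-full.txt`, // full LLM-friendly docs
    `${base}/llms.txt`,      // compact version
  ];
  if (githubRepo) {
    // README fallback via GitHub's raw-content host (assumed URL shape)
    candidates.push(
      `https://raw.githubusercontent.com/${githubRepo}/HEAD/README.md`
    );
  }
  return candidates;
}
```

A fetcher would then try each candidate in order and stop at the first response that succeeds.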

Usage

# By project name (auto-lookup)
node fetch-docs.js nextjs

# By URL (direct fetch)
node fetch-docs.js https://docs.anthropic.com

# Raw content only (no metadata header)
node fetch-docs.js react --raw

# Save to file
node fetch-docs.js prisma --save

# List all supported projects
node fetch-docs.js --list

Built-in Registry

50+ projects including: React, Next.js, Vue, Svelte, Astro, Hono, Express, Fastify, NestJS, Prisma, Drizzle, tRPC, Zod, Tailwind CSS, shadcn/ui, TypeScript, Vite, Bun, Deno, Playwright, Vitest, Supabase, Stripe, Clerk, Anthropic, OpenAI, LangChain, Docker, Kubernetes, Terraform, Rust, Go, Python, FastAPI, Django, and more.

Edit docs-registry.json to add your own projects.

Registry Format

{
  "myproject": {
    "url": "https://myproject.dev",
    "llms": "/llms-full.txt",
    "github": "org/repo",
    "local": "/path/to/local/docs"
  }
}
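Assuming the format above, a registry entry might be resolved into fetch targets like this. This is a sketch only; the function and field handling are illustrative, and the 'local' key is deliberately left out (see the security notes above):

```javascript
// Illustrative in-memory registry matching the documented format.
const registry = {
  myproject: {
    url: "https://myproject.dev",
    llms: "/llms-full.txt",
    github: "org/repo",
  },
};

// Resolve a project name into the URLs a fetcher would try.
function resolveEntry(name) {
  const entry = registry[name];
  if (!entry) return null; // unknown project: caller falls back to discovery
  return {
    llmsUrl: entry.url + entry.llms,
    readmeUrl: `https://raw.githubusercontent.com/${entry.github}/HEAD/README.md`,
  };
}
```

In the real skill the registry would be loaded from docs-registry.json instead of being defined inline.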

Workflow

Fetch docs, then describe your problem:

→ node fetch-docs.js nextjs
→ [docs loaded into context]

"I'm getting a hydration mismatch error with App Router..."
→ [AI gives solution based on complete documentation]

Why This Works

Most modern doc sites ship /llms.txt or /llms-full.txt — a single file with the entire knowledge base formatted for LLMs. Instead of searching + reading + understanding docs manually, dump the whole thing into context and let the AI cross-reference.

Requirements

  • Node.js (no external dependencies)
