Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Browse website - Crawls sites automatically and mounts pages as markdown files you can grep, diff, cat, and explore with standard Unix commands — over SSH or HTTP

v1.0.0


by Bigmind (@bigmindai)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan

- VirusTotal: Suspicious
- OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill's described purpose (turn a website into a filesystem) matches the runtime approach: it delegates crawling/mounting to openobj.com and exposes filesystem-like commands. However, the SKILL.md relies entirely on a remote service rather than providing local mounting code, which may be surprising to users who expect a local implementation. The remote dependency is reasonable for the described feature but should be explicit in registry metadata.
Instruction Scope
The instructions tell the agent to run ssh {domain}@openobj.com "{command}" or POST JSON containing the site and command to https://openobj.com/exec. That means any substituted {command} or {domain} (including data or context the agent inserts) will be transmitted to the third party. This creates a realistic risk of exfiltrating sensitive input, internal hostnames, or secrets. The doc also instructs falling back to the HTTP API if SSH is blocked, again sending plaintext JSON to a remote endpoint. The skill does not limit or sanitize what the agent may include in commands, and it references a required network permission inside SKILL.md that is not expressed in the registry metadata.
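The transmission path described above can be made concrete with a short sketch (the site name and the "secret" are hypothetical; only the `https://openobj.com/exec` payload shape comes from the SKILL.md): whatever the agent substitutes into {command} appears verbatim in the JSON body sent to the third party.

```shell
# Hypothetical: the agent substitutes local context containing a secret
# into {command}. The secret travels verbatim in the outbound JSON body.
cmd="grep -r 'token_abc123' /site"   # imagine a leaked credential here

# This is the exact body a curl fallback would POST to openobj.com/exec.
jq -n --arg site "internal.example.com" --arg command "$cmd" \
  '{site: $site, command: $command}'
```

Nothing in the skill filters or rewrites `$cmd` before it is embedded, which is why the review flags this as a realistic exfiltration channel.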
Install Mechanism
No install step or code is provided; the skill is instruction-only so nothing is written to disk by an install procedure. This is low technical risk but increases reliance on the remote service described in the instructions.
Credentials
The registry declares no environment variables or credentials, which is consistent with an instruction-only skill. However, the SKILL.md repeatedly requires 'full_network' access to openobj.com; that required permission is only present in the instructions (comments) and not in the registry metadata — an inconsistency. Also, although no env vars are requested, the runtime pattern (sending arbitrary commands/site names) can still leak secrets if the agent includes them in requests.
Persistence & Privilege
The skill does not request always:true, does not include install hooks or config file changes, and is user-invocable. It does not request permanent platform presence or elevated privileges in the manifest.
Scan Findings in Context
[no-code-files-scan] expected: The static regex scanner found no code files to analyze. This is expected because the skill is instruction-only and all behavior is expressed in SKILL.md; absence of findings is not evidence of safety.
What to consider before installing
This skill simply instructs the agent to send your requested domain name and any command text to a third party (openobj.com) over SSH or HTTPS. Before installing or using it, consider:

1. Do you trust openobj.com to receive raw commands and site names? Anything the agent places into {command} or {domain} (including parts of your prompt or local context) can be transmitted to that host.
2. Do not use it with internal/private domains or include secrets, API keys, or file contents in commands — they could be exfiltrated.
3. The SKILL.md requests 'full_network' access but the registry metadata does not declare this — ensure your agent's policy requires explicit user consent for network access.
4. Crawling triggers credit costs per page; follow the doc's advice to prefer cached reads to avoid unexpected charges.

If you need stronger safety: request the skill's source, run it in an isolated environment, or prefer a skill that performs crawling locally or on trusted, auditable infrastructure.


latest: vk97dvnz0m325v6jy6x68f4y1dx8434tk


SKILL.md

Browse Website

Use OpenObj to access any website's content. Pages are automatically crawled, converted to markdown, and mounted as a virtual filesystem you can explore with standard Unix commands.

Network Access

All commands in this skill require network access to openobj.com.

Use required_permissions: ["full_network"] for all SSH and curl commands (they need to reach openobj.com).

How to Use

Preferred: SSH exec mode (works in terminals with network access):

# required_permissions: ["full_network"]
ssh {domain}@openobj.com "{command}"

Fallback: HTTP API via curl (works in sandboxed environments):

# required_permissions: ["full_network"]
curl -s -X POST https://openobj.com/exec \
  -H 'Content-Type: application/json' \
  -d '{"site": "{domain}", "command": "{command}"}'

Important: If SSH fails due to sandbox restrictions, fall back to the curl HTTP API. Both methods return the same results.
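The SSH-then-HTTP fallback could be wrapped in a small helper — a sketch, not part of the skill: `openobj_exec` is a hypothetical name, the ssh options are assumptions, and only the two endpoints come from this document.

```shell
# Sketch of the documented fallback: try SSH first; if the connection
# itself fails (ssh exits 255), retry via the HTTP API.
openobj_exec() {
  site=$1; cmd=$2
  out=$(ssh -o BatchMode=yes -o ConnectTimeout=5 \
    "${site}@openobj.com" "$cmd" 2>/dev/null)
  rc=$?
  if [ "$rc" -ne 255 ]; then      # 255 means the SSH transport failed
    printf '%s\n' "$out"
    return "$rc"
  fi
  # Build the JSON body with jq so quotes inside the command are escaped.
  payload=$(jq -n --arg site "$site" --arg command "$cmd" \
    '{site: $site, command: $command}')
  curl -s -X POST https://openobj.com/exec \
    -H 'Content-Type: application/json' \
    -d "$payload"
}

# openobj_exec docs.stripe.com "find /site -type f"
```

Building the body with `jq -n --arg` instead of string interpolation keeps commands containing quotes from producing malformed JSON.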

Available Commands

| Command | Description |
| --- | --- |
| `find /site -type f` | List all indexed pages |
| `cat {path}` | Read a page's markdown content |
| `grep -rl '{term}' /site` | Find pages containing a term |
| `grep -r '{term}' /site` | Search with matching lines |
| `ls {path}` | List files in a directory |
| `head -n 20 {path}` | Read the first 20 lines of a file |
| `wc -l {path}` | Count lines in a file |
| `git log --oneline` | View crawl history |
| `git diff HEAD~1` | See what changed in the last crawl |
| `git show {hash}` | View a specific crawl's changes |
| `openobj rediscover` | Force a fresh re-crawl |

Examples

Via SSH

# required_permissions: ["full_network"]
ssh docs.stripe.com@openobj.com "find /site -type f"
ssh docs.stripe.com@openobj.com "grep -rl 'webhook' /site"
ssh docs.stripe.com@openobj.com "cat /site/docs/webhooks.md"

# Change tracking
ssh docs.stripe.com@openobj.com "cd /site && git log --oneline"
ssh docs.stripe.com@openobj.com "cd /site && git diff HEAD~1"

# Force re-crawl and see what changed
ssh docs.stripe.com@openobj.com "openobj rediscover && cd /site && git diff HEAD~1"

Via HTTP API (curl)

# required_permissions: ["full_network"]
# List all pages
curl -s -X POST https://openobj.com/exec \
  -H 'Content-Type: application/json' \
  -d '{"site": "docs.stripe.com", "command": "find /site -type f"}'

# Search for a term
curl -s -X POST https://openobj.com/exec \
  -H 'Content-Type: application/json' \
  -d '{"site": "docs.stripe.com", "command": "grep -rl webhook /site"}'

# Read a page
curl -s -X POST https://openobj.com/exec \
  -H 'Content-Type: application/json' \
  -d '{"site": "docs.stripe.com", "command": "cat /site/docs/webhooks.md"}'

Workflow

  1. Discover — Run find /site -type f to see all available pages
  2. Search — Use grep -rl '{keyword}' /site to find relevant pages
  3. Read — Use cat {path} to read the full content of a page
  4. Refine — Use grep -r '{term}' {path} to search within specific files
  5. Track changes — Use git log and git diff to see what changed across crawls
  6. Re-crawl — Use openobj rediscover to force a fresh crawl and update pages

Behavior

  • First access to a domain triggers an automatic crawl (may take 10-30 seconds)
  • Subsequent accesses use the cached version (refreshed every 24 hours)
  • Use openobj rediscover to force a fresh crawl before the 24h window
  • Pages are converted from HTML to markdown automatically
  • Up to 200 pages per site are indexed
  • The virtual filesystem mirrors the site's URL structure
  • Each crawl is tracked as a git commit for change diffing
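Under the URL-mirroring behavior above, a page URL presumably maps to a virtual path like this — the exact mapping (strip scheme and host, prefix `/site`, append `.md`) is an assumption, not documented here:

```shell
# Assumed mapping: strip the scheme and host, prefix /site, append .md.
url="https://docs.stripe.com/docs/webhooks"
path="/site$(printf '%s' "$url" | sed -E 's#^https?://[^/]+##').md"
echo "$path"   # → /site/docs/webhooks.md
```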

Credits

  • Crawling a new site or running openobj rediscover costs 1 credit per page
  • Reading cached content (cat, grep, find, ls, git) is always free
  • Free accounts get 100 one-time credits
  • If you get a credit limit error, do not retry — inform the user:
    • To check credits: ssh {any-domain}@openobj.com "openobj credits"
    • To upgrade: tell the user to run ssh auth@openobj.com in their terminal
  • Prefer reading cached sites over re-crawling to conserve credits
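The pricing above implies a free account cannot fully crawl a site at the 200-page cap — a quick check under the stated numbers (1 credit per page, 100 free credits):

```shell
# A full crawl at the 200-page cap against the 100 free credits.
pages=200; free_credits=100
echo $(( pages - free_credits ))   # → 100 (credits short)
```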

Response Format

The HTTP API returns JSON:

{
  "stdout": "...",
  "stderr": "...",
  "exitCode": 0
}

Use the stdout field for the command output. A non-zero exitCode indicates an error.
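In practice the response can be unpacked with `jq`; the response value below is a fabricated sample matching the documented shape, not real API output.

```shell
# Sample response matching the documented shape (values are illustrative).
response='{"stdout": "42 /site/index.md", "stderr": "", "exitCode": 0}'

# Print stdout on success; surface stderr on a non-zero exitCode.
if [ "$(printf '%s' "$response" | jq -r '.exitCode')" -eq 0 ]; then
  printf '%s\n' "$(printf '%s' "$response" | jq -r '.stdout')"
else
  printf '%s\n' "$(printf '%s' "$response" | jq -r '.stderr')" >&2
fi
# → 42 /site/index.md
```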
