Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Scientific Internet Access

v1.7.1

AI-powered Scientific Internet Access engine for OpenClaw. An AI-driven censorship-circumvention assistant: your personal proxy-node butler. It automatically scrapes free nodes, tests their speed, filters them, and walks beginners through configuration step by step. Official site: https://shadowrocket.ai Recommended for use with the Claude model; instruction fol...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description (auto-fetch free proxy nodes, test, filter, and guide configuration) line up with the included scripts: scraper.py (fetches public subscription URLs), tester.py (runs TCP tests), formatter.py (formats output), and handler.py (orchestrates the others). Minor mismatch: the README claims scheduled scraping and health checks (every 2 hours / every 30 minutes), but no scheduler or background service is present in the repository; those behaviors would require external scheduling. The scripts also read OPENCLAW_WORKSPACE (an environment variable) with a default path; that variable was not declared in the registry metadata but is harmless for functionality.
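As a rough illustration, the env-var handling described above might look like the sketch below. This is a reconstruction, not the skill's actual code; the default path is assumed from the workspace files named elsewhere in this report.

```python
import os
from pathlib import Path

# Assumed default; the scan report only says the scripts use a
# default path when OPENCLAW_WORKSPACE is unset.
DEFAULT_WORKSPACE = Path.home() / ".openclaw" / "workspace"

def resolve_workspace() -> Path:
    """Return the workspace directory, honoring the env override."""
    return Path(os.environ.get("OPENCLAW_WORKSPACE", DEFAULT_WORKSPACE))

# Files the skill writes into that workspace, per the scan findings:
RAW_NODES = resolve_workspace() / "nodes_raw.json"
TESTED_NODES = resolve_workspace() / "nodes_tested.json"
```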
Instruction Scope
SKILL.md explicitly forces the agent to run the bundled handler.py (from ~/.openclaw/skills/...) and to reply only with the script output. handler.py runs scraper.py and tester.py which: (a) fetch many public URLs (GitHub raw content) and (b) attempt TCP connections to up to dozens of arbitrary external servers/ports to measure latency. This is necessary for the stated purpose but gives the skill effective control to perform network IO and active probing. The instructions also suppress troubleshooting prompts and tightly constrain agent behavior, which increases the chance the agent will silently perform network scans without further user confirmation.
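The active probing described above amounts to timing TCP handshakes against scraped hosts. A minimal sketch of that technique (a reconstruction for review purposes, not the actual tester.py code):

```python
import socket
import time

def tcp_latency(host: str, port: int, timeout: float = 3.0):
    """Measure the time to complete a TCP handshake with host:port.

    Returns latency in milliseconds, or None if the connection
    fails or times out. The real script's signature and error
    handling may differ.
    """
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None
```

Run against dozens of unrelated hosts in a loop, calls like this are functionally indistinguishable from a port scan to network monitoring tools, which is why this behavior is flagged even though it serves the stated purpose.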
Install Mechanism
There is no formal install spec in the registry (the package is 'instruction + code', already bundled). The README suggests git clone or clawhub install, but platform install details are absent. No remote binary downloads or obscure URLs are used by the tool itself; the code is included and readable, so installation risk is primarily the usual one: running the provided Python scripts.
Credentials
The skill requires no API keys or secrets. It does read OPENCLAW_WORKSPACE and optionally MAX_TEST_NODES from the environment (both with sensible defaults); these env vars were not listed in registry metadata but are not credentials. However, the skill writes scraped node entries (which can include passwords, UUIDs, etc.) to ~/.openclaw/workspace/nodes_raw.json and nodes_tested.json; those files contain potentially sensitive connection strings and should be treated as secrets. Requesting no credentials is proportionate, but the storage of raw node credentials on disk is a privacy/security consideration.
Persistence & Privilege
always:false (no forced inclusion). The skill does not request system-wide config changes and only writes to its own workspace files. It does not modify other skills or agent settings. The SKILL.md offers a 'subscribe' prompt but no automated scheduler is implemented in the code, so there is no built-in persistent background agent activity beyond what the user runs.
What to consider before installing
- The scripts will actively fetch many public subscription URLs and then attempt TCP connections to dozens of scraped hosts/ports to measure latency. This is required for the feature but can look like port scanning to network monitors and may trigger alerts or be disallowed by your environment.
- Scraped node entries often contain raw connection strings (passwords, UUIDs, server addresses). The skill stores these in ~/.openclaw/workspace/nodes_raw.json and nodes_tested.json — treat those files as sensitive and remove them if you stop using the skill.
- No credentials are requested by the skill, but verify the code yourself (it is small and included). If you are not comfortable reviewing the Python, run the skill first in an isolated/sandboxed environment (VM/container) with restricted outbound network egress.
- If you proceed, consider setting MAX_TEST_NODES to a small number (via env var) to limit how many hosts are probed, and/or change OPENCLAW_WORKSPACE to a sandbox path. Inspect the SOURCES list in scraper.py to confirm you accept the GitHub sources it will fetch from.
- Legal/regulatory note: bypassing network censorship or using third-party proxy nodes may be illegal or against local policy in some jurisdictions — check applicable laws and your organization's policies before using the skill.

If you want, I can (1) walk through the code line by line and flag any specific lines of concern, (2) suggest a minimal set of env vars to limit scanning, or (3) produce a sanitized version that omits writing raw credentials to disk.
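The containment advice above can be sketched as an environment override before launching the handler. The variable names come from this report; the handler path is an example, not the verified install location.

```python
import os

# Hypothetical hardening sketch: cap the number of probed hosts and
# redirect node dumps to a throwaway workspace before invoking the
# bundled handler.
env = dict(
    os.environ,
    MAX_TEST_NODES="5",                          # probe at most 5 hosts
    OPENCLAW_WORKSPACE="/tmp/openclaw-sandbox",  # keep dumps out of ~/.openclaw
)

# Example launch (path is illustrative):
# import subprocess
# subprocess.run(["python", "handler.py"], env=env, check=True)
```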

Like a lobster shell, security has layers — review code before you run it.

latest: vk97ebrnqkg78y83nq59jh62d9982223g

