Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

X Deep Miner

v1.2.0

An X (Twitter) deep-mining and archiving Skill. Every hour it automatically scans high-engagement tweets (bookmarks > 1000) in the AI, US stocks, and lifestyle categories, translates them into professional Chinese articles, and outputs them in Obsidian format. Suited to building a personal knowledge base or a daily intelligence briefing.

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Suspicious
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name and description match the included script: scan X posts, translate them, and output Obsidian notes. The implementation, however, is largely scaffolding: browser-based scraping is described but not programmatically implemented (search_x_tweets returns empty results), and LLM translation is a TODO. There is an expectation mismatch: the skill claims automated hourly scans, but without an API or implemented browser automation it requires manual steps or an external browser tool. The default WORKSPACE_DIR is also hard-coded to /Users/scott/.openclaw/workspace (overridable via env), which is odd but not functionally necessary.
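The hard-coded default described above typically follows a pattern like this sketch; the variable and function names here are assumptions for illustration, not the skill's actual code:

```python
import os

# Hypothetical reconstruction of the pattern flagged in the review: a
# hard-coded, user-specific default workspace that an env var can override.
DEFAULT_WORKSPACE = "/Users/scott/.openclaw/workspace"

def resolve_workspace_dir() -> str:
    """Return WORKSPACE_DIR from the environment, else the hard-coded default."""
    return os.environ.get("WORKSPACE_DIR", DEFAULT_WORKSPACE)
```

Because the fallback points at another user's home directory, running the script without setting WORKSPACE_DIR would try to create files there (or fail if the path is not writable).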
Instruction Scope
SKILL.md and the script instruct the agent/user to run the Python script, set a cron job, and optionally use an OpenClaw managed browser or Nitter to fetch pages. The script only reads/writes files under the workspace/obsidian-output directories and a config JSON; it does not attempt to read unrelated system files or environment secrets. The instructions are somewhat vague about actual browser automation and where network fetches would occur, giving the agent broad operational discretion if browser automation were added.
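If you want to verify the claim that writes stay under the workspace and obsidian-output directories, a simple path-containment guard is useful when auditing or extending the script. This helper is illustrative, not part of the skill:

```python
from pathlib import Path

def is_within(base: Path, target: Path) -> bool:
    """True if target resolves to a path inside base (symlinks followed)."""
    try:
        target.resolve().relative_to(base.resolve())
        return True
    except ValueError:
        return False
```

Wrapping the script's file writes with a check like this would surface any attempt to touch paths outside the directories the review describes.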
Install Mechanism
No install spec; the skill is instruction-plus-a-script only. Nothing is downloaded or written by an installer. Risk is limited to running the included Python script locally.
Credentials
The skill declares no required env vars or credentials, which aligns with its current (incomplete) implementation. References note possible future requirements (X API key, LLM API) but these are not required now; be aware that enabling those features later would require adding secrets. The default WORKSPACE_DIR value hardcodes a specific user path which may be unexpected; the script will create and write files there unless you set WORKSPACE_DIR to a different location.
Persistence & Privilege
always:false (default). The skill writes notes and a config JSON under the workspace directory but does not modify other skills or system-wide agent settings. It does not request permanent or elevated privileges.
Assessment
This skill is mostly a scaffold that matches its description but is not fully automated: it prints instructions for using a managed browser and lists LLM/X API integration as TODOs. Before installing or running:

1. Inspect scripts/x_deep_miner.py yourself (you already have it); running it will create files under WORKSPACE_DIR (default /Users/scott/.openclaw/workspace), so set WORKSPACE_DIR to a safe directory you control.
2. Use the 'test' command first to see behavior without saving anything.
3. If you enable browser automation, prefer an isolated browser/profile; automating your logged-in profile can expose session data and cookies.
4. Be cautious if you later supply X API keys or LLM API credentials, and review any added code that uses them.
5. If you need fully automated scraping at scale, expect to provide credentials or to extend and complete the browser automation code; the current package alone will not perform automatic network scraping.

If any of this is surprising (automatic scraping, file writes), do not run it until you adjust paths and confirm the desired behavior.
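The pre-run precautions above can be sketched as follows; the script path and the 'test' subcommand are taken from the review, and the invocation is left commented out for you to run only after inspecting the code:

```python
import os
import subprocess
import tempfile

# Create a throwaway workspace you control so the script cannot write into
# the hard-coded /Users/scott/... default.
workspace = tempfile.mkdtemp(prefix="x-deep-miner-")
env = {**os.environ, "WORKSPACE_DIR": workspace}

# After inspecting scripts/x_deep_miner.py, try the dry-run 'test' command:
# subprocess.run(["python3", "scripts/x_deep_miner.py", "test"], env=env, check=True)
```

Deleting the temporary directory afterwards removes everything the script wrote, which makes a first trial run easy to undo.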

Like a lobster shell, security has layers — review code before you run it.

latest: vk97bs3pq4k3ar3ps0e1y41021d818rvd

