Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
商业捡漏预警虾
v1.0.0 · 商业捡漏预警虾 ("Business Bargain Alert Shrimp") — a fast hunter of business information asymmetries. Monitors multiple platforms (闲鱼, 链家, 阿里拍卖, government procurement sites, etc.), surfaces low-priced real estate, quality second-hand goods, and tender opportunities the moment they appear, and pushes alerts to Feishu. Use this Skill when: (1) the user asks to monitor a platform for low-priced goods, real estate, or tender information; (2) the user asks to set a price-alert threshold (e.g. "notify me when something is 30% below market price"); (3...
⭐ 0 · 89 · 0 current · 0 all-time
by Ricky@tujinsama
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Verdict: Suspicious (medium confidence)
Purpose & Capability
The name and description claim multi-platform monitoring and Feishu alerts, and the included files (monitor.py plus platform/risk/valuation docs) implement a demo scanner with filtering and valuation logic. However, the SKILL declares no required credentials or environment variables, even though real operation would typically require platform login state (e.g., 闲鱼 requires login) and a Feishu webhook or credentials for notifications. SKILL.md also repeatedly states that Feishu push is the default, yet the package declares no Feishu token or config, so the claimed end-to-end behavior is not fully realized by the declared requirements.
Instruction Scope
Runtime instructions are specific: build a rule JSON, run the provided monitor.py against a rules file or a single rule, read the included references for valuation/risk, and optionally schedule with cron. The instructions reference only files bundled with the skill or a per-skill data directory under the user's home; they do not direct wide-ranging system access (no attempt to read shell history, SSH keys, or unrelated config). They do recommend replacing the demo collectors with real HTTP/Selenium scrapers, which would require storing credentials and possibly cookies — that is an operational note rather than covert scope creep.
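The workflow described above (build a rule JSON, point monitor.py at it, schedule with cron) might look like the following sketch. The rule field names are assumptions for illustration, not taken from the (truncated) package:

```python
import json
from pathlib import Path

# Hypothetical rule schema -- these field names are assumptions, not taken
# from the truncated package listing.
rule = {
    "platform": "xianyu",        # which platform's collector to run
    "keyword": "iPhone 15",      # search term to monitor
    "max_price": 3000,           # absolute price ceiling for alerts
    "discount_threshold": 0.30,  # or: alert when 30% below estimated market value
}

# monitor.py is said to accept either a rules file or a single rule;
# here we write a one-rule file.
rules_path = Path("rules.json")
rules_path.write_text(json.dumps([rule], ensure_ascii=False, indent=2),
                      encoding="utf-8")
print(f"wrote {rules_path}")
```

A cron entry such as `*/15 * * * * cd /path/to/skill && python monitor.py rules.json` (invocation details assumed) would then poll every 15 minutes.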
Install Mechanism
No install spec (instruction-only plus a shipped script), so install risk is low. However, the SKILL and comments in monitor.py note that real deployment will require Selenium/HTTP scraping and presumably additional Python dependencies (requests, selenium, a browser driver), none of which are declared. Without an install spec, dependencies must be installed manually by the operator — deployment friction, but not directly a security red flag.
Credentials
The skill requests no environment variables or credentials, yet real usage logically requires platform login state (cookies or credentials) to scrape some sites, and a Feishu webhook or token to push notifications. The script does accept an optional DEAL_HUNTER_DATA env var for the data directory, but no Feishu token variable or other secret is declared. This mismatch (no declared credentials versus stated default notifications and platform logins) is a design inconsistency to resolve before running in production.
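The gap could be closed with explicit env-var lookups like the sketch below. DEAL_HUNTER_DATA is the one variable the shipped script actually reads; FEISHU_WEBHOOK_URL is a name invented here for illustration — the skill declares no such variable, which is exactly the inconsistency noted above:

```python
import os
from pathlib import Path

# DEAL_HUNTER_DATA is the one environment variable the shipped script reads;
# the path below is the documented default data directory.
data_dir = Path(os.environ.get(
    "DEAL_HUNTER_DATA",
    str(Path.home() / ".openclaw" / "workspace" / "deal-hunter-data"),
))

# FEISHU_WEBHOOK_URL is a hypothetical name; the package declares no secret
# for notifications, so surfacing its absence loudly beats silent failure.
webhook = os.environ.get("FEISHU_WEBHOOK_URL")
if not webhook:
    print("warning: FEISHU_WEBHOOK_URL not set; alerts cannot be delivered")

print(f"data dir resolved to: {data_dir}")
```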
Persistence & Privilege
always:false, and the skill does not request elevated system privileges. It writes data under ~/.openclaw/workspace/deal-hunter-data and maintains local seen_ids and alerts_log files, as expected for a monitoring skill. In the visible files it does not attempt to modify other skills' configs or system-wide settings.
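The seen_ids deduplication described above can be sketched roughly as follows (the file name matches the report; the on-disk format is an assumption, and a temp dir stands in for the real data directory):

```python
from pathlib import Path
import tempfile

# A temp dir stands in for ~/.openclaw/workspace/deal-hunter-data here.
data_dir = Path(tempfile.mkdtemp())
seen_path = data_dir / "seen_ids"

def load_seen() -> set:
    """Read previously alerted listing ids; assumed one-id-per-line format."""
    if seen_path.exists():
        return set(seen_path.read_text().split())
    return set()

def mark_seen(item_id: str) -> bool:
    """Return True if the listing is new; persist its id either way."""
    seen = load_seen()
    if item_id in seen:
        return False
    seen.add(item_id)
    seen_path.write_text("\n".join(sorted(seen)))
    return True

print(mark_seen("listing-001"))  # first sighting -> True
print(mark_seen("listing-001"))  # duplicate -> False
```

Keeping the dedup state on disk is what lets a cron-scheduled run avoid re-alerting on every poll.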
What to consider before installing
Before installing or running this skill, consider the following:
- Functional gaps: The included monitor.py is a demo that returns simulated results. The SKILL.md says it will push to Feishu by default, but no Feishu webhook/token is declared — confirm how notifications are actually delivered before assuming automatic pushes.
- Credentials & cookies: Real scraping of platforms like 闲鱼 and 链家 generally requires login state (cookies or username/password) or browser automation (Selenium). The skill does not declare or request these secrets, so you will need to provide and manage them yourself. Treat those credentials as sensitive and avoid storing them in plaintext or broadly accessible locations.
- Dependencies: The script's comments reference Selenium and browser automation, but there is no install step or dependency list. Installing drivers, browsers, and Python packages will be a manual step; do it in a controlled environment (container or VM) to limit blast radius.
- Data storage: The skill stores seen IDs and alerts under ~/.openclaw/workspace/deal-hunter-data. Confirm you’re comfortable with the skill writing files there and set appropriate file permissions if needed.
- Legal/ToS and rate limits: Scraping some platforms can violate terms of service or lead to IP blocking. The references include rate-limit and anti-bot strategies; follow them and consider using official APIs where available.
- Security posture: Because the demo indicates you must replace simulated fetchers with real scraping code, carefully review any custom scraping implementation for hidden network calls or exfiltration. If you want a deeper review, provide the full, non-truncated monitor.py (the provided file was partially truncated in the package listing) and any code you plan to add for real scraping or notification delivery.
- Recommended safe steps: run the skill in a sandbox or container, limit network access for the container to only the target platforms and notification endpoint, provide credentials via a secrets manager or environment variables with restricted scope, and test on non-production accounts first.
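If you do wire up Feishu delivery yourself, the Feishu custom-bot webhook accepts a plain JSON POST. A standard-library-only sketch follows; the webhook URL is something you must provision, since nothing in the package supplies it:

```python
import json
import urllib.request

def build_payload(text: str) -> dict:
    # Message body shape for a Feishu custom-bot webhook ("msg_type": "text").
    return {"msg_type": "text", "content": {"text": text}}

def push_feishu_alert(webhook_url: str, text: str, timeout: float = 10.0) -> int:
    """POST an alert to a Feishu custom-bot webhook; return the HTTP status."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(build_payload(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status
```

Per the recommendations above, keep the webhook URL in an env var or secrets manager rather than hard-coding it, and restrict the container's egress to the notification endpoint.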
Confidence note: medium — the assessment is limited because parts of monitor.py were truncated in the package listing, and the skill delegates important behavior (login, push notifications) to other skills or unimplemented code; seeing the full code that performs notification or real scraping would raise confidence and could change the verdict.

Like a lobster shell, security has layers — review code before you run it.
latest · vk977ysc1fppgjza11316kx3bhx84hxzq
