do-it-gungun - Gungun Judgment

v1.0.0

An AI judgment skill that helps humans make life decisions: you just do it, and leave the judgment to Gungun.

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
The name and description (a decision helper) align with the included artifacts: an API server, a web UI, data files, and web-crawl scripts that gather city, salary, and case data. The requested capabilities and files are coherent with a data-driven decision assistant.
Instruction Scope
SKILL.md instructs running a local API server, opening local web UI files, and running data-collection (crawl_*) scripts. Those instructions stay within the decision-skill scope, but they direct the agent or operator to run code that reads local data files and performs network requests (web crawling). The instructions also reference explicit filesystem paths (e.g., /home/admin/.openclaw/workspace/skills/do-it), so the agent or operator will interact with local files and the network.
Install Mechanism
Registry metadata says 'instruction-only' with no install spec, but the package contains multiple executable code files (Python scripts, a Node/JS file, web assets). Installation therefore places runnable code on disk even though no explicit install steps are declared. There is no automated installer or external archive download (lower risk), but the presence of crawler and server scripts means the skill will perform network I/O if run.
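A quick way to verify that claim is to list the runnable files the package actually ships, despite the 'instruction-only' metadata. A minimal sketch, assuming a small set of code extensions (extend the set for your own threat model):

```python
from pathlib import Path

# Extensions treated as runnable code; an assumption, not a complete list.
CODE_EXTS = {".py", ".js", ".sh"}

def list_runnable(skill_dir):
    """Return sorted paths of code files found anywhere under skill_dir."""
    return sorted(p for p in Path(skill_dir).rglob("*") if p.suffix in CODE_EXTS)
```

Running this over the unpacked skill directory should surface api_server.py and the crawl_*.py scripts before anything is executed.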
Credentials
The skill does not declare or require any environment variables, credentials, or config paths. That is proportionate for a local, data-driven decision tool. However, the docs mention integrating the 'OpenClaw API' or deployed endpoints for production; if you deploy it, you will need to supply appropriate API keys. None are requested by the skill metadata today.
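If you do deploy it, a common pattern is to read the key from the environment rather than hardcoding it. A sketch; the variable name OPENCLAW_API_KEY is hypothetical, since the skill declares no credentials:

```python
import os

def load_api_key(env_var="OPENCLAW_API_KEY"):
    """Read a deployment API key from the environment; fail loudly if missing.

    OPENCLAW_API_KEY is a hypothetical name; the skill metadata declares none.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before deploying the API server")
    return key
```

Failing at startup when the key is absent is safer than falling back to an unauthenticated mode.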
Persistence & Privilege
The skill is not marked always:true and does not request elevated platform privileges. It includes scripts that store/use local JSON data (case history), which is normal for the described functionality. Nothing in the metadata suggests it modifies other skills or system-wide configurations.
Assessment
This project is internally coherent with its stated purpose, but it contains runnable code (api_server.py and multiple crawl_*.py scripts). Before running or deploying:
1. Inspect api_server.py and all crawl_*.py scripts for network endpoints, hardcoded credentials, or unexpected outbound calls.
2. Run them in a restricted or sandboxed environment (no production secrets) and review outgoing network traffic.
3. Be aware that the crawlers will access external sites; respect robots.txt and legal/terms-of-use limits.
4. If you plan to deploy publicly, supply and restrict any API keys and ensure privacy for any collected user case data.
5. If you cannot audit code yourself, avoid running the server or crawlers on sensitive hosts; prefer using only the static web UI files, or ask someone who can audit the code for a review.
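The inspection step above can be partly automated with a rough scan for outbound-network indicators. A heuristic sketch only; the patterns are assumptions, and a manual read of each script is still required since obfuscated calls will slip past:

```python
import re
from pathlib import Path

# Heuristic indicators of outbound network activity; deliberately not exhaustive.
NETWORK_PATTERNS = [
    r"https?://[^\s'\"]+",        # hardcoded URLs
    r"\brequests\.(get|post)\(",  # common Python HTTP client calls
    r"\burllib\.request\b",
    r"\bsocket\.",                # raw sockets
]

def find_network_hints(script_path):
    """Return (line_number, line) pairs that look like outbound network calls."""
    hits = []
    text = Path(script_path).read_text(errors="replace")
    for n, line in enumerate(text.splitlines(), 1):
        if any(re.search(p, line) for p in NETWORK_PATTERNS):
            hits.append((n, line.strip()))
    return hits
```

Flagged lines are starting points for review, not a verdict; an empty result does not mean the script is safe.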

Like a lobster shell, security has layers: review code before you run it.

latest: vk975w5mwb4xgtd43506fwvpdzs83jh74

