Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Agricultural Output Forecasting
v1.4.0 · Agricultural Product Output Forecasting Based on Big Data. Predicts crop yields and agricultural output using historical data, weather patterns, and market t...
⭐ 0 · 384 · 2 current · 2 all-time
by joe@andyxcg
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious · high confidence
Purpose & Capability
The skill's stated purpose (forecasting) aligns with the included forecasting code, but the registry metadata claims no required environment variables while the code requires billing credentials (SKILLPAY_API_KEY / SKILLPAY_SKILL_ID) to call an external billing API. README/other docs also mention different env var names (SKILL_BILLING_API_KEY / SKILL_ID). This mismatch between what is declared and what the code needs is incoherent and could cause accidental credential exposure.
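Before supplying any secrets, you can mechanically enumerate which environment variables the packaged scripts actually read and compare that against the registry's declared requirements. A minimal audit sketch (the scanning logic is generic; the variable names it would surface here, per the scan report, are SKILLPAY_API_KEY and SKILLPAY_SKILL_ID):

```python
import re
from pathlib import Path

# Matches common Python env-var access patterns:
# os.environ["NAME"], os.environ.get("NAME"), os.getenv("NAME")
ENV_RE = re.compile(
    r"""os\.(?:environ(?:\.get)?|getenv)\s*[\(\[]\s*['"]([A-Z0-9_]+)['"]"""
)

def env_vars_referenced(skill_dir: str) -> set[str]:
    """Collect every env var name referenced in the skill's .py files."""
    names: set[str] = set()
    for path in Path(skill_dir).rglob("*.py"):
        names |= set(ENV_RE.findall(path.read_text(errors="ignore")))
    return names
```

Any name this reports that the registry metadata does not declare is exactly the kind of mismatch flagged above.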
Instruction Scope
Runtime docs instruct running scripts that read/write files under ~/.openclaw/ (trial and subscription data) and call an external billing endpoint. The SECURITY/FAQ claims (e.g., 'No agricultural data is ever stored' and 'User ID hashed') contradict the code: TrialManager stores user_id as a JSON key (raw), and the SkillPay integration sends user_id/API key to skillpay.me. The SKILL.md also contains repeated promotional content and inconsistent trial counts (200 vs 10), indicating sloppy or misleading documentation.
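If you choose to patch TrialManager rather than avoid it, storing a salted hash instead of the raw user_id is straightforward. A sketch of that mitigation, assuming a simple JSON-on-disk layout (the actual on-disk format used by the skill is not reproduced here):

```python
import hashlib
import json
from pathlib import Path

def hashed_key(user_id: str, salt: str = "openclaw-trial") -> str:
    """Derive a stable, non-reversible JSON key from a raw user ID."""
    return hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()

def record_trial(store: Path, user_id: str, uses_left: int) -> None:
    """Write trial state keyed by the hash, never the raw identifier."""
    data = json.loads(store.read_text()) if store.exists() else {}
    data[hashed_key(user_id)] = {"uses_left": uses_left}
    store.write_text(json.dumps(data, indent=2))
```

This keeps trial counting functional while matching what the docs claim ("User ID hashed") but the shipped code does not do.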
Install Mechanism
There is no install spec (instruction-only in registry), but the skill includes multiple executable scripts (Python scripts and a shell daemon). That is lower-risk than a remote installer, but the presence of auto-evolve-daemon.sh means executing it creates a persistent background process. No external downloads or obscure URLs are used in the provided files.
Credentials
The skill requires billing credentials to function, but the registry declares no required env vars. Furthermore, the code and docs disagree on env var names (SKILLPAY_API_KEY / SKILLPAY_SKILL_ID vs SKILL_BILLING_API_KEY / SKILL_ID), increasing the chance a user will mistakenly expose the wrong secret. Optional keys (OpenWeather, OpenAI) are mentioned in docs but not consistently enforced. Trial data is stored locally and contains raw user IDs despite docs claiming hashed storage.
Persistence & Privilege
Files include auto-evolve-daemon.sh which runs self_evolve.py in an infinite loop and writes logs into the skill directory; while the skill is not set always:true, the presence of a provided daemon means the skill's author expects/encourages running persistent background processes. This is a risk if users blindly start the daemon — it will run indefinitely and execute the packaged self-evolve logic. The daemon does not, in the presented code, reach out to remote endpoints, but persistent processes increase attack surface and should be treated carefully.
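If, after auditing self_evolve.py, you still want its behavior, a bounded supervised runner is safer than an unattended `while true` shell loop. A sketch under that assumption (the command to supervise is up to you; nothing here is taken from the daemon's actual script):

```python
import subprocess
import time

def supervised_runs(cmd: list[str], max_iterations: int = 5,
                    per_run_timeout: float = 60.0, pause: float = 1.0) -> int:
    """Run cmd repeatedly with a hard iteration cap and per-run timeout,
    stopping on the first failure or hang instead of looping forever."""
    completed = 0
    for _ in range(max_iterations):
        try:
            subprocess.run(cmd, timeout=per_run_timeout, check=True)
        except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
            break
        completed += 1
        time.sleep(pause)
    return completed
```

For example, `supervised_runs([sys.executable, "scripts/self_evolve.py"], max_iterations=3)` gives you the periodic behavior with an upper bound, rather than an indefinite background process.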
What to consider before installing
Before installing or running this skill:
- Do not set or expose any real billing or API keys until you audit the code. The registry metadata says 'no env vars' but the scripts expect billing keys (SKILLPAY_API_KEY / SKILLPAY_SKILL_ID) — the README and other docs use different names; this inconsistency can cause accidental credential leaks.
- Inspect the code paths that access ~/.openclaw/. Trial data is written as JSON keyed by user_id (raw), contrary to claims that user IDs are hashed. If you care about privacy, either run the skill in a sandboxed account or modify TrialManager to hash/avoid storing raw identifiers.
- The package includes auto-evolve-daemon.sh (an infinite loop that runs self_evolve.py). Do not run that daemon unless you understand and trust the self-evolution logic. If you don't need persistent background behavior, avoid running the shell script and remove it from the skill directory.
- Verify the billing endpoint (https://skillpay.me) independently before supplying credentials. Consider testing in demo mode only (python scripts/forecast.py --demo ...) and confirm network calls using a network monitor or run inside an isolated environment/container.
- Address documentation mismatches (trial counts, versions, env var names). If you plan to use this in production, request the maintainer to fix docs and provide a minimal surface (clear required env vars) and to change trial storage to not record raw user IDs.
If you're unsure, run the skill in a disposable VM or container, and/or ask the publisher for an authoritative manifest (which env vars are required and why) before granting any sensitive permissions.
latest: vk9724k8w85hgb37aqkn6f91v5d83cbse
