Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Autonomous Procurement Agent

v1.0.0

Enterprise procurement quote parsing and fraud detection. Use when: (1) A supplier quote arrives as messy plain-text, OCR scan, or SAP export, (2) Cross-plat...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
Capability signals
Crypto · Can make purchases · Requires OAuth token
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The code and docs implement a dual-engine parser, local risk checks (F1/F2/F3), a webhook license handler, and an optional OpenAI fallback — all coherent with the stated procurement purpose. However, the registry header claims 'Required env vars: none' and 'instruction-only', while the SKILL.md, manifest, and code require LS_WEBHOOK_SECRET (mandatory in production) and ship code files, so the skill is not instruction-only. Resolve this metadata mismatch before trusting the listing.
Instruction Scope
SKILL.md instructs starting a local webhook server (webhook-handler.js) and setting LS_WEBHOOK_SECRET. It also documents optional use of OPENAI_API_KEY to enable the LLM fallback, claiming sensitive fields are scrubbed before external calls. The code contains a maskSensitiveData function, but the SKILL.md's strong privacy claim ('scrubs all sensitive fields before any external call') cannot be fully verified from the truncated sources shown: it is unclear whether masking is always applied to the exact payload sent to OpenAI. The README, manifest, and instructions also reference local data directories and writing licenses.json. The instructed file I/O and network calls (webhook plus optional OpenAI) are within purpose, but they require explicit confirmation that you want a long-running server and a local license DB.
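The masking property worth verifying looks like this in practice. A minimal sketch, assuming hypothetical field names and a simplified regex — the skill's real maskSensitiveData may differ, so check its actual call sites:

```javascript
// Sketch only: field names and the redaction pattern are assumptions.
function maskSensitiveData(quote) {
  return {
    ...quote,
    vendor: "[VENDOR]",
    contactEmail: "[EMAIL]",
    // Redact currency amounts embedded in free text.
    rawText: quote.rawText.replace(/\$?\d[\d,]*(\.\d+)?/g, "[AMOUNT]"),
  };
}

// The property to verify in the real code: masking is applied to the exact
// payload that leaves the machine, not merely to a logged copy.
function buildLlmPayload(quote) {
  const masked = maskSensitiveData(quote);
  return { input: masked.rawText };
}
```

If masking happens only in a logging helper while the OpenAI call reads the original object, the privacy claim fails even though maskSensitiveData exists.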
Install Mechanism
No hidden download URL or installer script is present; the recommended install is git clone + npm install, and package.json declares only dotenv (plus optional openai). No binary downloads from untrusted hosts were found. This is low-to-moderate install risk, consistent with typical Node packages.
Credentials
Requested environment variables (LS_WEBHOOK_SECRET, mandatory in practice; optional OPENAI_API_KEY, LS_API_KEY, PARSER_DATA_DIR, etc.) are appropriate for a webhook/licensing service with optional LLM integration. But the registry metadata claims 'none', which contradicts the manifest and SKILL.md. Additionally, the PROCU_ALLOWED_TIER env var provides a dev fallback that bypasses webhook/license verification; if mis-set in production it would allow unauthorized access to enterprise features. The number and sensitivity of the env vars are reasonable for the feature set, but the metadata mismatch and the bypass variable are notable risks.
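A dev-tier fallback of the kind described typically short-circuits the license check before any verification runs. A hypothetical reconstruction (variable and function names are assumptions, not the skill's actual code) of why a single mis-set env var is enough:

```javascript
// Hypothetical: if PROCU_ALLOWED_TIER is set, license verification is
// skipped entirely and its value is trusted as the granted tier.
function resolveTier(env, verifyLicense) {
  if (env.PROCU_ALLOWED_TIER) return env.PROCU_ALLOWED_TIER; // bypass path
  return verifyLicense(); // normal path: consult webhook/license DB
}
```

Under this pattern, any process that can set one environment variable on the host gets enterprise features, which is why the variable must never be set in production.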
Persistence & Privilege
The skill runs a local HTTP webhook server, stores a local license DB and historical-price files under a configurable PARSER_DATA_DIR, and does not request platform-wide or 'always' privileges. It does not modify other skills' configs. Requesting to persist its own license DB and event logs is proportionate to its purpose.
What to consider before installing
Key things to check before installing:

- Metadata mismatch: the registry header says 'no required env vars' and 'instruction-only', but the package/manifest/SKILL.md require LS_WEBHOOK_SECRET (the server refuses to start without it) and include code files. Do not rely on the top-line registry summary; inspect the manifest and SKILL.md.
- LS_WEBHOOK_SECRET is mandatory in practice. Keep it secret and treat the webhook server as a service that can receive remote requests. Verify that X-Signature verification is working and that the secret is unique.
- Do NOT set PROCU_ALLOWED_TIER in production. That env var is an explicit bypass for license checks and will enable enterprise features without proper webhook verification.
- If you enable OPENAI_API_KEY: confirm masking is actually applied immediately before the network call. The code includes maskSensitiveData, but the truncated sources do not show every call path; review call sites to ensure no PII, amounts, or vendor data is sent unmasked. Consider testing with dummy keys and sample inputs.
- Review where PARSER_DATA_DIR defaults (self-healing-parser defaults to /tmp/procurement-data; PRIVACY.md mentions ~/.procurement-data) and set a directory you control; check file permissions for licenses.json and the historical-prices files.
- Because this runs a local long-lived server and writes a local DB, review logs and the sanitize() function in webhook-handler.js to ensure no sensitive values are accidentally logged. The code takes precautions, but regex-based scrubbing can be brittle.
- If you need enterprise features, validate the Lemon Squeezy integration end-to-end in a safe environment (ngrok/local dev) before enabling it on production systems.

If you want, I can: (1) point to the exact places in the code where masking and LLM calls occur so you can verify, (2) produce a short checklist to harden a deployment (systemd unit, restricted data dir, log retention), or (3) re-scan the full files for any other risky patterns.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97dbjxbe18e2srwwatzqb409x84bye1

