Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Chief Editor Decision

v0.1.0

AI agent for chief editor decision tasks

0 · 660 · 2 current · 2 all-time
MIT-0
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name and description ("chief editor decision") align with reading attachments, extracting URLs, and producing a long decision report. The explicit use of tools like read_wiki_document, url_scraping, create_wiki_document, and submit_result is coherent with that purpose. However, some required outputs (>=50 references, >=3 charts, >=10,000 words) are disproportionate to the constrained scraping instructions ("select up to five URLs to scrape") and therefore internally inconsistent.
Instruction Scope
SKILL.md instructs the agent to read all attachments, extract every URL present, then (mandatory) scrape up to five selected URLs and include no fewer than 50 inline references from original URLs (not attachment filenames). That combination forces the agent to: (a) access all user-provided attachments, (b) follow and fetch external URLs (network access), and (c) create and submit a huge report via platform tools. The instructions also explicitly forbid citing attachment filenames, pressuring the agent to fetch original URLs rather than simply cite local attachments. These steps increase the chance of extensive external requests and of transmitting gathered material to the create_wiki_document/submit_result targets. The SKILL.md also mandates large, exacting outputs (10k+ words, 50+ refs) and parallel single-call tool usage which may be infeasible or unsafe in practice.
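The internal inconsistency is easy to see if you sketch the mandated flow. The tool names below come from the skill itself, but the wrapper functions, URL regex, and signatures are assumptions for illustration only, not the skill's actual implementation:

```python
import re

# Rough pattern for URLs found in attachment text (an assumption,
# not the skill's actual extraction logic).
URL_RE = re.compile(r"https?://[^\s)>\"']+")

def extract_urls(attachment_texts):
    """Collect every URL found across all attachment bodies,
    deduplicated but in first-seen order."""
    urls = []
    for text in attachment_texts:
        urls.extend(URL_RE.findall(text))
    return list(dict.fromkeys(urls))

def plan_scrapes(urls, limit=5):
    """SKILL.md caps scraping at five URLs yet demands >=50 inline
    references to original URLs; return the scrape plan plus the
    reference shortfall that cap creates."""
    selected = urls[:limit]
    shortfall = max(0, 50 - len(urls))
    return selected, shortfall

attachments = [
    "See https://example.org/a and https://example.org/b here.",
    "Background: https://example.org/c",
]
selected, shortfall = plan_scrapes(extract_urls(attachments))
print(selected)   # at most five URLs would ever be fetched
print(shortfall)  # references the skill cannot ground via scraping
```

With only a handful of URLs available and a hard cap of five fetches, the 50-reference requirement cannot be satisfied from scraped sources, which is the contradiction noted above.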
Install Mechanism
This is an instruction-only skill with no install spec and no code files, so nothing is written to disk or installed during install. That minimizes installation risk.
Credentials
The skill requests no environment variables or credentials (proportionate). Nevertheless, the runtime instructions require network-based scraping and writing/submitting a large document; those actions may access or transmit sensitive content found in attachments or linked URLs despite no secret environment variables being requested. The skill does not declare or limit which external endpoints or domains will be contacted.
Persistence & Privilege
The skill is not marked always:true and has no install-time persistence. It can be invoked autonomously (platform default), which is normal. There is no evidence it modifies other skills or system-wide configurations.
What to consider before installing
This skill is plausible for an editor/report task, but it carries red flags worth weighing before installing or running it:

- Contradictory requirements: the skill caps scraping at five URLs yet demands 50+ inline references and a 10,000+ word report. Ask the author to clarify how references should be gathered and whether the scraping limit can be adjusted.
- Data-exfiltration risk: the agent is instructed to read all attachments, fetch external URLs found in them, and then create and submit a large report via platform tools. If attachments or linked URLs contain sensitive or internal-only data, the skill will pull it and may upload it externally. Confirm what create_wiki_document and submit_result do and where they store or send data.
- Provenance constraint: forbidding citations of attachment filenames in favor of original URLs forces the agent to perform more external fetches. If you want to preserve provenance or avoid external network calls, request a revision that allows citing attachments directly.
- Operational feasibility: the requirement for a single parallel tool call to read all attachments/URLs, together with the mandated output size and chart count, may be infeasible; ask for relaxed or clearer tool-call requirements.

Before installing or using:

- Verify what the platform tools (read_wiki_document, url_scraping, create_wiki_document, submit_result) actually do, where they send data, and who can access created documents.
- If you will supply attachments, remove or sanitize any sensitive URLs or internal links first, or request that the skill be restricted to approved domains only.
- Ask the skill author to resolve the contradictions (5 URLs vs. 50 references, mandatory 10k+ words) and to allow citing attachments as provenance where appropriate.
- If you must run it, do so in a controlled sandbox/account with no access to sensitive environments, and monitor network activity and created outputs.
Given these inconsistencies and the high potential for broad automated fetching and uploading of content, treat this skill with caution until its instructions and data flows are clarified.


latest · vk97f0qp47v0bjdtj76qkab1s25816c6j

License

MIT-0
Free to use, modify, and redistribute. No attribution required.
