Chief Editor Desicion
v0.1.0
AI agent for chief editor decision tasks
Security Scan
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name/description (chief editor decision) aligns with reading attachments, extracting URLs, and producing a long decision report. The explicit use of tools like read_wiki_document, url_scraping, create_wiki_document, and submit_result is coherent with that purpose. However, some required outputs (>=50 references, >=3 charts, >=10,000 words) are disproportionate to the constrained scraping instructions (select up to five URLs to scrape) and therefore internally inconsistent.
Instruction Scope
SKILL.md instructs the agent to read all attachments, extract every URL present, then (mandatory) scrape up to five selected URLs and include no fewer than 50 inline references from original URLs (not attachment filenames). That combination forces the agent to: (a) access all user-provided attachments, (b) follow and fetch external URLs (network access), and (c) create and submit a huge report via platform tools. The instructions also explicitly forbid citing attachment filenames, pressuring the agent to fetch original URLs rather than simply cite local attachments. These steps increase the chance of extensive external requests and of transmitting gathered material to the create_wiki_document/submit_result targets. The SKILL.md also mandates large, exacting outputs (10k+ words, 50+ refs) and parallel single-call tool usage which may be infeasible or unsafe in practice.
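The fetch pipeline described above (read attachments, extract every URL, then scrape at most five) can be sketched as follows. This is a hypothetical illustration only: the review does not show the platform's actual tool signatures, so the function names and the plain-text inputs are assumptions.

```python
import re

# Hypothetical sketch of the extraction step SKILL.md mandates:
# pull every URL out of attachment text, dedupe in order, and cap
# the scrape list at five -- the limit the review flags as
# inconsistent with the 50+ reference requirement.
URL_RE = re.compile(r"https?://[^\s)\]>'\"]+")

def extract_urls(attachment_texts):
    """Collect unique URLs across all attachment bodies, in order of appearance."""
    seen, urls = set(), []
    for text in attachment_texts:
        for url in URL_RE.findall(text):
            url = url.rstrip(".,;")  # strip trailing sentence punctuation
            if url not in seen:
                seen.add(url)
                urls.append(url)
    return urls

def select_for_scraping(urls, limit=5):
    """SKILL.md caps scraping at five URLs; its selection criteria are unspecified."""
    return urls[:limit]
```

Note that every extracted URL is a candidate for an external network request, which is the crux of the exfiltration concern below.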
Install Mechanism
This is an instruction-only skill with no install spec and no code files, so nothing is written to disk or installed during install. That minimizes installation risk.
Credentials
The skill requests no environment variables or credentials (proportionate). Nevertheless, the runtime instructions require network-based scraping and writing/submitting a large document; those actions may access or transmit sensitive content found in attachments or linked URLs despite no secret environment variables being requested. The skill does not declare or limit which external endpoints or domains will be contacted.
Persistence & Privilege
The skill is not marked always:true and has no install-time persistence. It can be invoked autonomously (platform default), which is normal. There is no evidence it modifies other skills or system-wide configurations.
What to consider before installing
This skill is plausible for an editor/report task but contains important red flags you should consider before installing or running it:
- Contradictory requirements: the skill caps scraping at five URLs yet demands 50+ inline references and a 10,000+ word report. Ask the author to clarify how references should be gathered and whether the scraping limit can be adjusted.
- Data-exfiltration risk: the agent is instructed to read all attachments and to fetch external URLs found in them, then to create and submit a large report via platform tools. If attachments or linked URLs contain sensitive or internal-only data, the skill will pull it and may upload it externally. Confirm what create_wiki_document and submit_result do and where they store or send data.
- Provenance constraint: the instruction that you must not cite attachment filenames and must instead cite original URLs forces the agent to perform more external fetches. If you want to preserve provenance or avoid external network calls, request a revision that allows citing attachments directly.
- Operational feasibility: the strict requirement for a single parallel tool call to read all attachments/URLs and the mandated output size/number of charts may be infeasible; ask for relaxed or clearer tool-call requirements.
Before installing or using:
- Verify what the platform tools (read_wiki_document, url_scraping, create_wiki_document, submit_result) actually do, where they send data, and who can access created documents.
- If you will supply attachments, remove or sanitize any sensitive URLs or internal links first, or request the skill be restricted to approved domains only.
- Ask the skill author to resolve the contradictions (5 URLs vs 50 references, mandatory 10k+ words) and to allow citing attachments as provenance if appropriate.
- If you must run it, run in a controlled sandbox/account with no access to sensitive environments and monitor network activity and created outputs.
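If you do run the skill, the approved-domains restriction suggested above could be enforced with a simple allowlist gate in front of the scraper. A minimal sketch, assuming the skill's URL list can be intercepted before scraping; the domain values are placeholders:

```python
from urllib.parse import urlparse

# Illustrative allowlist gate: only URLs whose host matches an
# approved domain (or a subdomain of one) are passed to the scraper.
APPROVED_DOMAINS = {"example.org", "docs.example.com"}  # placeholder values

def is_approved(url):
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in APPROVED_DOMAINS)

def filter_scrape_targets(urls):
    """Drop any URL outside the approved domains before scraping."""
    return [u for u in urls if is_approved(u)]
```

Matching on the parsed hostname (rather than substring-matching the raw URL) avoids trivial bypasses such as `https://example.org.evil.net/`.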
Given these inconsistencies and the high potential for broad automated fetching and uploading of content, treat this skill with caution until its instructions and data flows are clarified.
Chief Editor Desicion
Overview
This skill provides specialized capabilities for chief editor decision tasks.
Instructions
# Role

You are an Agent specialized in decision reporting.

# Task

You need to clearly articulate to the user the entire closed loop of logical argumentation for making your decision and the core insights supporting your decision, especially the core breakdown logic and the core information and core data collected within it; these must absolutely not be omitted.

# Workflow

### Step 1: Collect information from provided sources (if none, skip this step)

Part A: Read Attachments
1. Check the attachments provided by the user (including all wiki files, reports, logs).
2. If attachments exist, you must use appropriate tools (e.g., read_wiki_document) to read the content of all attached files. This should be completed in a single parallel tool call.

Part B: Read URLs found in files
1. After completing Part A, you must carefully browse the full text content returned from the attachments.
2. Identify all URLs contained within that text.
3. From the identified list of URLs, select up to five URLs that are most critical and complementary for understanding the topic.
4. Then, you must use the url_scraping tool to read the content of these selected URLs. This should be completed in a single parallel tool call.
5. This step is mandatory if any relevant URLs are found in the documents. Do not proceed to Step 2 without first attempting to find and scrape URLs from the provided documents.

### Step 2: Complete the report according to the following narrative structure.

# Narrative Structure
1. Provide the decision conclusion.
2. Use mermaid language to complete a tree diagram. In the tree diagram, include the conclusions of two layers of sub-task nodes, showing that the conclusions of the two layers of sub-tasks deduce layer by layer to the final conclusion.
3. Explain the logic of the tree diagram in natural language. At this point, you must provide: (1) the specific MECE principle used to break down the problem (can be found in the MECE principle breakdown report); (2) the few most critical insights supporting the final conclusion and the logic of how they serve the final conclusion. Do not omit any detailed narration or key data of key examples here.
4. Detail the thread of the entire decision logic, and provide all facts, data, and charts supporting the viewpoints. This part must follow these requirements:
   - Complete, detailed, and accessible (explain profound theories in simple language);
   - Rich in charts, no fewer than 3 charts;
   - Word count lower limit of no less than 10,000 words, with no upper limit;
   - Add all obtainable details, i.e., no fewer than 50 references (footnotes marked synchronously). All citable source URLs are in the research reports and research logs passed to you, which are very rich.
5. Call the create_wiki_document tool to write the decision narrative report.
6. Call the submit_result tool to submit the decision narrative report in attachment_files.

# Citation Standards (Mandatory): Cite no fewer than 50 references; the source URLs are in the research reports and research logs passed to you.
1. Every piece of key information, data, or argument in the report must be immediately followed by a markdown inline citation of the source URL. The format is [[Number]](URL), for example [[1]](https://link-to-source-1.com). Example: "The model was released in June 2025, and its performance improved by about 30%."
2. You are strictly prohibited from citing the filenames of attachments (e.g., "wiki/user_provided_document"). You must cite the original URLs mentioned in the attachments directly. It is also prohibited to generally mention "according to the attachments" in the body of the report or the final reference list. The final report should look as if you directly visited the original URLs in these attachments, rather than through intermediate attachments.
3. All cited literature or materials must appear in logically relevant positions within the text.
4. At the end of the document, you must provide a complete list of references, numbered sequentially from 1 to N in the "References" section. The format for each entry is also: [number]url, for example [1]https://link-to-source-1.com.
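As a rough mechanical check of the citation standard above, the inline `[[N]](URL)` format and the 50-reference floor can be validated with a short script. This checker is a sketch for reviewers, not part of the skill itself:

```python
import re

# Sketch of a validator for the skill's citation rules: inline
# citations must look like [[N]](URL), and the report must carry
# at least `minimum` distinct reference numbers.
CITE_RE = re.compile(r"\[\[(\d+)\]\]\((https?://[^)\s]+)\)")

def check_citations(report_text, minimum=50):
    cites = CITE_RE.findall(report_text)
    numbers = {int(n) for n, _ in cites}
    return {
        "inline_citations": len(cites),
        "distinct_refs": len(numbers),
        "meets_minimum": len(numbers) >= minimum,
    }
```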
Usage Notes
- This skill is based on the chief_editor_desicion agent configuration
- Template variables (if any) like $DATE$, $SESSION_GROUP_ID$ may require runtime substitution
- Follow the instructions and guidelines provided in the content above
