Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Academic Survey Self Improve

v1.0.0

High-quality academic survey auto-generator. Supports real-time arXiv search, novelty detection, quality-control loops, and automatic refinement. Generates 10+ pages of high-quality survey material per hour.

MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The code largely matches the stated purpose: multiple modules query arXiv, analyze paper text, assemble LaTeX, and compile PDFs. However, the improver module depends on an LLMEvaluator (evaluator.py) that plausibly calls an external LLM service, while the skill metadata and SKILL.md declare no required environment variables or credentials for LLM APIs. That is inconsistent: an LLM-driven improvement step would normally require API keys or a declared primary credential. SKILL.md also mentions a 'send report' step in the workflow but declares no destination or required credentials for it.
Instruction Scope
SKILL.md instructs running main.py (including --auto and --quality) and even suggests scheduling hourly cron jobs to auto-generate surveys. The runtime instructions are broad (fully automated, hourly generation, 'send report') and give the skill discretion to search arXiv extensively, write files (TeX, PDFs, topic_history.json), and iterate. They do not document any external endpoint for reporting or the credentials needed for LLM calls; this open-ended autonomy is a scope and visibility concern. The included code does make network calls to arXiv and write files, and there are subprocess calls to pdflatex. No instruction tells the agent to read unrelated system files, but SKILL.md's 'send report' step is vague and unaccounted for in the visible code excerpts.
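To verify that pdflatex is the only external command the bundle invokes, you could enumerate subprocess calls with the standard ast module. A minimal sketch, assuming the sources are unpacked to a local directory; the name skill_src is a placeholder, not part of the package:

```python
import ast
from pathlib import Path

def subprocess_calls(root: str) -> list[str]:
    """List every subprocess.* call site in the Python sources under root."""
    found = []
    for path in Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(errors="ignore"))
        except SyntaxError:
            continue  # skip files that do not parse
        for node in ast.walk(tree):
            if isinstance(node, ast.Call):
                callee = ast.unparse(node.func)
                if callee.startswith("subprocess."):
                    found.append(f"{path}:{node.lineno}: {callee}")
    return found

# Example: print(subprocess_calls("skill_src"))
```

If anything other than the expected pdflatex invocations shows up (e.g. curl, ssh, or shell pipelines), treat that as a reason not to install.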
Install Mechanism
No install spec is present (instruction-only installation); the skill bundle contains Python source files which will be placed in the skill workspace. There are no external binary downloads or archive extracts in the provided manifest. This is the lower-risk install pattern compared with arbitrary remote downloads.
Credentials
The package declares no required environment variables or credentials, yet improver.py imports and calls an LLMEvaluator from evaluator.py. LLM evaluators typically require API keys (e.g., OPENAI_API_KEY) or service endpoints. The missing declaration is a red flag: either the evaluator uses a local LLM (which should be documented) or it expects credentials that are not declared. SKILL.md also references sending reports without naming a destination or required auth; that omission raises the risk of unexpected data exfiltration if the code posts results externally.
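The credential question above can be checked with a quick static scan for environment-variable reads and HTTP-client imports. A hedged sketch, assuming the bundle is unpacked locally (skill_src is a hypothetical path):

```python
import re
from pathlib import Path

# Patterns for the two things this review flags: environment-variable
# reads (possible undeclared credentials) and HTTP client usage.
PATTERNS = {
    "env read": re.compile(r"os\.environ|os\.getenv|getenv\("),
    "http client": re.compile(r"\b(requests|httpx|urllib\.request|openai)\b"),
}

def audit(root: str) -> dict[str, list[str]]:
    """Map each pattern label to file:line locations where it matches."""
    hits: dict[str, list[str]] = {}
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                line = text.count("\n", 0, match.start()) + 1
                hits.setdefault(label, []).append(f"{path}:{line}")
    return hits

# Example: print(audit("skill_src"))
```

If audit reports env reads in evaluator.py while SKILL.md declares no credentials, that confirms the inconsistency described above.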
Persistence & Privilege
The skill is not marked always:true and does not request elevated platform privileges. It writes its own topic_history.json into its output directory (normal) and suggests cron scheduling in examples (user action). There is no evidence it modifies other skills or system-wide agent configuration.
What to consider before installing
This package mostly does what its README says (fetch arXiv, build LaTeX, iterate), but resolve these gaps before installing it or scheduling it to run automatically:

1. Inspect evaluator.py and main.py to see whether they call external LLM APIs (look for requests, openai, http.post, socket, or environment reads such as OPENAI_API_KEY). If the evaluator needs API keys, the skill should declare that; otherwise it may fail or attempt to use credentials implicitly.
2. Search the codebase for any network POST/PUT calls or hardcoded endpoints (especially anything that would 'send report') and confirm where outputs are sent.
3. Decide whether you actually want an hourly cron job that generates many PDFs and makes repeated network requests (rate limits, storage growth, CPU load from pdflatex).
4. Run the skill in a sandboxed environment first (isolated user account, limited network) and monitor outbound connections and files created.
5. Verify the claimed provenance (SKILL.md lists GitHub/ClawHub links, but 'Source' is unknown); prefer skills from known authors, and inspect the full code for secrets or obfuscated behavior. If in doubt, review evaluator.py and main.py specifically for API calls, credential reads, and outbound endpoints.
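The endpoint check in step 2 above can be partially automated by listing every hardcoded URL in the bundle, so a hidden 'send report' destination cannot slip by unnoticed. A sketch under the assumption that the sources sit in a local skill_src directory (a placeholder name):

```python
import re
from pathlib import Path

# Rough URL matcher: stops at whitespace, quotes, and closing brackets.
URL = re.compile(r"""https?://[^\s"')\]]+""")

def find_endpoints(root: str) -> set[str]:
    """Collect every hardcoded http(s) URL in the Python sources under root."""
    urls: set[str] = set()
    for path in Path(root).rglob("*.py"):
        urls.update(URL.findall(path.read_text(errors="ignore")))
    return urls

def suspicious(root: str) -> set[str]:
    # For this skill, anything outside arxiv.org deserves scrutiny.
    return {u for u in find_endpoints(root) if "arxiv.org" not in u}

# Example: print(suspicious("skill_src"))
```

An empty suspicious set does not prove the skill is safe (URLs can be assembled at runtime), but a non-empty one tells you exactly where a report might be posted.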

Like a lobster shell, security has layers — review code before you run it.

latest · vk970erhzbdd2b7kmdkpg77wdrh82jgd1

