文献综述自动器
PassAudited by ClawScan on May 14, 2026.
Overview
This looks like a purpose-aligned literature review helper, but it uses external academic services and can optionally send data to an LLM provider with an API key.
This skill appears safe for normal public literature-review tasks. Before installing, use a virtual environment, be careful with unpinned Python dependencies, keep LLM writing disabled for sensitive research topics unless you trust the provider, protect any LLM API key, and verify the generated citations and summaries.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Installing the skill may pull in third-party Python packages whose exact versions are not fixed.
The skill depends on PyPI packages using lower-bound version ranges and no lockfile or hash pinning. This is normal for many Python projects, but it leaves dependency versions to be resolved at install time.
requests>=2.28.0 scikit-learn>=1.0.0 numpy>=1.21.0 sentence-transformers>=2.0.0 bertopic>=0.12.0
Install in a virtual environment, review dependencies, and pin versions or use a lockfile if you need reproducible installs.
If you enable LLM polishing, the configured API key can authorize provider usage and possible charges.
When optional LLM writing is enabled, the code reads an LLM API key from configuration and uses it as a bearer token for the configured provider.
api_key = config.get("llm_api_key") ... "Authorization": f"Bearer {api_key}"Use a scoped API key, keep it out of shared files, and verify the configured LLM endpoint before enabling this mode.
Someone on the network path could potentially observe or alter arXiv search traffic, including research topics.
The arXiv query endpoint is configured with plain HTTP rather than HTTPS, so search terms and returned feed data may not be protected in transit.
url = "http://export.arxiv.org/api/query"
Avoid confidential search topics over this path, and prefer switching the arXiv endpoint to HTTPS if supported in your environment.
The generated review could contain inaccurate, biased, or prompt-influenced text if retrieved abstracts are poor quality or adversarial.
Retrieved paper titles and abstracts are inserted into the optional LLM prompt. Because this text comes from external sources, misleading or adversarial content could influence the generated review.
paper_summaries = "\n".join([f"- {p['title']} ({p.get('year','')}): {p.get('abstract','无摘要')[:200]}..." for p in papers[:15]])Treat outputs as drafts, verify citations and claims against original papers, and consider adding prompt delimiters or filtering if enabling LLM writing.
