Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
PubMed Review
v1.0.1 · A Feishu natural-language-triggered PubMed literature search and AI review generation system. Supports professional search-query expansion, qualifier filtering, AI structured reviews (brief + full), Feishu notifications, and follow-up Q&A.
⭐ 0 · 64 · 0 current · 0 all-time
by @crayfish-ai · duplicate of @crayfish-ai/pubmed-review (1.0.1) · canonical: @crayfish-ai/pubmed-review-skill
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's code and scripts implement PubMed E-utilities search, abstract parsing, LLM-based summarization, task queuing, and Feishu notification, which matches the name and description. However, registry metadata at the top of the package listing claims "Required env vars: none", while SKILL.md and skill.json require MINIMAX_API_KEY (sensitive). There are also minor metadata mismatches (published date, version strings, and homepage versus 'source: unknown'). These inconsistencies should be clarified before use.
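For orientation, the PubMed side of such a pipeline rests on the public NCBI E-utilities endpoints. A minimal sketch of the kind of esearch request involved (the query term and result limit here are illustrative, not taken from the skill's code):

```python
from urllib.parse import urlencode

# Base endpoint for NCBI E-utilities (a documented public API).
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def build_esearch_url(term: str, retmax: int = 20) -> str:
    """Build a PubMed esearch URL. The skill's own query expansion and
    qualifier filtering (not reproduced here) would be applied to `term`."""
    params = urlencode({
        "db": "pubmed",
        "term": term,
        "retmode": "json",
        "retmax": retmax,
    })
    return f"{EUTILS}/esearch.fcgi?{params}"
```

Fetching that URL returns a JSON list of PMIDs, which a pipeline like this would then pass to efetch for abstracts.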
Instruction Scope
Runtime instructions and scripts operate on local task/result files, read a local .env.minimax by default, call the MiniMax LLM API with article abstracts, and invoke an external notify script to push messages. All of that is within the declared purpose, but two scope items deserve attention: (1) the code automatically loads an env file into process environment (potentially setting unrelated secrets), and (2) article abstracts (medical content) and user queries are transmitted to a third-party LLM (api.minimax.chat) — confirm that is acceptable for your data/privacy requirements.
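The env-file concern is easiest to see in a naive loader of the sort described above (a sketch of the pattern, not the skill's actual code): every KEY=VALUE line in the file is exported, not only the API key.

```python
import os

def load_env_file(path: str = ".env.minimax") -> None:
    """Naive loader of the kind the review describes: every KEY=VALUE
    line is exported into os.environ, not only MINIMAX_API_KEY."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blanks, comments, and lines without an assignment.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()
```

A safer variant would whitelist only the expected keys (MINIMAX_API_KEY, MINIMAX_API_URL, MINIMAX_MODEL) instead of exporting everything.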
Install Mechanism
No install spec is provided; the skill is instruction/code-only and does not download arbitrary archives or run a remote installer. This is lower risk than skills that fetch remote binaries. The package contains only Python and shell scripts that will be run locally.
Credentials
The only sensitive credential required is MINIMAX_API_KEY (used to call the MiniMax LLM) and that is proportional to the LLM summarization functionality. Other configurable items (MINIMAX_API_URL, MINIMAX_MODEL, NOTIFY_PATH, MINIMAX_ENV_FILE) are reasonable. However, the package will (by default) load and export all variables from a .env.minimax file into os.environ — this can unintentionally expose or override unrelated environment variables and may cause unintentional leakage if that file contains other secrets. Also the registry-level metadata incorrectly reported no required env vars, which is misleading.
Persistence & Privilege
The skill does not request 'always: true', does not require root, and confines writes to its own task/result directories. It creates/modifies local files (tasks queue, results, followup state) which is expected for a queue/processor. It does open a lock file for dispatching; nothing indicates system-wide persistence or modification beyond the skill directory.
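The dispatch lock mentioned above is typically a non-blocking advisory file lock; a sketch of that pattern (the lock path and function name are illustrative, not taken from the skill):

```python
import fcntl
import os

def acquire_dispatch_lock(lock_path: str):
    """Try to take a non-blocking advisory lock on `lock_path`, so only
    one dispatcher runs at a time. Returns an open fd while the lock is
    held, or None if another dispatcher already holds it."""
    fd = os.open(lock_path, os.O_CREAT | os.O_RDWR, 0o600)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        os.close(fd)
        return None
    return fd
```

This confines coordination to a single file in the skill's own directory, consistent with the absence of system-wide persistence noted above.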
What to consider before installing
This skill appears to implement the advertised PubMed search + AI summarization pipeline, but please check these before installing:
1. Metadata mismatch: the registry header claims no required env vars, but SKILL.md and skill.json require MINIMAX_API_KEY (sensitive). Confirm which is authoritative before supplying secrets.
2. Secrets handling: the code automatically loads a .env.minimax file into environment variables. Ensure that file only contains the MiniMax API key (and nothing else you don't want imported or uploaded) and that its filesystem permissions are restricted.
3. Data exfiltration / privacy: article abstracts and user queries are sent to api.minimax.chat for LLM summarization. If abstracts include any sensitive or patient-identifiable information, do NOT send them to an external LLM without approval.
4. Notify script trust: the skill invokes an external notify binary/script (NOTIFY_PATH). Verify that notify is a trusted program (path is not user-controlled by untrusted actors) because the skill will call it with generated content.
5. Automation risk: scheduled usage (cron) and the task_dispatcher will automatically run the scripts and call external services. If you plan to deploy, run it in an isolated environment and test with non-sensitive data first.
6. Confirm provenance: the top-of-package source/homepage entries are inconsistent (some places say unknown, skill.json references a GitHub repo). If provenance matters, validate the upstream repository and author before trusting the code.
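As one concrete check for item 2, a small helper (hypothetical, not part of the skill) can verify the env file's permissions before anything loads it:

```python
import os
import stat

def env_file_is_private(path: str = ".env.minimax") -> bool:
    """Return True only if the env file exists and is not readable or
    writable by group/other (i.e. mode 600 or stricter)."""
    try:
        mode = stat.S_IMODE(os.stat(path).st_mode)
    except FileNotFoundError:
        return False
    return mode & 0o077 == 0
```

Running this (or an equivalent `chmod 600 .env.minimax` beforehand) limits the blast radius if the loader exports everything in the file.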
If you proceed, review the .env.minimax contents, validate the notify executable, and run the package in a controlled environment. If you want, I can point to the specific lines that load the env file, call the LLM, and invoke notify so you can audit them more closely.
Tags: ai-summary · feishu · latest · literature-search · pubmed · skill
