Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Peer Reviewer

v1.0.0

AI-powered academic paper reviewer. Uses a multi-agent system (Deconstructor, Devil's Advocate, Judge) to analyze papers for logical flaws, contradictions, and empirical validity.

0 · 1.8k · 0 current · 0 all-time
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Suspicious
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill claims to be an academic peer reviewer and the code uses LLM adapters, search adapters (ArXiv, Serper), and local storage — which fits the stated purpose. However, the registry metadata declares no required environment variables or credentials while the code clearly expects multiple provider credentials (OPENAI_API_KEY, GEMINI_API_KEY/GOOGLE_API_KEY, SERPER_API_KEY, GOOGLE_APPLICATION_CREDENTIALS/./google.json, GOOGLE_CLOUD_PROJECT). That mismatch (metadata says 'none' but code requires secrets) is incoherent and surprising to a user.
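Before trusting the registry's "no required env vars" claim, you can check which of the credentials named in this finding are actually present in your environment. A minimal Node sketch (the key names are taken from the scan findings above, not from any manifest the skill ships):

```javascript
// Report which of the credentials identified by the scan are set locally,
// rather than assuming the registry metadata's "none required" is accurate.
const EXPECTED_KEYS = [
  "OPENAI_API_KEY",
  "GEMINI_API_KEY",
  "GOOGLE_API_KEY",
  "SERPER_API_KEY",
  "GOOGLE_APPLICATION_CREDENTIALS",
  "GOOGLE_CLOUD_PROJECT",
];

function credentialReport(env = process.env) {
  return EXPECTED_KEYS.map((name) => ({ name, set: Boolean(env[name]) }));
}

for (const { name, set } of credentialReport()) {
  console.log(`${name}: ${set ? "set" : "MISSING"}`);
}
```

Running this in the shell you intend to launch the skill from makes the metadata/code mismatch concrete: anything reported as set will be available to the skill's code.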
Instruction Scope
SKILL.md instructs the agent to run node from a specific absolute development directory (/Users/sschepis/Development/peer-reviewer) and to ensure a google.json file or GOOGLE_APPLICATION_CREDENTIALS is available; both instructions are environment- and path-sensitive. The runtime code reads local credential files, writes reports to ./data, and may execute an external CLI (serper-tool) through child_process. The SKILL.md content also triggered a prompt-injection (system-prompt-override) detection; while the code itself applies strict LLM output constraints, the presence of prompt-injection patterns in the skill docs is a red flag. Overall, the instructions encourage reading and writing local files and sending user content to external LLM and search services (expected for the purpose), but they do so without documenting the required secrets or the privacy implications.
Install Mechanism
No install spec is present (instruction-only in registry), which is lower installer risk, but the package includes full source and package.json (npm-style). There are no remote downloads or obscure install URLs. Dependencies are standard (axios, google-auth-library, xml2js, dotenv). Because the package includes code that will be executed locally, the lack of an install manifest in the registry combined with included source is an odd packaging/documentation mismatch, but not inherently malicious.
Credentials
The skill requests (in code and README) multiple sensitive credentials and file access: OPENAI_API_KEY, GEMINI_API_KEY / GOOGLE_API_KEY, SERPER_API_KEY, GOOGLE_APPLICATION_CREDENTIALS or ./google.json, and potentially GOOGLE_CLOUD_PROJECT. These are proportional to a multi-provider reviewer, but the registry metadata claims no required env vars, an unexplained omission. The skill will also read a local credentials file if present and write reports to disk, so users must understand that their paper text and any provided credentials will be used to contact external services. The author should declare every required secret in the registry metadata.
Persistence & Privilege
The skill does not request always: true and does not modify other skills or system-wide settings. It persists reports to a local ./data directory and reads local credential files (google.json) if present; that is consistent with being a CLI tool. This level of persistence is expected for a local review tool, but users should note saved reports contain analyzed text and should be protected accordingly.
Scan Findings in Context
[system-prompt-override] unexpected: A prompt-injection pattern was detected in SKILL.md. While the project legitimately uses strong LLM output constraints inside its code prompts (asking for JSON-only responses), the presence of a system-prompt-override pattern in the skill documentation is unexpected and warrants manual review. This could be a benign artifact (overly prescriptive prompts) or an attempt to influence agent/system prompts; treat as suspicious until clarified.
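As a rough illustration of what a system-prompt-override check can look for, here is a naive sketch. The pattern list is illustrative override phrasing only; it is not OpenClaw's actual rule set:

```javascript
// Naive prompt-injection screen for skill documentation text. The patterns
// are example override phrasings, not the scanner's real detection rules.
const INJECTION_PATTERNS = [
  /ignore (all )?previous instructions/i,
  /override (the )?system prompt/i,
  /you are now (a|an) /i,
  /disregard (your|the) (instructions|guidelines)/i,
];

function flagInjection(text) {
  return INJECTION_PATTERNS.filter((re) => re.test(text));
}
```

Note the false-positive risk this illustrates: a doc that merely discusses system prompts, or gives its own LLM very prescriptive output instructions, can trip pattern-based detectors. That is exactly why the finding warrants manual review rather than automatic rejection.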
What to consider before installing
- Do not assume 'no required env vars' from the registry metadata: the code uses several credentials (OpenAI, Gemini/Google, Serper). Confirm which keys you must provide and why. If you don't want network calls, do not set API keys.
- Protect any google.json or service-account file: it contains powerful service-account keys. If you provide GOOGLE_APPLICATION_CREDENTIALS or a google.json file, the skill will use it to call Vertex/Google APIs.
- The skill will send your paper text to external services (OpenAI/Gemini/Vertex/Serper/ArXiv). If your manuscript is sensitive or unpublished, this is potential data leakage; consider redaction, running offline, or withholding API keys.
- SKILL.md references an absolute local development path (/Users/sschepis/...). That indicates the documentation was not adapted for distribution; double-check paths before running to avoid accidentally reading or writing unintended files, and run the tool from an isolated directory or container after inspecting the code locally.
- The skill may execute an external CLI (serper-tool) via child_process. If you enable that path, verify the serper-tool binary is trustworthy and understand what it does.
- Recommended mitigations: run inside a sandbox or VM, inspect package.json and all source before running, avoid providing high-privilege credentials, and create a least-privilege service account for any Google usage. Ask the author to update the registry metadata to list required env vars and to remove hard-coded local paths from SKILL.md. If you cannot vet the author or the packaging, treat the skill as untrusted and avoid running it on sensitive documents.

Like a lobster shell, security has layers — review code before you run it.

latest · vk973kw2mv0hqb8n35v5yb48bhn80jymr

