Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Grant Mock Reviewer

v0.1.0

Simulates NIH study section peer review for grant proposals. Triggers when the user wants a mock review, critique, or evaluation of a grant proposal before submission.

by AIpoch (@aipoch-ai)
MIT-0
Security Scan
VirusTotal
Suspicious
OpenClaw
Benign
high confidence
Purpose & Capability
Name, description, and shipped code align: the repository contains a GrantMockReviewer class, weakness patterns, scoring heuristics, and templates used to produce NIH-style critiques and summary statements. There are no environment variables, external service credentials, or unrelated binaries requested that would be inconsistent with a local reviewer tool.
Instruction Scope
SKILL.md instructs the agent to run scripts/main.py on proposal files (pdf, docx, txt, md) and/or to call the library API with proposal_text. The included main.py operates on provided text using regex-based analysis. However, while the README and usage notes claim PDF/DOCX input support, requirements.txt lists no PDF/DOCX parsing libraries and no install spec is provided. This is an operational inconsistency: the skill may depend on external converters or on libraries and binaries that are never declared. Note also that the tool processes whatever proposal text you give it, including sensitive or unpublished proposals; the instructions do not transmit data externally, and SKILL.md references no external endpoints.
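One lightweight way to verify the format-support claim before trusting the CLI is to check which parsers the script actually imports. A minimal stdlib sketch under stated assumptions: the PARSER_HINTS set is an illustrative guess at common parser packages, not something taken from the skill itself.

```python
import ast

# Illustrative guesses at common PDF/DOCX parser packages; not from the skill.
PARSER_HINTS = {"pdfminer", "pypdf", "PyPDF2", "pdfplumber", "fitz", "docx"}

def declared_parsers(source: str) -> set[str]:
    """Return top-level modules imported by `source` that look like
    PDF/DOCX parsing libraries."""
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found & PARSER_HINTS

# A script that only imports re/json declares no PDF/DOCX support:
print(declared_parsers("import re\nimport json\n"))                       # set()
print(declared_parsers("from pdfminer.high_level import extract_text"))   # {'pdfminer'}
```

Run it against the downloaded code, e.g. `declared_parsers(open("scripts/main.py").read())`; an empty result while SKILL.md claims PDF/DOCX input would confirm the inconsistency noted above.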
Install Mechanism
No install spec is provided (instruction-only skill with included code). The repository contains only a tiny requirements.txt and no downloads or remote install steps, so nothing arbitrary will be fetched or executed at install time by the skill itself.
Credentials
The skill requests no environment variables, credentials, or config paths. Requesting no secrets is appropriate and proportionate for a local text-analysis reviewer tool.
Persistence & Privilege
The skill is not flagged always:true and uses normal user-invocable/autonomous invocation defaults. It does not request persistent system-wide privileges or modify other skills. No evidence that it writes beyond its own output files (expected: review outputs).
Assessment
This skill looks coherent and runs locally on the text you provide, with no declared network or credential access. Before installing, consider:

1) Confidentiality: the tool processes the full content of any proposal you pass to it, so avoid sending unpublished or sensitive proposals to any remote service; run it in an isolated local environment if secrecy matters.

2) File-format support: SKILL.md claims PDF/DOCX input, but requirements.txt is minimal and no parser libraries or install steps are declared. Verify that scripts/main.py actually includes robust PDF/DOCX extraction, or install the needed converter (e.g., pdfminer, python-docx, or a pdftotext binary) before relying on the CLI.

3) Code review: read the full scripts/main.py yourself (or run it in a sandbox) to confirm there are no unexpected network calls or file accesses beyond the proposal files and intended outputs.

If you need stricter guarantees, run the tool offline and/or inspect and modify the code to add explicit input/output controls.
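The no-network check in point 3 above can be spot-checked statically before running anything. A hedged stdlib sketch: the NETWORK_MODULES set is a common-suspects heuristic, not exhaustive, and it catches only literal import statements, not dynamic `__import__` tricks, so it supplements rather than replaces reading the code.

```python
import ast

# Common network-capable modules; a heuristic list, not exhaustive.
NETWORK_MODULES = {"socket", "urllib", "http", "requests", "httpx",
                   "ftplib", "smtplib", "aiohttp"}

def network_imports(source: str) -> set[str]:
    """Return network-related top-level imports statically visible in `source`.
    Only literal import statements are detected, not dynamic __import__ calls."""
    hits = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        hits.update(n.split(".")[0] for n in names
                    if n.split(".")[0] in NETWORK_MODULES)
    return hits

# A purely regex-based reviewer should come back clean:
print(network_imports("import re\nimport json\n"))   # set()
print(network_imports("import urllib.request"))      # {'urllib'}
```

A non-empty result on scripts/main.py would contradict the "no external endpoints" finding and warrant a closer look before installing.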

Like a lobster shell, security has layers — review code before you run it.

latest · vk97fvtw4cqca36gfnvsythr16x83evrc

License

MIT-0
Free to use, modify, and redistribute. No attribution required.
