Skill v1.0.4

ClawScan security

Pharmacoeconomic Evaluation Skill Package · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Benign · Apr 26, 2026, 12:53 AM
Verdict
benign
Confidence
high
Model
gpt-5-mini
Summary
The skill's files, instructions, and requirements are internally consistent with a pharmacoeconomic evaluation toolkit and do not request unrelated credentials or install remote code; only minor documentation/parameter inconsistencies and typos were found.
Guidance
This package appears coherent and implements local pharmacoeconomic calculations. Before installing or running it:
1) Run it in a virtualenv or other isolated environment and install only the declared Python libraries (numpy, pandas, matplotlib).
2) Review and correct the obvious documentation/parameter issues: one QALY example sets discount_rate=0.45, which is almost certainly a typo (the commonly recommended rate is ~0.03), and example thresholds vary between 100,000 and 120,000 RMB across files.
3) Inspect any omitted or truncated files referenced in the docs (e.g., xstpe/data/parameters.py, .codebuddy paths) to ensure they don't contain unexpected behavior.
4) If you will run it with real patient data, follow local data-privacy controls (do not commit PHI to remote systems).
5) If you plan to integrate it into automated agents, review the code for any future network or file I/O additions; there are currently no network endpoints or credential requests.
If you want, I can list the exact lines where the inconsistent numbers/typos appear and suggest concrete fixes.
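To make the discount-rate concern in item 2 concrete, here is a minimal sketch of annual QALY discounting. This is not the package's code; the function name, utility value, and time horizon are illustrative assumptions.

def discounted_qalys(utility_per_year, years, discount_rate):
    # Sum the yearly utility, discounting each year back to present value.
    return sum(utility_per_year / (1 + discount_rate) ** t for t in range(1, years + 1))

# With the commonly recommended ~3% rate the total stays close to the
# undiscounted 8.0 QALYs; the 0.45 value seen in one snippet collapses it.
print(discounted_qalys(0.8, 10, 0.03))  # ~6.82 QALYs
print(discounted_qalys(0.8, 10, 0.45))  # ~1.73 QALYs (clearly distorted)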

Review Dimensions

Purpose & Capability
ok · Name/description match the included scripts and documentation: the package provides CEA/CUA/CBA, QALY/ICER calculations, PSA/Monte Carlo simulation, and budget impact analyses. There are no unexpected environment variables, binaries, or external service credentials requested.
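For context on what those declared capabilities involve, the sketch below shows an ICER point estimate and a toy probabilistic sensitivity analysis with numpy. The function name, cost and QALY figures, and distributions are illustrative assumptions, not the package's actual API or data.

import numpy as np

def icer(cost_new, cost_old, qaly_new, qaly_old):
    # Incremental cost-effectiveness ratio: extra cost per extra QALY gained.
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Deterministic point estimate with made-up inputs
print(icer(150_000, 90_000, 6.2, 5.5))  # 60,000 / 0.7 ~ 85,714 RMB per QALY

# Toy Monte Carlo PSA: sample incremental costs and effects, then count the
# share of draws that fall below the willingness-to-pay threshold.
rng = np.random.default_rng(0)
d_cost = rng.normal(60_000, 10_000, 10_000)  # incremental cost draws
d_qaly = rng.normal(0.7, 0.15, 10_000)       # incremental QALY draws
threshold = 100_000                          # RMB per QALY (illustrative)
print(np.mean(d_cost <= threshold * d_qaly)) # probability of being cost-effective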
Instruction Scope
note · SKILL.md instructs the agent to use the local scripts (e.g., cost_effectiveness_analysis.py, monte_carlo_simulation.py). That is within scope. Minor issues: inconsistent numeric examples across docs (e.g., the SKILL.md example uses threshold=100000 while manifests/examples use 120000), a suspicious-looking discount_rate=0.45 in one QALY snippet (likely a typo for the recommended ~0.03), and some documentation paths (e.g., .codebuddy, xstpe/data/parameters.py, a Windows path in the manifest) that appear to be leftover examples rather than required runtime behavior. These are quality issues, not scope creep or exfiltration instructions.
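The threshold inconsistency matters because the willingness-to-pay threshold is the cut-off in the cost-effectiveness decision rule; the hypothetical snippet below (the function name and the 110,000 RMB ICER are illustrative) shows how the two documented values can flip the conclusion.

def is_cost_effective(icer_value, wtp_threshold):
    # An intervention is deemed cost-effective when its ICER does not exceed
    # the willingness-to-pay threshold.
    return icer_value <= wtp_threshold

icer_value = 110_000  # RMB per QALY, illustrative
print(is_cost_effective(icer_value, 100_000))  # False with SKILL.md's example threshold
print(is_cost_effective(icer_value, 120_000))  # True with the manifest/example threshold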
Install Mechanism
ok · No install spec is provided (instruction-only install), and the manifest suggests standard Python dependencies (numpy, pandas, matplotlib). No network/download/install URLs or archive extraction are present in the package metadata.
Credentials
ok · The skill requests no environment variables, no credentials, and no config paths. All computations are local and consistent with the stated purpose; there are no requests for unrelated secrets or system access.
Persistence & Privilege
ok · The always flag is false and disable-model-invocation is left at its default (the agent may invoke the skill autonomously, which is normal). The package does not declare any special persistent privileges, nor does it modify other skills or system-wide settings.