Skill v0.1.0

ClawScan security

In Silico Perturbation Oracle · ClawHub's context-aware review of the artifact, metadata, and declared behavior.

Scanner verdict

Reviewed: Mar 13, 2026, 9:43 AM
Verdict: Review
Confidence: medium
Model: gpt-5-mini
Summary
The skill's purpose (in‑silico gene knockout predictions) is plausible, but there are multiple coherence issues in its instructions and configs (missing declared env vars, packaging/import mismatches, and simulated behavior unless heavy model packages are installed) that warrant caution before installing or running it.
Guidance
Before installing or running this skill, consider the following:

- Clarify MODEL_DIR and packaging: the configs reference ${MODEL_DIR}, but the skill does not declare that environment variable or say where to place model weights. Decide where large pretrained weights will live and set MODEL_DIR accordingly.
- Verify install commands and package names: SKILL.md instructs users to pip install geneformer and scgpt, but requirements.txt omits them, and the code ships as a script rather than an importable package. Confirm whether an importable package is provided or whether you should run the script directly.
- Expect simulated behavior unless you obtain the real foundation models: the bundled code falls back to mock/simulated outputs when model packages/weights are absent. Do not treat those simulated outputs as biologically validated results.
- Model weight downloads and resource usage: installing and running the real models will likely download large weights and consume significant CPU/GPU, disk, and network. Review where the weights are fetched from, and prefer offline/local model storage if data exfiltration or bandwidth are concerns.
- Security and bioethics: this tool produces hypotheses about gene perturbations that could influence biological research. It should not be used directly for clinical decisions. Review institutional policies and biosafety/ethics rules before using outputs to plan wet-lab experiments.
- Run in an isolated environment: use a dedicated virtualenv or container and inspect pip packages before installation. Audit the full main.py to confirm there are no hidden network calls or unexpected file reads/writes (the provided manifest appears local-only, but the full script should be reviewed).

If these coherence issues (the MODEL_DIR omission, the packaging/import mismatch, the simulated fallbacks) are resolved and you confirm the model sources, the skill's footprint is consistent with its stated purpose. Otherwise, treat it with caution.
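The preflight checks above can be automated. The sketch below is illustrative, not part of the skill: the function name is hypothetical, but the two checks it performs (an undeclared MODEL_DIR and missing geneformer/scgpt packages) are exactly the gaps this review identifies.

```python
import importlib.util
import os

def check_oracle_environment() -> list:
    """Collect the preconditions this review says to verify before trusting output."""
    issues = []
    # The skill's configs reference ${MODEL_DIR}, but the registry never
    # declares it; surface that gap explicitly.
    if not os.environ.get("MODEL_DIR"):
        issues.append("MODEL_DIR is unset, but configs reference ${MODEL_DIR}")
    # SKILL.md says to pip-install these model packages; requirements.txt
    # omits them, and without them the code falls back to simulated output.
    for pkg in ("geneformer", "scgpt"):
        if importlib.util.find_spec(pkg) is None:
            issues.append(f"{pkg} missing: expect mock/simulated outputs only")
    return issues

print(check_oracle_environment())
```

An empty list from a check like this is the minimum bar before interpreting any prediction the skill emits as coming from a real foundation model.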

Review Dimensions

Purpose & Capability
note: The declared purpose (virtual KO using Geneformer/scGPT) matches the code and configs: adapters for geneformer and scGPT are present and the tool simulates perturbations. However, SKILL.md and the examples present the project as an importable Python package (e.g., 'from in_silico_perturbation_oracle import PerturbationOracle') even though the repository only includes scripts/main.py and no packaging metadata; requirements.txt lists many dependencies but omits the model-specific packages (geneformer, scgpt) that SKILL.md tells users to pip-install. This mismatch between how users are instructed to use the skill and the actual artifact is an incoherence.
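The packaging mismatch is easy to confirm locally. This hedged check assumes the package name SKILL.md advertises; since no packaging metadata ships with the artifact, the advertised import cannot resolve.

```python
import importlib.util

# SKILL.md advertises `from in_silico_perturbation_oracle import PerturbationOracle`,
# but the artifact ships only scripts/main.py with no packaging metadata.
spec = importlib.util.find_spec("in_silico_perturbation_oracle")
if spec is None:
    # The advertised import cannot work; the shipped script is the only entry point.
    print("no installable package found; run scripts/main.py directly")
```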
Instruction Scope
concern: Runtime instructions ask users to install third-party model packages and run scripts/main.py, which is consistent, but the docs rely on an environment-variable placeholder (${MODEL_DIR}) for model paths without declaring or requiring MODEL_DIR. SKILL.md advertises 'production ready' and full analyses, yet the bundled code contains mock/simulated implementations that activate when the models are not installed — meaning the skill will produce naive random/simulated outputs unless the large foundation models are present. The instructions also imply a package API that the file layout does not provide. These gaps grant broad discretion (e.g., where to obtain model weights) and could mislead users about the fidelity of predictions.
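The silent-fallback concern follows a common try/except-ImportError pattern. The sketch below is illustrative — the names are hypothetical and not taken from the skill's main.py — but it shows why the outputs can look plausible while carrying no biological signal.

```python
import random

# Illustrative fallback pattern of the kind the review describes.
try:
    import geneformer  # real foundation model, if installed
    HAVE_MODEL = True
except ImportError:
    HAVE_MODEL = False  # code silently degrades to simulated predictions

def predict_knockout_effect(gene: str) -> float:
    """Return a perturbation score; simulated when the model is absent."""
    if HAVE_MODEL:
        raise NotImplementedError("real model inference not shown in this sketch")
    # Deterministic pseudo-random score: plausible-looking, biologically void.
    rng = random.Random(gene)
    return rng.random()

print(predict_knockout_effect("TP53"))
```

Because the fallback is deterministic per gene, repeated runs reproduce the same numbers, which can give simulated output a false air of reliability.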
Install Mechanism
note: There is no formal install spec in the registry (instruction-only install). SKILL.md suggests pip-installing common packages and the model packages (geneformer, scgpt). Installing via pip is standard, and no downloads from arbitrary URLs are present in the manifest. That said, the skill will likely require downloading large model weights at runtime (not managed by the skill manifest) — operationally significant, but not inherently malicious.
Credentials
concern: The skill declares no required environment variables, but the config files reference ${MODEL_DIR} for pretrained model paths. The absence of MODEL_DIR from requires.env is an inconsistency: the code/config expect an externally supplied path, but the registry does not surface that requirement. There are no credential requests, which is appropriate, but the undeclared MODEL_DIR and the potential for model packages to fetch weights from remote hosts are proportionality issues that should be clarified.
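The failure mode of an undeclared ${MODEL_DIR} can be demonstrated with Python's standard `string.Template` expansion. The config value below is hypothetical (the review only states that configs reference ${MODEL_DIR}), but the mechanics are the same: substitution raises an error the moment the variable is missing.

```python
import os
import string

# Hypothetical config value of the kind the review flags; real configs
# reference ${MODEL_DIR} for pretrained weight paths.
config_value = "${MODEL_DIR}/geneformer/pytorch_model.bin"

def resolve(value: str) -> str:
    # substitute() raises KeyError when MODEL_DIR is undeclared — exactly
    # the failure mode requires.env should surface before install time.
    return string.Template(value).substitute(os.environ)

os.environ["MODEL_DIR"] = "/opt/models"  # what a declared requires.env would enforce
print(resolve(config_value))  # -> /opt/models/geneformer/pytorch_model.bin
```

Declaring MODEL_DIR in requires.env would move this error from an opaque runtime crash to an explicit, pre-install requirement.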
Persistence & Privilege
ok: The skill does not request persistent/always-on presence, does not declare system-level privileges or modifications, and has no required config paths such as agent settings. 'always' is false; autonomy is enabled by default but is not combined with other red flags. No file writes beyond the expected outputs (results/) are indicated in the manifest.