Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

virtual-cell-reasoner

v0.0.1

Consult a virtual cell language model on single-cell tasks — cell generation, cell understanding, cell perturbation, and biology Q&A using cell token sequences.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for wxuanyuan/virtual-cell-reasoner.

Prompt preview: Install & Setup
Install the skill "virtual-cell-reasoner" (wxuanyuan/virtual-cell-reasoner) from ClawHub.
Skill page: https://clawhub.ai/wxuanyuan/virtual-cell-reasoner
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: python
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install virtual-cell-reasoner

ClawHub CLI


npx clawhub@latest install virtual-cell-reasoner
Security Scan

VirusTotal: Pending (View report →)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill claims to consult a virtual cell LLM and includes a Python client that POSTs prompts to a chat endpoint, which is consistent with the stated purpose. However, the default SERVER_URL is a personal/ephemeral ngrok domain with no homepage or owner contact, and the package does not declare the Python 'requests' dependency in its metadata. The origin and trustworthiness of the service are unclear.
Instruction Scope
SKILL.md instructs the user or agent to run call_api.py, which transmits whatever prompt is provided to the remote /chat endpoint. This means any input (including sensitive biological data or credentials accidentally pasted) is sent to the external server. The instructions neither limit nor warn about this data transfer, and the default server is an off-platform ngrok URL, increasing exfiltration risk.
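To make the data-transfer concern concrete, here is a minimal sketch of what a client like call_api.py plausibly does. The payload keys, response shape, and URL are assumptions, not taken from the skill's source; it is shown with stdlib urllib, whereas the bundled client reportedly uses the third-party 'requests' package.

```python
import json
import urllib.request

# Placeholder only; the real skill hardcodes a personal ngrok URL.
SERVER_URL = "https://example.invalid"

def build_payload(prompt, max_tokens=2048, temperature=0.7):
    # Note: the full prompt text is transmitted verbatim off-machine.
    return {"prompt": prompt, "max_tokens": max_tokens, "temperature": temperature}

def chat(prompt, url=SERVER_URL):
    # POST the prompt as JSON to the remote /chat endpoint (assumed path).
    req = urllib.request.Request(
        f"{url}/chat",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read()).get("response", "")
```

Anything passed as `prompt` leaves your machine, which is why the scan flags exfiltration risk.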
Install Mechanism
There is no install spec: the skill is instruction-only, plus a single included Python client. No archives or third-party install URLs are used, which lowers install-time risk. Note that runtime requires the 'requests' Python package, which is not listed in the skill metadata.
Credentials
The skill does not request any environment variables, credentials, or config paths, which is proportionate for an LLM client. However, because the client posts data to a hardcoded external endpoint, the absence of declared credentials does not eliminate the risk that sensitive input could be exfiltrated to an untrusted service.
Persistence & Privilege
The skill does not request persistent privileges, does not set always:true, and does not modify system or other skill configs. It only contains a client script and runtime instructions.
What to consider before installing
This skill will send any prompt you give it to a hardcoded external server hosted on an ngrok domain. Before installing or using it, consider:

1. Do not send sensitive or proprietary biological data, or any credentials, to this skill.
2. Ask the publisher for the service's provenance, privacy policy, and a stable, official endpoint (ngrok defaults are often personal/ephemeral).
3. If you need this functionality, prefer a skill that points to a verified, documented server, or run your own trusted service and pass its URL via --url.
4. Ensure your runtime has the Python 'requests' package installed.
5. If you must test it, run it in an isolated environment and monitor network traffic to the endpoint.

If you cannot obtain trustworthy information about the remote service, avoid using the skill with real or sensitive data.
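If you want to follow the advice about running your own trusted service, a stub like the following could stand in for the remote endpoint during testing. The /chat path and JSON shape are assumptions about the skill's protocol; a real replacement would run a model instead of echoing.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def make_reply(payload: dict) -> dict:
    # Echo the prompt back; a real service would generate a model response.
    return {"response": f"stub reply to: {payload.get('prompt', '')}"}

class ChatHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/chat":  # assumed endpoint path
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(make_reply(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve(port=8000):
    # Blocks; run in a terminal, then point the client at it via
    #   python call_api.py --url http://127.0.0.1:8000 "..."
    HTTPServer(("127.0.0.1", port), ChatHandler).serve_forever()
```

Running against a local stub first lets you observe exactly what the client sends before any data reaches an external host.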

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🧬 Clawdis
Bins: python
Latest: vk97e8bwgp0etmzzc19sgg0a7dd83rcd2
90 downloads · 0 stars · 1 version
Updated 1mo ago
v0.0.1 · MIT-0

Virtual Cell LLM

A language model for single-cell biology. Cells are encoded as token sequences:

A-126 B-090 C-058 D-133 E-074 F-053
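The token sequence above appears to follow a simple letter-dash-number pattern. A small parser sketch, assuming that format (the letter prefixes and numeric width are inferred from the examples, not documented):

```python
import re

# Assumed token shape: one uppercase letter, a dash, then digits (e.g. "A-126").
TOKEN_RE = re.compile(r"^([A-Z])-(\d+)$")

def parse_cell_tokens(seq: str):
    """Split a whitespace-separated cell token sequence into (channel, value) pairs."""
    pairs = []
    for tok in seq.split():
        m = TOKEN_RE.match(tok)
        if not m:
            raise ValueError(f"malformed cell token: {tok!r}")
        pairs.append((m.group(1), int(m.group(2))))
    return pairs

# parse_cell_tokens("A-126 B-090") -> [("A", 126), ("B", 90)]
```

A check like this can validate model output (e.g. from the cell-generation prompt below) before feeding it into downstream analysis.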

Quick start

# Generate a cell from features
python call_api.py "Given the following cell features:\n\n[CellType] neuron\n\nProvide a cell language token sequence.\nReturn ONLY the tokens.\nNo extra text."

# Understand a cell from tokens
python call_api.py "The following tokens encode a cell\n\nA-168 B-090 C-005 D-069 E-232 F-196\nConstruct a requested feature summary based on this encoding:\n\n[CellType][Identity-Associated TFs][General TFs][Ligands][Receptors][TopExpressedGene]\nReturn ONLY canonical feature entries.\nIf a feature cannot be inferred from the tokens, use \"NA\".\n\nDo not provide additional text."

# Ask biology questions
python call_api.py "What transcription factors are associated with T cell identity?"

Useful flags

  • --max-tokens N Max new tokens to generate (default: 2048)
  • --temperature F Sampling temperature (default: 0.7; lower = more deterministic)
  • --url URL Override server base URL (default: ngrok endpoint)
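The flags above could be wired up with argparse roughly as follows; the flag names and defaults come from the listing, while everything else is an assumption about call_api.py's internals.

```python
import argparse

def build_parser():
    # Hypothetical sketch mirroring the documented CLI of call_api.py.
    p = argparse.ArgumentParser(description="Query the virtual cell LLM server")
    p.add_argument("prompt", help="prompt text sent verbatim to the server")
    p.add_argument("--max-tokens", type=int, default=2048,
                   help="max new tokens to generate")
    p.add_argument("--temperature", type=float, default=0.7,
                   help="sampling temperature; lower is more deterministic")
    p.add_argument("--url", default=None,
                   help="override the server base URL "
                        "(recommended: point at your own trusted service)")
    return p
```

Passing --url explicitly on every call is the safest way to use this skill, since it avoids the hardcoded ngrok default.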
