Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

ResearchClaw

v1.0.0

Autonomous research pipeline skill for Claude Code. Given a research topic, orchestrates 23 stages end-to-end: literature review, hypothesis generation, expe...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for dongsheng123132/researchclaw.

Prompt preview (Install & Setup):
Install the skill "ResearchClaw" (dongsheng123132/researchclaw) from ClawHub.
Skill page: https://clawhub.ai/dongsheng123132/researchclaw
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install researchclaw

ClawHub CLI


npx clawhub@latest install researchclaw

Security Scan

VirusTotal: Suspicious (View report →)
OpenClaw: Suspicious (high confidence)

Purpose & Capability
The SKILL.md describes a full CLI and Python package (a researchclaw CLI and researchclaw.* modules) and a 23-stage pipeline that runs code and exports artifacts, but the registry entry contains no binaries, no code files, and no install spec. The skill also references needing an LLM API key, yet the manifest lists no required environment variables or primary credential. The packaging is incoherent: either the package is missing from the registry or the instructions expect tools that are not provided.
Instruction Scope
Runtime instructions tell the agent to create/modify config.yaml, read/write artifacts, execute generated experiment code via subprocess (sandbox) or run experiments remotely over SSH (ssh_remote), and to use an LLM API key from config or env. Those actions enable arbitrary code execution and remote commands. The SKILL.md does not limit or explicitly declare the credentials, SSH keys, Python interpreter, or packages required, nor does it constrain where outputs are sent.
Install Mechanism
There is no install spec and no code files. That reduces installation risk but is inconsistent with instructions that require a CLI binary and Python package. If the skill expects a preinstalled third-party package, that expectation should be declared. The absence of an install mechanism makes it unclear how the runtime components would be provided, creating a coherence problem and possible risk if users attempt to fetch/install artifacts from unknown sources to satisfy these instructions.
Credentials
The manifest lists no required env vars, yet the instructions require the user's LLM API key (in config.yaml or via an env var), reference experiment.sandbox.python_path (an interpreter with numpy installed), and support an ssh_remote mode, which implies SSH credentials. None of these secrets or keys are declared. Because the skill can execute generated code and run SSH commands, access to sensitive credentials or environments would be particularly impactful; those privileges should be explicitly declared and justified.
Persistence & Privilege
The skill does not request permanent 'always' inclusion and has no install shim, which is appropriate. However, the pipeline supports an --auto-approve flag that bypasses human gates, and the skill may be invoked autonomously (the platform default). Combined with the ability to execute code and perform SSH remote runs, auto-approval plus autonomous invocation increases operational risk and should be constrained by the user.
What to consider before installing
This skill's docs describe a heavy, runnable pipeline (CLI + Python package) that executes generated code locally or over SSH and uses your LLM API key, but the published bundle contains only instructions and no code or install info. Before using:

  1. Do not run --auto-approve or ssh_remote unless you trust the source and control the remote host/keys.
  2. Ask the publisher for the canonical source repository, an install method (trusted package or GitHub release), and explicit declarations of required environment variables (LLM key name, SSH key usage).
  3. Inspect any config.yaml you create for embedded secrets, and avoid putting primary API keys in unencrypted files.
  4. If you test it, run in an isolated VM/container with no sensitive data and without network access to untrusted hosts.

If the author cannot provide source code or a trusted install artifact, treat this skill as unsafe to run.


Tags: academic, autonomous, latest, latex, paper-writing, pipeline, research
234 downloads · 0 stars · 1 version
Updated 22h ago
v1.0.0
MIT-0

ResearchClaw — Autonomous Research Pipeline Skill

Description

Run ResearchClaw's 23-stage autonomous research pipeline. Given a research topic, this skill orchestrates the entire research workflow: literature review → hypothesis generation → experiment design → code generation & execution → result analysis → paper writing → peer review → final export.

Trigger Conditions

Activate this skill when the user:

  • Asks to "research [topic]", "write a paper about [topic]", or "investigate [topic]"
  • Wants to run an autonomous research pipeline
  • Asks to generate a research paper from scratch
  • Mentions "ResearchClaw" by name

Instructions

Prerequisites Check

  1. Verify config file exists:
    ls config.yaml || ls config.researchclaw.example.yaml
    
  2. If no config.yaml, create one from the example:
    cp config.researchclaw.example.yaml config.yaml
    
  3. Ensure the user's LLM API key is configured in config.yaml under llm.api_key or via llm.api_key_env environment variable.

Running the Pipeline

Option A: CLI (recommended)

researchclaw run --topic "Your research topic here" --auto-approve

Options:

  • --topic / -t: Override the research topic from config
  • --config / -c: Config file path (default: config.yaml)
  • --output / -o: Output directory (default: artifacts/rc-YYYYMMDD-HHMMSS-HASH/)
  • --from-stage: Resume from a specific stage (e.g., PAPER_OUTLINE)
  • --auto-approve: Auto-approve gate stages (5, 9, 20) without human input

Option B: Python API

from researchclaw.pipeline.runner import execute_pipeline
from researchclaw.config import RCConfig
from researchclaw.adapters import AdapterBundle
from pathlib import Path

config = RCConfig.load("config.yaml", check_paths=False)
results = execute_pipeline(
    run_dir=Path("artifacts/my-run"),
    run_id="research-001",
    config=config,
    adapters=AdapterBundle(),
    auto_approve_gates=True,
)

# Check results
for r in results:
    print(f"Stage {r.stage.name}: {r.status.value}")

Option C: Iterative Pipeline (multi-round improvement)

from researchclaw.pipeline.runner import execute_iterative_pipeline

results = execute_iterative_pipeline(
    run_dir=Path("artifacts/my-run"),
    run_id="research-001",
    config=config,
    adapters=AdapterBundle(),
    max_iterations=3,
    convergence_rounds=2,
)

Output Structure

After a successful run, the output directory contains:

artifacts/<run-id>/
├── stage-1/                # TOPIC_INIT outputs
├── stage-2/                # PROBLEM_DECOMPOSE outputs
├── ...
├── stage-10/
│   └── experiment.py       # Generated experiment code
├── stage-12/
│   └── runs/run-1.json     # Experiment execution results
├── stage-14/
│   ├── experiment_summary.json  # Aggregated metrics
│   └── results_table.tex        # LaTeX results table
├── stage-17/
│   └── paper_draft.md      # Full paper draft
├── stage-22/
│   └── charts/             # Generated visualizations
│       ├── metric_trajectory.png
│       └── experiment_comparison.png
└── pipeline_summary.json   # Overall pipeline status
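A quick way to triage a finished run is to read pipeline_summary.json and list any stages that did not complete. The summary schema below (a "stages" list of name/status objects) is a guess based on the per-stage status values the docs mention, not a documented format:

```python
def failed_stages(summary: dict) -> list[str]:
    """Return the names of stages that did not complete.
    The summary schema here is assumed, not documented."""
    return [s["name"] for s in summary.get("stages", [])
            if s.get("status") != "completed"]

# Hypothetical summary contents for illustration only.
sample = {"stages": [{"name": "TOPIC_INIT", "status": "completed"},
                     {"name": "EXPERIMENT_RUN", "status": "failed"}]}
print(failed_stages(sample))  # ['EXPERIMENT_RUN']
```

In practice you would load the dict with json.loads from artifacts/<run-id>/pipeline_summary.json, then resume with --from-stage at the first failure.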

Experiment Modes

  • simulated: LLM generates synthetic results (no code execution). Config: experiment.mode: simulated
  • sandbox: Execute generated code locally via subprocess. Config: experiment.mode: sandbox
  • ssh_remote: Execute on a remote GPU server via SSH. Config: experiment.mode: ssh_remote
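For a sense of what sandbox mode most likely does (the scan notes it runs generated code "via subprocess"), here is a rough sketch under that assumption. Note the important caveat: a subprocess with a timeout is process isolation only, not a real sandbox, which is exactly why the scan flags this mode.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def run_in_sandbox(code: str, python_path: str = sys.executable,
                   timeout: int = 60) -> subprocess.CompletedProcess:
    """Sketch of sandbox mode: write the generated experiment to a
    temp directory and run it in a child process with a timeout.
    This is NOT real sandboxing -- the child has full user privileges."""
    with tempfile.TemporaryDirectory() as tmp:
        script = Path(tmp) / "experiment.py"
        script.write_text(code)
        return subprocess.run([python_path, str(script)],
                              capture_output=True, text=True,
                              timeout=timeout, cwd=tmp)

result = run_in_sandbox("print(2 + 2)")
print(result.stdout.strip())  # 4
```

If you do test generated code, run this inside an isolated VM or container as the advisory above recommends; the subprocess boundary alone does not restrict filesystem or network access.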

Troubleshooting

  • Config validation error: Run researchclaw validate --config config.yaml
  • LLM connection failure: Check llm.base_url and API key
  • Sandbox execution failure: Verify experiment.sandbox.python_path exists and has numpy installed
  • Gate rejection: Use --auto-approve or manually approve at stages 5, 9, 20
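The sandbox-failure check above (does experiment.sandbox.python_path exist and have numpy?) can be automated with a small probe. This is a generic helper, not part of the researchclaw package:

```python
import subprocess
import sys

def interpreter_has_module(python_path: str, module: str) -> bool:
    """Return True if `<python_path> -c "import <module>"` succeeds.
    For the troubleshooting step above, call it with "numpy" and the
    path from experiment.sandbox.python_path in config.yaml."""
    proc = subprocess.run([python_path, "-c", f"import {module}"],
                          capture_output=True)
    return proc.returncode == 0

# Probe the current interpreter for a stdlib module as a smoke test.
print(interpreter_has_module(sys.executable, "json"))  # True
```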

Tools Required

  • File read/write (for config and artifacts)
  • Bash (for CLI execution)
  • No external MCP servers required for basic operation
