Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

AutoResearchClaw Integration

v1.0.0

Integrates AutoResearchClaw to autonomously generate conference-ready academic papers from user research topics with real citations and experimental code.

0 stars · 385 downloads · 0 current · 0 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for nffdasilva/autoresearchclaw-integration.

Prompt preview: Install & Setup
Install the skill "AutoResearchClaw Integration" (nffdasilva/autoresearchclaw-integration) from ClawHub.
Skill page: https://clawhub.ai/nffdasilva/autoresearchclaw-integration
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install autoresearchclaw-integration

ClawHub CLI


npx clawhub@latest install autoresearchclaw-integration
Security Scan
VirusTotal: Suspicious (view report →)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name and description (autonomous research → paper with code/citations) align with the instructions: the SKILL.md explicitly instructs cloning, installing, and running an AutoResearchClaw pipeline that performs literature search, experiments, and paper generation. The capabilities requested (LLM keys, web fetch, messaging) are plausible for the stated purpose.
Instruction Scope
Runtime instructions direct the agent to git-clone a GitHub repo, create a virtualenv, pip-install the project, run a 23-stage autonomous pipeline with --auto-approve, and optionally enable cron, messaging (Discord/Slack/Telegram), cross-session memory, and web fetch. That scope allows executing arbitrary downloaded code, generating network traffic, scheduling recurring jobs, and persisting data, all of which goes well beyond a simple 'paper-writing helper' and could exfiltrate data or run unexpected experiments if the repo is malicious or buggy.
Install Mechanism
There is no vetted install spec in the registry; SKILL.md instructs cloning https://github.com/aiming-lab/AutoResearchClaw.git and pip installing it in editable mode. Downloading and installing unpinned code from a third-party GitHub repo is a high-risk install pattern (no pinned commit or checksum), because it writes and executes arbitrary code on the user's machine.
Credentials
Registry metadata declares no required env vars, but the instructions ask for LLM API keys (OPENAI_API_KEY or ACP credentials) and optionally Semantic Scholar or messaging credentials. Requesting these keys is plausible for the tool's features, but the mismatch with the declared requirements, plus the potential need for additional messaging and cron credentials (Discord/Slack/Telegram tokens), increases risk and attack surface. The skill also encourages enabling persistent memory and scheduled runs, which will retain and reuse credentials and outputs.
Persistence & Privilege
The skill itself is not marked always:true, but it instructs enabling features that grant persistent presence (cron scheduled runs, cross-session memory, messaging notifications, spawning sub-sessions, writing to ~/.metaclaw/skills). Those behaviors introduce longer-term persistence and automated execution beyond a single interactive run and should be treated cautiously because they expand the blast radius if the installed code is malicious or vulnerable.
What to consider before installing
This skill will clone and install a third-party GitHub project and then run an autonomous pipeline that may execute arbitrary code, use your LLM API keys, access the web, and create scheduled jobs or persistent memory. Before installing:

  1. Review the GitHub repo and pin to a known-good commit (see the sketch below).
  2. Run the install inside an isolated environment (container or disposable VM) rather than on your main host.
  3. Do not hand over high-privilege or broad API keys; use least-privilege or test keys where possible.
  4. Avoid enabling cron/web-fetch/messaging/use_memory until you have inspected the code and understood what data is sent externally.
  5. Verify licensing, maintainer identity, and whether the project has reproducible releases and checksums.
  6. If you need assurance, ask the skill author for a homepage, maintainer contact, or an audit of the repository.

If you are not comfortable reviewing or sandboxing the code, do not install it.
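
A minimal sketch of a more cautious install, assuming you have already reviewed the repository and chosen a commit to pin; PINNED_COMMIT is a placeholder, not a vetted revision:

git clone https://github.com/aiming-lab/AutoResearchClaw.git
cd AutoResearchClaw
git checkout "$PINNED_COMMIT"   # placeholder: a commit hash you have reviewed
python3 -m venv .venv           # keep the install confined to a throwaway virtualenv,
source .venv/bin/activate       # or run everything inside a container/VM instead
pip install -e .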

Like a lobster shell, security has layers — review code before you run it.

latest: vk97fm558cawrkvdbby457f96gh835z8n
385 downloads
0 stars
1 version
Updated 1d ago
v1.0.0
MIT-0

ResearchClaw

AutoResearchClaw is a fully autonomous 23-stage research pipeline that transforms a single research idea into a conference-ready academic paper with real literature from OpenAlex, Semantic Scholar, and arXiv.

Quick Start

Basic Usage

User says: "Research [topic]"

Agent workflow:

  1. Check if AutoResearchClaw is installed (which researchclaw)
  2. If not installed: clone, setup venv, install with pip install -e .
  3. Copy config.researchclaw.example.yaml to config.arc.yaml (see the command sketch after this list)
  4. Ask user for LLM provider choice (OpenAI-compatible or ACP agent)
  5. Configure with API keys or ACP agent selection
  6. Run: researchclaw run --topic "[topic]" --auto-approve
  7. Monitor progress, return results from artifacts/rc-*/deliverables/
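
If researchclaw is already on the PATH, steps 3-6 reduce to a handful of commands. A sketch, assuming the example config filename shipped in the repository; the exported key is a placeholder:

cd ~/AutoResearchClaw
cp config.researchclaw.example.yaml config.arc.yaml      # step 3
export OPENAI_API_KEY="sk-..."                           # steps 4-5: or configure an ACP agent instead
researchclaw run --config config.arc.yaml --topic "[topic]" --auto-approve   # step 6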

Configuration

Ask user for LLM backend preference:

Option 1: OpenAI-compatible API

llm:
  provider: "openai-compatible"
  base_url: "https://api.openai.com/v1"
  api_key_env: "OPENAI_API_KEY"  # or ask for key
  primary_model: "gpt-4o"
  fallback_models: ["gpt-4o-mini"]

Option 2: ACP Agent (Claude Code, Codex, Gemini)

llm:
  provider: "acp"
  acp:
    agent: "claude"  # or "codex", "gemini", etc.
    cwd: "."

Installation

Check Installation

which researchclaw || echo "Not installed"

Install AutoResearchClaw

cd ~
git clone https://github.com/aiming-lab/AutoResearchClaw.git
cd AutoResearchClaw
python3 -m venv .venv
source .venv/bin/activate
pip install -e .

Verify Installation

researchclaw --version

Running Research

Basic Command

researchclaw run --topic "Your research idea" --auto-approve

With Specific Config

researchclaw run --config config.arc.yaml --topic "Your research idea" --auto-approve

Output Location

Results in: ~/AutoResearchClaw/artifacts/rc-YYYYMMDD-HHMMSS-<hash>/deliverables/
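
To locate the newest run directory without typing the full timestamped name (assuming the default artifacts layout above), something like:

LATEST_RUN=$(ls -dt ~/AutoResearchClaw/artifacts/rc-*/ | head -n 1)   # most recently modified run
ls "$LATEST_RUN/deliverables"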

Deliverables

After completion, the agent should:

  1. Check deliverables/ directory contents
  2. Present key outputs:
    • paper.tex - Conference-ready LaTeX
    • paper_draft.md - Markdown paper
    • references.bib - Real citations
    • verification_report.json - Citation integrity check
    • runs/ - Experimental code and results
    • charts/ - Generated figures
    • reviews.md - Multi-agent peer review
  3. Copy/present relevant sections to user
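
A sketch of steps 1 and 2 from the shell, assuming jq is available for the JSON report (python3 -m json.tool works as a fallback) and $LATEST_RUN was set as shown earlier:

cd "$LATEST_RUN/deliverables"
ls -R .                                # step 1: everything the run produced
jq . verification_report.json | head   # step 2: skim the citation-integrity report
head -n 40 paper_draft.md              # preview the Markdown draft before presenting it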

Pipeline Stages (23 Total)

Phase A: Research Scoping

  • Stage 1: TOPIC_INIT
  • Stage 2: PROBLEM_DECOMPOSE

Phase B: Literature Discovery

  • Stage 3: SEARCH_STRATEGY
  • Stage 4: LITERATURE_COLLECT
  • Stage 5: LITERATURE_SCREEN [gate]
  • Stage 6: KNOWLEDGE_EXTRACT

Phase C: Knowledge Synthesis

  • Stage 7: SYNTHESIS
  • Stage 8: HYPOTHESIS_GEN

Phase D: Experiment Design

  • Stage 9: EXPERIMENT_DESIGN [gate]
  • Stage 10: CODE_GENERATION
  • Stage 11: RESOURCE_PLANNING

Phase E: Experiment Execution

  • Stage 12: EXPERIMENT_RUN
  • Stage 13: ITERATIVE_REFINE
  • Stage 14: RESULT_ANALYSIS
  • Stage 15: RESEARCH_DECISION

Phase F: Analysis & Decision

  • Stage 16: PAPER_OUTLINE
  • Stage 17: PAPER_DRAFT
  • Stage 18: PEER_REVIEW
  • Stage 19: PAPER_REVISION

Phase G: Paper Writing

  • Stage 20: QUALITY_GATE [gate]
  • Stage 21: KNOWLEDGE_ARCHIVE
  • Stage 22: EXPORT_PUBLISH
  • Stage 23: CITATION_VERIFY

Hardware Awareness

AutoResearchClaw auto-detects:

  • NVIDIA CUDA (GPU)
  • Apple MPS (M1/M2/M3)
  • CPU-only fallback

Adapts code generation, imports, and experiment scale accordingly.
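
The detection happens inside the pipeline; if you want to sanity-check what it is likely to find on your machine, a rough probe (generic shell commands, not part of AutoResearchClaw):

nvidia-smi -L 2>/dev/null || echo "no CUDA GPU visible"                   # NVIDIA check
[ "$(uname -sm)" = "Darwin arm64" ] && echo "Apple Silicon (MPS likely)"  # Apple Silicon check
# if neither matches, expect the CPU-only fallback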

Quality Features

  • Real Citations: OpenAlex, Semantic Scholar, arXiv - no hallucinated references
  • 4-Layer Verification: arXiv ID → CrossRef DOI → Semantic Scholar → LLM relevance
  • Multi-Agent Debate: Hypothesis generation, result analysis, peer review
  • Self-Healing: NaN/Inf detection, automatic code repair
  • Conference Templates: NeurIPS, ICLR, ICML support

OpenClaw Bridge Integration (Optional)

Enable in config.arc.yaml:

openclaw_bridge:
  use_cron: true          # Scheduled research runs
  use_message: true       # Progress notifications (Discord/Slack/Telegram)
  use_memory: true        # Cross-session knowledge persistence
  use_sessions_spawn: true # Parallel sub-sessions
  use_web_fetch: true     # Live web search during literature review
  use_browser: false      # Browser-based paper collection

MetaClaw Integration (Optional)

For cross-run learning:

metaclaw_bridge:
  enabled: true
  skills_dir: "~/.metaclaw/skills"
  lesson_to_skill:
    enabled: true
    min_severity: "warning"
    max_skills_per_run: 5

Troubleshooting

Installation Issues

# Check Python version
python3 --version  # Requires 3.8+

# Install dependencies
pip install -r requirements.txt

LLM API Errors

  • Verify OPENAI_API_KEY is set
  • Check that the API endpoint is accessible
  • Confirm fallback models are configured correctly
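
A quick way to check the first two items for an OpenAI-compatible backend (the endpoint matches the base_url from the example config; adjust it for other providers):

[ -n "$OPENAI_API_KEY" ] && echo "OPENAI_API_KEY is set" || echo "OPENAI_API_KEY is missing"
# 200 means the endpoint is reachable and accepts the key
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  https://api.openai.com/v1/models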

Sandbox Issues

  • Ensure Python path is correct: .venv/bin/python
  • Check allowed imports in config
  • Adjust memory limits if needed
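
To confirm the interpreter in use is actually the project virtualenv (paths assume the install location from the Installation section):

~/AutoResearchClaw/.venv/bin/python --version                   # should print Python 3.8+
source ~/AutoResearchClaw/.venv/bin/activate && which python    # should point into .venv/bin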

Literature Collection Failures

  • Check internet connectivity
  • Semantic Scholar API key optional (higher rate limits)
  • OpenAlex should work without API key
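
A rough connectivity check against the two main sources; both URLs are their public APIs and the query is only an example:

curl -s "https://api.openalex.org/works?search=graph+neural+networks" | head -c 300; echo   # no key needed
curl -s "https://api.semanticscholar.org/graph/v1/paper/search?query=graph+neural+networks&limit=1" | head -c 300; echo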

Advanced Usage

Specify Research Domains

researchclaw run --topic "Your topic" --domains ml,nlp --auto-approve

Target Specific Conference

export:
  target_conference: "neurips_2025"  # neurips_2025 | iclr_2026 | icml_2026

Custom Prompts

prompts:
  custom_file: "custom_prompts.yaml"

Resources

Comparison with Superpowers

  • ResearchClaw: Academic research, literature review, paper writing, experimental validation
  • Superpowers: Software development, TDD, code review, production code

Use ResearchClaw for research and paper generation; use Superpowers for production software implementation. They complement each other when you research a topic and then implement the findings.
