Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Vibe Research

v1.0.0

Conduct AI-led research with autonomous literature review, hypothesis generation, analysis, and synthesis while human provides vision.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for jirboy/vibe-research-cn.

Prompt preview: Install & Setup
Install the skill "Vibe Research" (jirboy/vibe-research-cn) from ClawHub.
Skill page: https://clawhub.ai/jirboy/vibe-research-cn
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install vibe-research-cn

ClawHub CLI


npx clawhub@latest install vibe-research-cn

Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's name, description, and instructions consistently describe an autonomous literature-review and analysis pipeline. Asking the agent to run web_search/web_fetch and assemble reproducibility artifacts is appropriate for the stated purpose. However, the README and other files claim 'No external dependencies' and 'Works offline,' which contradicts the repeated use of network fetch tools and the mandated retrieval of 100+ sources; this inconsistency should be resolved before use.
Instruction Scope
SKILL.md, pipeline.md, example.md and quickref instruct the agent to perform broad web_search and web_fetch cycles (count=20, 100+ sources minimum), to 'pull additional sources' proactively, and to 'execute ALL cycles' autonomously. The instructions do not tell the agent to read unrelated local files or environment secrets, which is good, but they grant broad discretion to fetch and aggregate large amounts of external content—this is expected for research but increases privacy and data-exfiltration risk if the agent is given access to sensitive inputs (e.g., PHI). The offline/no-external-dependency claim in README conflicts with these explicit network operations.
Install Mechanism
Instruction-only skill with no install spec, no binaries, and no code files. No on-disk installation or archive downloads are requested, which is low-risk from an install vector perspective.
Credentials
The skill declares no required environment variables, no credentials, and no config paths. That aligns with the skill's claimed behavior (uses platform search/fetch tools). There are no obvious requests for unrelated secrets or platform tokens.
Persistence & Privilege
always:false and default autonomous invocation are set. The skill does not request permanent presence or elevated privileges, nor does it instruct modifying other skills or global agent settings.
What to consider before installing
This skill appears to be a legitimate autonomous research workflow, but there are documentation contradictions and operational risks you should address before installing:

  • Confirm network requirements: the README claims 'Works offline' and 'No external dependencies,' but the instructions repeatedly call web_search/web_fetch and mandate fetching many sources. Ask the skill author or maintainer whether the agent will actually perform online searches and whether it can function without network access.
  • Data-sensitivity check: avoid giving the agent any sensitive inputs (PHI, proprietary documents) unless you fully trust how the platform's web_search/web_fetch tools handle and store fetched data; the skill is designed to proactively pull external sources, which could cause unintended data transmission.
  • Human checkpoints: enforce the three mandatory stop points in practice (clarification, plan approval, final report) and consider adding explicit confirmation steps before starting large automated fetch/analysis cycles.
  • Reproducibility/storage: clarify where the 'reproducibility package' and fetched sources are stored, how long they are retained, and who can access them.
  • Rate/resource controls: because the skill instructs many web searches and fetches, limit or monitor the number of cycles and network requests to avoid unexpected bandwidth costs or scraping of third-party sites.
  • Validate citations and provenance in outputs: the skill mandates APA citations and cross-verification; spot-check early outputs for proper sourcing and accuracy.

If the author can explain or resolve the offline/no-dependency claims and confirm safe handling of fetched data, the inconsistencies would be addressed. Until then, proceed cautiously and avoid exposing sensitive data to the skill.


Runtime requirements

🔬 Clawdis
OS: Linux · macOS · Windows
Latest: vk97190kxp4zp7sdths84exnw1n851w83
52 downloads · 0 stars · 1 version
Updated 1w ago
v1.0.0 · MIT-0

When to Use

User has a research question or knowledge gap. Agent takes ownership of the full research cycle: scanning literature, generating hypotheses, running analyses, synthesizing findings. Human provides direction and oversight, AI executes.

Quick Reference

| Topic | File |
| --- | --- |
| Research pipeline | pipeline.md |
| Risk mitigation | risks.md |

Core Concept

Traditional research: human-led, human-executed
Deep research: human-led, AI-assisted
Vibe research: human-directed, AI-led

The human sets the question and validates outputs. The agent handles literature synthesis, hypothesis generation, data analysis, and write-up autonomously.

Core Rules

1. Full-Cycle Ownership

Agent executes the complete pipeline:

  1. Gap identification — What's unknown or contested?
  2. Literature synthesis — Scan, summarize, cross-reference sources
  3. Hypothesis generation — Propose testable claims
  4. Analysis design — Define methodology
  5. Execution — Run analyses, gather data
  6. Synthesis — Write findings with citations
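
The six stages above, together with the mandatory human stop points the security scan describes, can be sketched as a simple checkpointed loop. This is an illustrative sketch only; the stage names mirror the list above, and the checkpoint placement (plan approval before execution, review of the final report) is an assumption drawn from the scan's "three mandatory stop points," not from the skill's own files:

```python
# Illustrative pipeline sketch: each stage runs in order, and stages marked
# True pause for human confirmation before the workflow may continue.
PIPELINE = [
    ("gap_identification", False),
    ("literature_synthesis", False),
    ("hypothesis_generation", False),
    ("analysis_design", True),   # human approves the methodology/plan
    ("execution", False),
    ("synthesis", True),         # human reviews the final report
]

def run_pipeline(execute_stage, confirm):
    """Run each stage; stop early if a human checkpoint is not approved."""
    for stage, needs_approval in PIPELINE:
        result = execute_stage(stage)
        if needs_approval and not confirm(stage, result):
            return f"stopped at {stage}"
    return "complete"
```

The point of the structure is that autonomy is bounded: the agent owns execution, but the loop cannot pass a checkpoint without an explicit human decision.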

2. Vision from Human, Execution from Agent

  • Human provides: research question, domain constraints, success criteria
  • Agent handles: reading papers, connecting ideas, running experiments, drafting
  • Human validates: key decisions, final outputs, methodology choices

3. Transparent Reasoning

  • Cite every claim: source, page, quote
  • Show reasoning chain for hypotheses
  • Log all analytical steps for reproducibility
  • Flag confidence levels (high/medium/low)
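
One way to make these rules concrete is to give every claim a structured citation record. The shape below is hypothetical (the skill does not prescribe a data format); field names and the rendered layout are illustrative choices that capture the source/page/quote, confidence flag, and the "Source X says..." vs "I infer..." distinction:

```python
# Hypothetical per-claim citation record implementing the transparency rules:
# source, page, and verbatim quote for every claim, plus confidence and
# whether the claim is stated by a source or inferred by the agent.
from dataclasses import dataclass

@dataclass
class Citation:
    claim: str
    source: str          # e.g. an APA-style reference string
    page: int
    quote: str           # verbatim supporting passage
    confidence: str      # "high" | "medium" | "low"
    inferred: bool       # True for "I infer...", False for "Source X says..."

    def render(self) -> str:
        tag = "inferred" if self.inferred else "stated"
        return f"[{self.confidence}/{tag}] {self.claim} ({self.source}, p. {self.page})"
```

Logging records like this per claim gives reviewers a reproducible reasoning trail to spot-check.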

4. Proactive Gap Detection

Don't wait for instructions. When analyzing a topic:

  • Identify contradictions in literature
  • Spot under-explored areas
  • Suggest follow-up experiments if results are ambiguous
  • Pull additional sources when context is insufficient

5. Hallucination Prevention

  • Only claim what sources support
  • Distinguish: "Source X says..." vs "I infer..."
  • When uncertain, say so explicitly
  • Cross-verify critical facts across multiple sources
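
The cross-verification rule can be sketched as a minimal support check: a critical fact is accepted only when enough independent sources back it. The two-source threshold and the source representation are illustrative assumptions, not requirements stated by the skill:

```python
# Minimal sketch of cross-verification: accept a critical fact only when at
# least `min_support` independent sources contain it (threshold illustrative).
def cross_verify(fact, sources, min_support=2):
    """Return (verified, supporting_sources) for a given fact."""
    supporting = [s for s in sources if fact in s["claims"]]
    return len(supporting) >= min_support, supporting
```

A fact that fails the check should be reported with an explicit uncertainty flag rather than asserted.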

Vibe Research Traps

  • Treating AI output as ground truth → always require human validation of key findings
  • Skipping methodology transparency → document every step for reproducibility
  • Overwhelming human with raw output → synthesize into actionable insights
  • Losing the human's analytical skills → keep them engaged in critical thinking
