Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

MARL — Multi-stage Reasoning Middleware

Multi-stage multi-agent reasoning middleware that reduces LLM hallucination by 70%+. 9 specialized emergence engines for invention, creative, pharma, genomic...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
6 · 167 · 0 current installs · 0 all-time installs
by VIDRAFT @Cutechicken99
Security Scan
VirusTotal: Suspicious (view report)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The SKILL.md describes a local multi-stage middleware that sits between the agent and any LLM, which matches the configuration examples (setting baseURL to localhost). However, the registry metadata lists 'source: unknown / homepage: none' while the SKILL.md links to PyPI, GitHub, Docker Hub, and a website; this mismatch is noteworthy and should be verified. The claim that the core engine ships as 'compiled binaries' is plausible but cannot be verified from the skill bundle, which is instruction-only.
Instruction Scope
Instructions are limited and focused: they tell the user to run MARL locally (Docker, pip, or the HuggingFace Space) and point OpenClaw at a local baseURL. The SKILL.md does not instruct the agent to read unrelated files or environment variables. However, it states that 'your data never leaves your infrastructure' while also saying MARL makes API calls to the chosen LLM; if that LLM is a cloud service, user data does leave the host. That is a misleading privacy claim and an operational ambiguity the user should understand.
Install Mechanism
The registry bundle contains no install spec (instruction-only), but the README recommends running third-party artifacts: the Docker image vidraft/marl, the pip package marl-middleware, and a HuggingFace Space. Those external artifacts may execute arbitrary code, and the skill package provides no verification or hashes. Because running the Docker image or pip package is how the middleware is actually deployed, verify the Docker Hub, PyPI, and GitHub releases and their provenance before running them.
Credentials
The skill declares no required env vars or credentials, and SKILL.md does not request secrets. That is proportionate for an instruction-only skill; note though that the MARL service itself (outside this skill) will likely require API keys to call external LLMs — those credentials are not requested here but are necessary for operation if you use cloud LLMs.
Persistence & Privilege
The skill is not always-enabled and does not request elevated platform privileges. It's user-invocable and does not modify other skills or system-wide settings as presented.
What to consider before installing
This skill is an instruction-only wrapper that directs you to run a third-party MARL service (Docker image, pip package, or HuggingFace Space). Before installing or routing agent traffic through it:

  • Verify the upstream artifacts (Docker Hub image, PyPI package, GitHub repo and releases) and confirm they come from the claimed publisher.
  • Don't assume 'data never leaves your infrastructure': if you configure MARL to use cloud LLMs, your prompts and results will be sent to those providers.
  • Run the Docker/pip artifacts in an isolated environment (container or VM) first, inspect the source code on GitHub, and check PyPI/Docker release signatures or hashes if available.
  • Review the service's configuration for where it sends model queries (local vs. cloud).
  • If you plan to use sensitive domains (pharma, genomics, chemistry), treat the outputs and the service itself as higher-risk and perform additional review and auditing before production use.
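One concrete way to follow the "check hashes" advice: compute the SHA-256 of a downloaded artifact and compare it against the digest published on the release page. A minimal sketch using only the standard library (the expected digest you compare against must come from the publisher; none is shipped with this skill):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 16) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, expected_hex: str) -> bool:
    """True only when the file's digest matches the published value."""
    return sha256_of(path) == expected_hex.lower()
```

For a pip package, `pip download marl-middleware` fetches the wheel without installing it, so you can hash-check it first.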

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
Download zip
Plugin bundle (nix)
Skill pack · CLI binary · Config
Tags
emergence · hallucination · latest · llm · metacognition · middleware · multi-agent · reasoning

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Config example

Starter config for this plugin bundle.

llm:
  baseURL: "http://localhost:8080/v1"
  model: "gpt-5.4::create"

SKILL.md

MARL Enhance — Brain Upgrade for Your Agent

The 3rd approach after fine-tuning & RAG. MARL restructures how LLMs reason at runtime — not their weights. One line to integrate, 70%+ hallucination reduction, 9 domain-specific emergence engines.

PyPI · GitHub · Demo · FINAL Bench

What It Does

Before MARL: Your agent calls the LLM once → gets an answer (might hallucinate).

After MARL: Your agent calls MARL → MARL runs a multi-stage expert pipeline → hypothesis, solving, auditing, adversarial verification, synthesis → returns a deeply verified answer.

Your Agent → MARL → Multi-stage Pipeline → Any LLM → Verified Answer

Results: 70%+ hallucination reduction · 94.8% of improvement from self-correction · Verified on FINAL Bench (HuggingFace Global Top 5 dataset).
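The stage names above suggest a generic pattern: run the same question through several differently-prompted passes, each seeing the previous stage's output, then return the final synthesis. A toy sketch of that pattern only — not MARL's proprietary engine — where `llm` is any callable mapping a prompt to text:

```python
from typing import Callable

# Stage names taken from the pipeline description above.
STAGES = ["hypothesis", "solving", "auditing", "adversarial verification", "synthesis"]

def multi_stage(question: str, llm: Callable[[str], str]) -> str:
    """Chain stages: each prompt carries the question plus the prior stage's
    output; the synthesis stage's answer is returned."""
    context = ""
    for stage in STAGES:
        prompt = f"[{stage}] Question: {question}\nPrior: {context}"
        context = llm(prompt)
    return context
```

With a real backend, `llm` would wrap a chat-completion call against MARL's baseURL; here it can be any stub for experimentation.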

Setup

Option A: Docker (Recommended — all platforms)

docker run -p 8080:8080 vidraft/marl

Option B: pip (Linux x86_64)

pip install marl-middleware
python -m marl serve --port 8080

Option C: HuggingFace Space (No install — try instantly)

Use https://huggingface.co/spaces/VIDraft/MARL directly in your browser.
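Whichever option you pick, it helps to confirm the endpoint is actually listening before pointing OpenClaw at it. A small stdlib-only reachability check (it assumes the server exposes the usual OpenAI-compatible /v1/models route; adjust the path if MARL differs):

```python
import urllib.error
import urllib.request

def is_marl_up(base_url: str = "http://localhost:8080/v1", timeout: float = 2.0) -> bool:
    """Return True if anything answers HTTP at the middleware's baseURL."""
    try:
        urllib.request.urlopen(f"{base_url}/models", timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # server answered, even if with an error status
    except (urllib.error.URLError, OSError):
        return False  # connection refused, timeout, or no such host
```

Run this after `docker run` or `python -m marl serve` and only switch your config over once it returns True.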

Connect to OpenClaw

Set your config.json:

{
  "llm": {
    "baseURL": "http://localhost:8080/v1",
    "model": "gpt-5.4"
  }
}

That's it. Every LLM call now passes through MARL's multi-stage reasoning pipeline.
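Under the hood, OpenClaw POSTs OpenAI-format chat completions to that baseURL, so you can smoke-test MARL directly with the same request shape. A sketch (the payload is the standard /v1/chat/completions body; actually sending it requires a running MARL instance):

```python
import json
import urllib.request

def chat_payload(model: str, user_message: str) -> dict:
    """Build a standard OpenAI-format chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

def ask_marl(base_url: str, model: str, message: str) -> dict:
    """POST the payload to a running MARL instance and return its JSON reply."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(chat_payload(model, message)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

For example, `ask_marl("http://localhost:8080/v1", "gpt-5.4", "test")` should exercise the full pipeline once the server is up.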

9 Emergence Modes

Switch modes by appending ::mode to any model name:

model value            Engine          What it does
gpt-5.4                🔬 Insight      Default — fact-check, strategy, deep analysis
gpt-5.4::invent        🔧 Invent       Patent-level invention via TRIZ + bio-inspired + contradiction resolution
gpt-5.4::create        ✨ Create       Cliché inversion, paradox, genre fusion, sensory collision
gpt-5.4::recipe        🍳 Recipe       Culinary emergence with taste chemistry validation
gpt-5.4::pharma        💊 Pharma       Drug repositioning, mechanism crossing, multi-target design
gpt-5.4::genomics      🧬 Genomics     Pathway crosstalk, synthetic lethality, phenotype bridging
gpt-5.4::chemistry     🧪 Chemistry    Contradictory properties, biomimicry, waste-to-value
gpt-5.4::ecology       🌍 Ecology      Conservation transfer, threat inversion, service stacking
gpt-5.4::law           ⚖️ Law          Cross-jurisdiction transplant, tech-law collision resolution
gpt-5.4::document      📄 Document     Metacognitive report and document generation

Replace gpt-5.4 with any model — Claude, Gemini, DeepSeek, Llama, etc.
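Since the `::mode` suffix is just a naming convention on the model string, switching engines programmatically is a one-liner. A small hypothetical helper (convenience code for your own config tooling, not part of MARL itself; the 'insight' default follows the table above):

```python
def split_mode(model_value: str) -> tuple:
    """Split 'gpt-5.4::pharma' into ('gpt-5.4', 'pharma').
    A bare model name maps to the default 'insight' engine."""
    base, sep, mode = model_value.partition("::")
    return base, (mode if sep else "insight")

def with_mode(model: str, mode: str = "") -> str:
    """Attach an emergence mode suffix to a bare model name."""
    return f"{model}::{mode}" if mode else model
```

For example, `with_mode("claude-sonnet", "create")` yields the value used in the creative-ideation config below.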

Example: Switch to Pharma mode

{
  "llm": {
    "baseURL": "http://localhost:8080/v1",
    "model": "gpt-5.4::pharma"
  }
}

Then chat: "Find drug repositioning candidates for Alzheimer's using immune checkpoint mechanisms"

Example: Creative ideation

{
  "llm": {
    "model": "claude-sonnet::create"
  }
}

Then chat: "Generate 10 movie loglines that have never existed before"

How It Works

┌─ OpenClaw ────────────────────────────────────┐
│  "Analyze this complex question"               │
└──────────────┬─────────────────────────────────┘
               │ HTTP (OpenAI API format)
               ▼
┌─ MARL Middleware ─────────────────────────────┐
│  Multi-stage Multi-agent Reasoning Pipeline    │
│  9 Emergence Engines · 70%+ Hallucination ↓   │
└──────────────┬─────────────────────────────────┘
               │ API calls to your chosen LLM
               ▼
┌─ Any LLM ─────────────────────────────────────┐
│  GPT-5.4 · Claude · Gemini · DeepSeek · Llama │
└────────────────────────────────────────────────┘

MARL works with every LLM that supports the OpenAI API format. The middleware itself runs locally on your machine; note, however, that it forwards model queries to whichever LLM backend you configure, so your data stays on your infrastructure only if that backend is also local (e.g. Ollama).

Works With Any LLM

  • OpenAI (GPT-5.4, GPT-5.2, GPT-4.1, o4-mini)
  • Anthropic (Claude Opus 4.6, Sonnet 4.6)
  • Google (Gemini 3.1 Pro, Gemini 3 Flash)
  • DeepSeek (V3, R1, R2)
  • xAI (Grok-4, Grok-3)
  • Groq (gpt-oss-120b, Llama 4 — free)
  • Ollama (any local model)
  • Any OpenAI-compatible endpoint


About

Built by VIDRAFT (Seoul AI Hub). MARL's core engine is delivered as compiled binaries to protect proprietary technology. Interface code is open for integration.

Apache 2.0 · Contact: arxivgpt@gmail.com

Files

1 total
