Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Multi Agent Pipeline

v1.0.1

Generic multi-agent content pipeline — sequential and parallel agent stages with status tracking, error recovery, and progress callbacks. Use when building m...

0 stars · 333 downloads · 2 current · 2 all-time
by Nissan Dookeran (@nissan)
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious (medium confidence)
Purpose & Capability
The SKILL.md describes a generic pipeline framework, but the shipped Python implements concrete HTTP endpoints that call external services (ElevenLabs and Mistral) and persists data to a DB. The metadata claims no required env vars and network outbound is false, yet the code requires ELEVENLABS_API_KEY and MISTRAL_API_KEY and performs external requests. That mismatch is disproportionate to the stated 'framework-only' purpose.
Instruction Scope
SKILL.md examples are framework-level, but the code contains full FastAPI routes that read UploadFile, call external STT/TTS/LLM endpoints, and read/write a database and prompt_cache. The instructions/metadata do not call out these concrete behaviors (file uploads, DB writes, external endpoints), giving the agent broader runtime scope than documented.
Install Mechanism
No install spec is provided (instruction-only), but the bundle includes code that depends on third-party packages (httpx, fastapi, pydantic, mistralai, prompt_cache, database). Lack of an install spec means dependencies and their provenance are unspecified, increasing operational risk though not necessarily malicious.
Credentials
Declared requirements list no env vars, yet the code reads ELEVENLABS_API_KEY and MISTRAL_API_KEY from environment and raises errors when missing. These unnamed secrets are required for live behavior and are central to the skill's network activity — they should be declared and justified in metadata.
Persistence & Privilege
The skill does not request always:true and is user-invocable. It persists intermediate results to a database and caches audio/images; that is consistent with a pipeline but means installing this skill will give it access to any DB backend the agent exposes. Autonomous invocation plus undeclared secret use increases blast radius — verify deployment isolation and secret scope before enabling.
What to consider before installing
This package contains runnable API routes that call external LLM/TTS/STT services and write to a database, but its metadata omits the required API keys and says outbound networking is disabled. Before installing: (1) do not provide ELEVENLABS_API_KEY or MISTRAL_API_KEY to unknown code without review — confirm why those keys are needed; (2) ask the publisher for an explicit manifest listing required env vars and Python dependencies; (3) inspect the database and prompt_cache modules referenced (they could read/write broader data); (4) run in an isolated environment or container and limit keys to least privilege / scoped test keys; (5) consider requiring the skill author to remove or clearly document concrete external calls if you only wanted a framework. The mismatch between declared metadata and actual code is the primary reason to treat this as suspicious.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🔗 Clawdis
latest: vk97b7t7jffkdn42frscn2cxmv582yfm0
333 downloads
0 stars
2 versions
Updated 12h ago
v1.0.1
MIT-0

Multi-Agent Pipeline

A reusable pattern for orchestrating multi-step AI workflows where each stage is handled by a specialist agent. Extracted from a production system that processed 18 stories across 10 languages.

Pipeline Pattern

Input → [Stage 1: Generate] → [Stage 2: Validate] → [Stage 3: Transform] → [Stage 4: Deliver]
              │                      │                       │                      │
         Story Writer           Guardrails              Narrator              Storage
         (sequential)           (parallel ok)           (parallel ok)         (sequential)

Core Concepts

Stages: Named processing steps, each with an agent function, input/output schema, and error handler.

Sequential vs Parallel: Some stages must run in order (generate before validate). Others can run in parallel (narrate + generate SFX simultaneously).

Progress Callbacks: Each stage reports status for UI updates. The pipeline visualization shows 9 agent nodes lighting up sequentially.

Error Recovery: Failed stages can retry with backoff, skip with defaults, or halt the pipeline.

Caching: Integrate with prompt-cache skill to skip stages that have already produced identical output.
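
The error-recovery options above (retry with backoff, skip with defaults, halt) can be sketched as a wrapper around a stage function. This is a hypothetical helper, not part of the shipped pipeline.py:

```python
import asyncio
import random


async def with_retry(stage_fn, input_data, *, retries=3, base_delay=0.5, default=None):
    """Run a stage, retrying on failure with exponential backoff plus jitter.

    After exhausting retries, return `default` if one was given
    (skip-with-defaults), otherwise re-raise to halt the pipeline.
    """
    for attempt in range(retries):
        try:
            return await stage_fn(input_data)
        except Exception:
            if attempt == retries - 1:
                if default is not None:
                    return default
                raise  # halt: propagate the failure to the caller
            # back off 0.5s, 1s, 2s, ... with a little jitter
            await asyncio.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

A stage that fails twice and then succeeds would complete on the third attempt; one that always fails either yields the default or halts the run.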

Quick Start

from pipeline import Pipeline, Stage

async def generate_story(input_data):
    # Call your LLM here
    return {"story": "Once upon a time..."}

async def validate_content(input_data):
    # Check guardrails
    return {"valid": True, "story": input_data["story"]}

async def narrate(input_data):
    # Call TTS API
    return {"audio": b"..."}

pipeline = Pipeline(stages=[
    Stage("generate", generate_story, parallel=False),
    Stage("validate", validate_content, parallel=False),
    Stage("narrate", narrate, parallel=True),
])

result = await pipeline.run({"prompt": "A bedtime story about clouds"})  # inside an async context, e.g. via asyncio.run(...)
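
A minimal sketch of what `Pipeline` and `Stage` could look like under the hood — consecutive parallel stages are gathered concurrently, sequential stages run in order, and each stage's output dict is merged into the running state. The shipped scripts/pipeline.py may differ; this is illustrative only:

```python
import asyncio
from dataclasses import dataclass
from typing import Awaitable, Callable, Optional

StageFn = Callable[[dict], Awaitable[dict]]


@dataclass
class Stage:
    name: str
    fn: StageFn
    parallel: bool = False  # consecutive parallel stages run concurrently


class Pipeline:
    def __init__(self, stages: list[Stage],
                 on_status: Optional[Callable[[str, str], None]] = None):
        self.stages = stages
        self.on_status = on_status or (lambda stage, status: None)

    async def _run_stage(self, stage: Stage, data: dict) -> dict:
        self.on_status(stage.name, "started")
        out = await stage.fn(data)
        self.on_status(stage.name, "completed")
        return out

    async def run(self, input_data: dict) -> dict:
        data = dict(input_data)
        i = 0
        while i < len(self.stages):
            if self.stages[i].parallel:
                # Gather a run of consecutive parallel stages.
                group = []
                while i < len(self.stages) and self.stages[i].parallel:
                    group.append(self.stages[i])
                    i += 1
                results = await asyncio.gather(
                    *(self._run_stage(s, data) for s in group))
                for r in results:
                    data.update(r)
            else:
                data.update(await self._run_stage(self.stages[i], data))
                i += 1
        return data
```

Merging outputs into a shared dict keeps the stage signature uniform (`dict -> dict`) at the cost of requiring distinct output keys across parallel stages.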

Status Tracking

The pipeline emits status updates suitable for real-time UI:

pipeline = Pipeline(
    stages=[...],
    on_status=lambda stage, status: print(f"{stage}: {status}")
)
# Output:
# generate: started
# generate: completed (2.3s)
# validate: started
# validate: completed (0.1s)
# narrate: started
# narrate: completed (4.7s)

Lessons from Production

  • Pre-cache demo content — never rely on live API calls during presentations
  • Parallel stages save wall-clock time but increase API concurrency — respect rate limits
  • Status callbacks should be non-blocking — don't let UI updates slow the pipeline
  • Error in stage N should not lose stages 1..N-1 output — persist intermediate results
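
One way to keep status callbacks non-blocking, per the third lesson above, is to enqueue updates and drain them in a background task so a slow UI callback never stalls a stage. A sketch under that assumption (the class name is hypothetical):

```python
import asyncio


class AsyncStatusReporter:
    """Decouple the pipeline from slow status callbacks: stages enqueue
    updates instantly; a background task delivers them."""

    def __init__(self, callback):
        self.callback = callback
        self.queue: asyncio.Queue = asyncio.Queue()
        self._task = None

    def start(self):
        self._task = asyncio.create_task(self._drain())

    async def _drain(self):
        while True:
            stage, status = await self.queue.get()
            self.callback(stage, status)  # slow work happens off the hot path
            self.queue.task_done()

    def report(self, stage, status):
        self.queue.put_nowait((stage, status))  # never blocks the pipeline

    async def close(self):
        await self.queue.join()  # flush pending updates before shutdown
        self._task.cancel()
        try:
            await self._task
        except asyncio.CancelledError:
            pass
```

The pipeline calls `report(...)` where it would otherwise invoke `on_status` directly; ordering is preserved because updates are delivered from a single queue.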

Files

  • scripts/pipeline.py — Generic pipeline implementation with stages, parallelism, and callbacks

Security Notes

This skill uses patterns that may trigger automated security scanners:

  • base64: Used for encoding audio/binary data in API responses (standard practice for media APIs)
  • UploadFile: FastAPI's built-in file upload parameter for STT/voice isolation endpoints
  • "system prompt": Refers to configuring agent instructions, not prompt injection
