Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Data Pipeline Builder

v1.0.0

Build, schedule, and monitor ETL pipelines to extract, transform, and load data across databases, APIs, and file systems with error handling and validation.

Security Scan
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The description promises connectors to databases, APIs, S3/GCS, scheduling, CDC, and monitoring — capabilities that normally require explicit credentials, endpoint configuration, and connector code. The skill declares none of these (no required environment variables, no config paths, no connector implementations), leaving its claimed capabilities unexplained.
Instruction Scope
SKILL.md is high-level and only lists example commands (e.g., 'create pipeline from mysql to postgres', 'run etl-job') and runtime dependencies (Node.js, Python). It does not describe how credentials, endpoints, schema, or sensitive data will be provided or protected, nor does it instruct the agent to read or write any particular system paths. The vagueness gives the agent broad discretion at runtime, which is a risk if concrete safeguards are not provided.
Install Mechanism
There is no install spec and no code files; this instruction-only skill does not write files or download artifacts, which reduces supply-chain risk.
Credentials
The skill's functionality would normally require multiple credentials (database usernames/passwords, cloud storage keys, API tokens), yet it requires none and lists no primary credential. This mismatch is unexplained: either the skill is only a human-facing guide, or it omits critical information about how secrets are handled.
Persistence & Privilege
The skill is not marked always:true and is user-invocable only; it does not request persistent system presence or make configuration changes. Autonomous invocation is allowed (platform default) but not by itself a problem.
What to consider before installing
This skill is a high-level template rather than a concrete connector implementation. Before installing or using it:

  1. Ask the publisher for source code or a homepage so you can inspect how connectors handle credentials and network access.
  2. Confirm how you are expected to provide credentials (per-run prompts, secure vault, environment variables) and ensure least-privilege credentials are used.
  3. If you plan to let the agent run pipelines, test in an isolated environment with limited permissions and audit logs enabled.
  4. Prefer skills that declare required env vars/config paths and document their data handling and telemetry; avoid supplying broad credentials until you can review the implementation.

If you cannot obtain more detail, treat this skill as an under-specified template and do not grant it access to production secrets or systems.
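If credentials are to be supplied via environment variables, a fail-fast check is a reasonable safeguard. A minimal Python sketch; the variable names are hypothetical, since this skill declares no env vars of its own:

```python
import os

# Hypothetical helper: read only the narrowly-scoped credentials a single
# pipeline run needs, and refuse to run if any are missing. The names below
# are illustrative, not declared by this skill.
REQUIRED_VARS = ["PIPELINE_DB_READONLY_USER", "PIPELINE_DB_READONLY_PASSWORD"]

def load_pipeline_credentials(required=REQUIRED_VARS):
    missing = [name for name in required if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Refusing to run: missing credentials {missing}")
    return {name: os.environ[name] for name in required}
```

Failing before any connection is attempted keeps a misconfigured run from silently falling back to broader ambient credentials.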

Like a lobster shell, security has layers — review code before you run it.

latest: vk972419tdp90h4x19zb9d5qb4n85d2we
10 downloads
0 stars
1 version
Updated 3h ago
v1.0.0
MIT-0

data-pipeline-builder

Build and manage ETL/data pipelines for AI agents. Extract, transform, load data between databases, APIs, and file systems.

Overview

A skill for creating and managing data pipelines that move and transform data across different sources and destinations.

Features

  • Source Connectors: Connect to databases, APIs, files, S3, GCS
  • Data Transformation: Apply filters, mappings, aggregations
  • Scheduling: Cron-based or event-triggered pipelines
  • Error Handling: Retry logic, dead letter queues
  • Data Validation: Schema validation, data quality checks
  • Monitoring: Pipeline status, throughput, error rates
  • Incremental Sync: Support for CDC (Change Data Capture)
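The "Retry logic, dead letter queues" feature above can be sketched in a few lines. This is purely illustrative (the skill ships no code); the function and parameter names are assumptions:

```python
import time

def run_with_retry(task, record, max_attempts=3, dead_letter=None, delay=0.1):
    """Hypothetical retry wrapper: re-run a failing record a few times,
    then divert it to a dead-letter list instead of crashing the pipeline."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task(record)
        except Exception:
            if attempt == max_attempts:
                # Give up on this record, but keep the pipeline alive.
                if dead_letter is not None:
                    dead_letter.append(record)
                return None
            time.sleep(delay)  # back off before the next attempt
```

A dead-letter list (or queue) preserves failed records for later inspection rather than dropping them or halting the whole batch.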

Commands

Create Pipeline

create pipeline from mysql to postgres

Run Pipeline

run etl-job daily-sales-report

Monitor Pipeline

check pipeline health
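The commands above map onto the classic extract, transform, and load stages. A toy in-memory Python sketch of those stages (illustrative only; the skill provides no connector code, and the field names are made up):

```python
# Toy ETL stages: extract rows from a source, transform them,
# load them into a destination. Real connectors would replace the
# in-memory lists with database/API/file I/O.
def extract(source):
    return list(source)

def transform(rows):
    # Illustrative transformation: drop zero-quantity rows, normalize SKUs.
    return [{"sku": r["sku"].upper(), "qty": r["qty"]} for r in rows if r["qty"] > 0]

def load(rows, destination):
    destination.extend(rows)
    return len(rows)  # number of rows loaded

def run_pipeline(source, destination):
    return load(transform(extract(source)), destination)
```

Keeping the three stages as separate functions is what makes per-stage validation, retries, and monitoring possible.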

Use Cases

  • Database synchronization
  • API data extraction
  • File processing and conversion
  • Data warehousing
  • Report generation
  • Analytics data preparation

Requirements

  • Node.js 18+
  • Python 3.8+ (for data processing)
  • Source/destination connectors as needed
