Skylv Data Pipeline Builder

Verdict: Warn

Audited by ClawScan on May 10, 2026.

Overview

The skill fits its data-pipeline purpose, but it describes broad access to databases, APIs, cloud storage, credentials, automatic connector installs, and scheduled runs without enough scoping or setup detail.

Review this skill carefully before installing. It may be appropriate for controlled ETL work, but use only narrow, revocable credentials; avoid production or admin access at first; require validation before writes; and do not allow automatic connector installation without seeing exactly what will be installed.

Findings (4)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static-scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Data movement and scheduling without safeguards

What this means

A vague or mistaken instruction could copy, transform, or load data into the wrong database, warehouse, file, or API, and scheduled jobs could repeat the impact.

Why it was flagged

The skill exposes commands that can create, execute, and schedule data movement into mutable destinations. The artifacts do not specify confirmation, rollback, destination safeguards, or default dry-run behavior for high-impact operations.

Skill content

`create pipeline from <src> to <dst>` ... `schedule <pipeline> <when>` ... `run pipeline <name>` ... `Load — To databases, data warehouses, files, APIs`

Recommendation

Use only with explicit source and destination names, least-privilege accounts, test datasets, and a required validation or dry-run step before running or scheduling a pipeline.
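The recommendation above can be sketched as a pre-run gate. This is a minimal, hypothetical Python sketch, not Skylv's implementation: the names `ALLOWED_DESTINATIONS` and `run_gate`, and the destination names, are illustrative assumptions.

```python
# Hypothetical pre-run gate for a pipeline tool: refuse to execute unless the
# destination is on an explicit allowlist and a dry run has been acknowledged.
# All names here are illustrative, not part of Skylv.

ALLOWED_DESTINATIONS = {"staging-warehouse", "test-bucket"}

def run_gate(source: str, destination: str, dry_run_done: bool) -> str:
    """Return 'run' only when the safeguards are satisfied, else say why not."""
    if destination not in ALLOWED_DESTINATIONS:
        return f"blocked: destination {destination!r} is not allowlisted"
    if not dry_run_done:
        return "blocked: run a dry run and review its output first"
    return "run"
```

A wrapper like this makes "validation before writes" the default path rather than an optional step the user must remember.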

Finding 2: Broad, unscoped credentials

What this means

Credentials for databases, APIs, cloud storage, or warehouses could grant broad read/write access to business data if they are over-scoped or handled carelessly.

Why it was flagged

The skill expects credentials or tokens for many external systems, but the artifacts do not define which credentials are required, how they are scoped, how they are stored, or how they are prevented from being reused outside the intended pipeline.

Skill content

`set source auth: bearer token xxx` ... `Databases: MySQL, PostgreSQL, MongoDB, Redis, SQLite` ... `Cloud Storage: S3, GCS, Azure Blob`

Recommendation

Provide only narrowly scoped, revocable credentials for a specific source and destination, and avoid using production or admin credentials unless the implementation is reviewed.
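One way to enforce this recommendation is a least-privilege check before any credential reaches the pipeline. The schema below (`role`, `source`, `destination` keys and the `FORBIDDEN_ROLES` set) is an assumption for illustration, not Skylv's credential format.

```python
# Illustrative least-privilege check (not Skylv's API): a credential passes
# only if it is bound to one named source, one named destination, and a
# non-admin role.

FORBIDDEN_ROLES = {"admin", "root", "owner", "superuser"}

def credential_ok(cred: dict) -> bool:
    """Accept only narrowly scoped credentials for a single src/dst pair."""
    if cred.get("role", "").lower() in FORBIDDEN_ROLES:
        return False
    # Both endpoints must be explicitly named; no wildcard or missing scope.
    return bool(cred.get("source")) and bool(cred.get("destination"))
```

Rejecting admin-style roles up front keeps a vague instruction from silently reusing a production credential across pipelines.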

Finding 3: Auto-installed, unpinned connectors

What this means

Installing unreviewed or unpinned connectors could introduce unexpected code or dependencies into the user's environment.

Why it was flagged

The skill claims connectors will be auto-installed, but the provided artifacts contain no install spec, code, package list, lockfile, source, or version pinning for those connectors.

Skill content

`Source/destination connectors (auto-installed)`

Recommendation

Do not allow automatic connector installation unless the connector source, versions, permissions, and installation commands are shown and approved.
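A simple version of the approval step above is to reject any connector requirement that is not pinned to an exact version. This is a hedged sketch: the `unpinned` helper and the sample package names are hypothetical, and real requirement syntax allows more forms than this regex covers.

```python
import re

# Illustrative gate for "auto-installed" connectors: flag any requirement
# that is not pinned with an exact `==` version. Package names are examples.

PIN = re.compile(r"^[A-Za-z0-9._-]+==[A-Za-z0-9.]+$")

def unpinned(requirements: list[str]) -> list[str]:
    """Return the requirements that lack an exact version pin."""
    return [r for r in requirements if not PIN.match(r.strip())]
```

Running a check like this against the connector list, before any install command executes, turns "see exactly what will be installed" into an enforceable rule.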

Finding 4: Persistent scheduled execution

What this means

A scheduled pipeline may continue reading from sources and writing to destinations until it is paused or removed.

Why it was flagged

The skill discloses persistent scheduled execution. This is normal for data pipelines, but scheduled jobs can keep operating long after the initial user interaction.

Skill content

`Schedule — Cron-based or event-triggered execution` ... `schedule orders-sync hourly`

Recommendation

Review scheduled jobs regularly, set clear owners and expiration dates, and confirm that `pause pipeline` or equivalent cleanup actually stops future runs.
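The owner-and-expiration review above can be automated. The job schema below (`name`, `owner`, `expires` keys) is an assumption for illustration; Skylv's actual schedule metadata is not described in the artifacts.

```python
from datetime import date

# Illustrative audit over scheduled pipelines (schema is assumed, not
# Skylv's): flag any job with no owner or with an expiration date in
# the past, so it can be paused or re-approved.

def stale_schedules(jobs: list[dict], today: date) -> list[str]:
    """Return names of scheduled jobs that should be paused or reviewed."""
    flagged = []
    for job in jobs:
        expired = job.get("expires") is not None and job["expires"] < today
        if not job.get("owner") or expired:
            flagged.append(job["name"])
    return flagged
```

Run periodically, a check like this catches jobs that would otherwise keep reading and writing indefinitely.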