Data Pipelines

v1.0.0

Deep data pipeline workflow—ingestion, orchestration, idempotency, data quality, SLAs, observability, and lineage. Use when building batch/stream pipelines,...

Security Scan

  • VirusTotal: Benign
  • OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (deep data pipeline workflow) match the content of SKILL.md: stage-by-stage guidance for ingestion, orchestration, idempotency, quality, SLAs, and lineage. There are no unrelated requirements (no env vars, binaries, or config paths).
Instruction Scope
The SKILL.md contains only design and operational guidance for pipelines (six-stage workflow, checklist, tips). It does not instruct the agent to read local files, access credentials, call external endpoints, or perform system operations beyond giving advice.
Install Mechanism
No install spec and no code files — instruction-only. This minimizes write/execute risk; nothing will be downloaded or installed by the skill itself.
Credentials
The skill declares no required environment variables, credentials, or config paths. Its guidance is conceptual and does not ask for secrets or unrelated credentials.
Persistence & Privilege
Skill is user-invocable and not always-enabled; it does not request elevated persistence or modify other skills. Autonomous invocation is allowed by platform default but is not combined with other concerning privileges here.
Assessment
This skill is high-level documentation for designing and operating data pipelines and appears internally consistent. Because it is instruction-only and requests no credentials, it carries low direct risk. Before using it in an agent that can act autonomously, consider: (1) do not provision cloud or database credentials to the agent unless you want it to run pipeline actions; (2) if you combine this with other skills (ETL connectors, cloud deployers), review those skills for credential requests and install behaviors; and (3) treat the guidance as advisory: it will not execute code itself, so verify any automated playbooks you create from it before running them against production. If you want higher confidence, ask the publisher for a homepage or source repo to confirm provenance.


latest: vk97e63fvraa6es077cg9c3d8n983qng3
104 downloads · 0 stars · 1 version
Updated 3w ago
v1.0.0 · MIT-0

Data Pipelines

Pipelines fail on silent schema drift, partial writes, and unclear ownership. Design for at-least-once delivery, idempotent sinks, and observable stages.

When to Offer This Workflow

Trigger conditions:

  • Batch or streaming ingestion (Kafka, Fivetran, Airflow, Dagster, Spark, etc.)
  • Late data, backfills, or schema changes breaking jobs
  • SLA misses on freshness or row counts

Initial offer:

Use six stages: (1) requirements & SLAs, (2) source contracts, (3) transforms & idempotency, (4) orchestration & dependencies, (5) quality & monitoring, (6) lineage & operations. Confirm batch vs stream and cloud stack.


Stage 1: Requirements & SLAs

Goal: Pin down freshness (latency), completeness expectations, a cost ceiling, and failure tolerance (quarantine vs stop-the-line).

Exit condition: SLA table: pipeline → metric → threshold.
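The exit condition above can be sketched as a small in-code SLA table plus a breach check; the pipeline names, metrics, and thresholds here are illustrative assumptions, not recommendations:

```python
# Hypothetical SLA table: pipeline -> metric -> threshold.
SLAS = {
    "orders_daily":  {"freshness_minutes": 60, "min_row_count": 10_000},
    "clicks_stream": {"freshness_minutes": 5,  "min_row_count": 1},
}

def sla_breaches(pipeline: str, observed: dict) -> list[str]:
    """Return the list of SLA metrics the observed run violates."""
    sla = SLAS[pipeline]
    breaches = []
    if observed["freshness_minutes"] > sla["freshness_minutes"]:
        breaches.append("freshness_minutes")
    if observed["row_count"] < sla["min_row_count"]:
        breaches.append("min_row_count")
    return breaches
```

Keeping the table in code (or config) makes the SLA explicit and lets monitoring evaluate every run against it.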


Stage 2: Source Contracts

Goal: Schema versioning; CDC vs snapshot pulls; API rate limits.

Practices

  • Raw landing zone immutable; curated layers downstream
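A source contract can be enforced with a minimal schema check that quarantines bad records instead of stopping the line; the field names and types below are illustrative:

```python
# Hypothetical source contract: required fields and their types.
EXPECTED = {"order_id": int, "amount": float, "ts": str}

def validate(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split records into (accepted, quarantined) against the contract."""
    ok, quarantined = [], []
    for rec in records:
        if set(rec) >= set(EXPECTED) and all(
            isinstance(rec[f], t) for f, t in EXPECTED.items()
        ):
            ok.append(rec)
        else:
            quarantined.append(rec)
    return ok, quarantined
```

Quarantined records stay replayable from the immutable raw zone once the contract or the producer is fixed.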

Stage 3: Transforms & Idempotency

Goal: Deterministic transforms; upsert keys; partition strategy for rewinds.

Practices

  • Watermark progress for incremental loads
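Idempotent writes plus a watermark can be sketched with an in-memory sink standing in for a real table; replaying a batch after a retry or rewind leaves the sink unchanged. Field names are illustrative:

```python
def apply_batch(sink: dict, batch: list[dict], watermark: str) -> str:
    """Upsert rows newer than the watermark; return the new watermark.

    Upserting by key makes the write idempotent: last write per key wins,
    so replaying the same batch produces the same sink state.
    """
    for row in batch:
        if row["ts"] <= watermark:
            continue              # already processed in a prior run
        sink[row["id"]] = row     # upsert, never append
    new_ts = max((r["ts"] for r in batch), default=watermark)
    return max(watermark, new_ts)
```

Persist the returned watermark atomically with the write (or checkpoint) so incremental loads resume from the right point.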

Stage 4: Orchestration & Dependencies

Goal: Clear DAG; retry policy; backfill without double counting; SLA miss alerts.
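Backfill without double counting usually means partition-scoped overwrites: each run replaces its whole partition rather than appending, so re-running a day is safe. The in-memory "table" below stands in for a real warehouse and the names are illustrative:

```python
def overwrite_partition(table: dict, partition: str, rows: list[dict]) -> None:
    """Replace the partition's contents atomically; never append."""
    table[partition] = list(rows)

def total_rows(table: dict) -> int:
    return sum(len(rows) for rows in table.values())
```

With this write pattern, a retry policy or a manual backfill of any date range cannot inflate counts, because each partition is rebuilt from scratch.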


Stage 5: Quality & Monitoring

Goal: Data quality checks (null spikes, row bounds, referential checks); metrics on lag, duration, error rate.
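The null-spike and row-bound checks can be sketched as a small report function; the thresholds are illustrative assumptions, not recommended defaults:

```python
def null_rate(rows: list[dict], field: str) -> float:
    """Fraction of rows where `field` is missing or null."""
    return sum(1 for r in rows if r.get(field) is None) / max(len(rows), 1)

def quality_report(rows, field, max_null_rate=0.01, row_bounds=(1, 1_000_000)):
    """Flag null spikes and out-of-bounds row counts for one batch."""
    lo, hi = row_bounds
    return {
        "null_spike": null_rate(rows, field) > max_null_rate,
        "row_count_out_of_bounds": not (lo <= len(rows) <= hi),
    }
```

Wire the report into the same alerting path as SLA misses so quality failures page the pipeline's owner, not a generic channel.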


Stage 6: Lineage & Operations

Goal: Column-level lineage where valuable; on-call runbook; ownership per pipeline.
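Column-level lineage can be represented as a mapping from each derived column to its upstream columns, resolved transitively; the table and column names below are hypothetical:

```python
# Hypothetical lineage map: derived column -> columns it was computed from.
LINEAGE = {
    "reports.revenue":   ["orders.amount", "orders.currency"],
    "reports.order_day": ["orders.ts"],
}

def upstream_of(column: str, lineage: dict = LINEAGE) -> set[str]:
    """Transitively resolve every source column feeding `column`."""
    seen, stack = set(), [column]
    while stack:
        for parent in lineage.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen
```

Even this flat map answers the on-call question "what breaks if this column changes?" without a full lineage product.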


Final Review Checklist

  • SLAs and failure policy explicit
  • Source contracts and schema evolution path
  • Idempotent writes and checkpointing
  • Orchestration with retries and safe backfill
  • Data quality checks and alerts
  • Lineage and ownership documented

Tips for Effective Guidance

  • Track compute and storage costs separately for large shuffles, since they scale differently.
  • Pair with etl-design for batch patterns and message-queues for streaming handoffs.

Handling Deviations

  • Single-script pipelines: still document inputs, outputs, and schedule.
