Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

senior-data-engineer

v0.1.0

World-class data engineering skill for building scalable data pipelines, ETL/ELT systems, real-time streaming, and data infrastructure. Expertise in Python,...

Security Scan
Capability signals
Crypto: Can make purchases
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name, description, SKILL.md, and the included scripts (pipeline_orchestrator, kafka_config_generator, data_quality_validator, stream_processor, etc.) are coherent with a senior data engineering skill. However, the package declares no required env vars or install steps, while the code and docs demonstrate heavy external integrations (Airflow, Snowflake, S3, Kafka, dbt, Great Expectations) that in practice require credentials and third-party libraries. The registry version (0.1.0) also differs from the SKILL.md version (v2.0.0) and HOW_TO_USE.md (1.0.0), which is inconsistent.
Instruction Scope
SKILL.md and the references/templates/docs instruct running scripts that connect to databases, S3, Snowflake, Kafka, Airflow variables, and other systems. The instructions assume availability of credentials and runtime systems but the skill metadata declares no required env vars or config paths. Several example snippets embed credential placeholders (e.g., AWS_KEY_ID/AWS_SECRET_KEY in SQL) and reference Airflow Variable access (context['var']['value']) and Airflow provider operators. The runtime instructions therefore require access to secrets and infrastructure not declared by the skill and could lead users to paste credentials into templates. Additionally, some scripts and examples appear truncated or use Airflow-specific context without clarifying runtime environment requirements.
Install Mechanism
This is an instruction-only skill (no install spec), yet the docs and scripts list many heavyweight Python dependencies (apache-airflow, pandas, sqlalchemy, boto3, google-cloud-bigquery, snowflake-connector-python, great-expectations, scikit-learn, etc.) with no packaged install instructions in the manifest. That mismatch means a user or agent may try to run the scripts without the dependencies installed, or attempt ad-hoc installs. The absence of a declared, reproducible install mechanism for these dependencies is a red flag for usability and safety.
Credentials
The registry lists no required environment variables or credentials, yet the code and examples clearly require database connection strings, cloud credentials, and service tokens (Postgres/Snowflake/S3/Kafka). The skill does not declare a primary credential or required env entries, which is disproportionate given the heavy external access the code expects. This gap means the agent or user will need to supply secrets at runtime; the skill also contains examples that encourage embedding credentials in code/templates (e.g., the Snowflake CREATE STAGE example).
Persistence & Privilege
The skill does not request always:true, does not modify other skills, and has no install steps that alter agent-wide configuration. It is user-invocable and allows autonomous invocation (the platform default), which is expected. No persistent privileges beyond normal execution of included scripts are requested.
What to consider before installing
This package generally matches its stated purpose (data-engineering tooling), but there are several practical and security concerns to address before installing or running it:

  • Do not run the scripts in production or with real credentials until you have reviewed them fully. The examples use DB and cloud credentials (Postgres, Snowflake, AWS, Kafka), but the skill metadata does not declare or protect those secrets.
  • The repository lacks an install specification for the many third-party Python packages the scripts require. Recreate a controlled virtual environment, pin versions, and audit packages before installing.
  • Several scripts and CLI entrypoints appear inconsistent or buggy (constructors and method names in the classes do not match how main() invokes them). Expect runtime errors; review and test in a sandbox.
  • Version and verification inconsistencies (registry version 0.1.0 vs SKILL.md v2.0.0 vs HOW_TO_USE 1.0.0, source unknown but 'verified: true') are suspicious. Ask the publisher for provenance, or prefer a source-controlled release from a trusted repo.
  • Review any truncated or omitted files (the large stream_processor/streaming_quality_validator files were truncated in the manifest) for network calls or hidden endpoints before use.

Recommended actions: run the code in an isolated environment, avoid supplying real credentials (use mock/test accounts), add explicit dependency installation (requirements.txt), and ask the publisher about verification and the intended runtime environment. If you need help auditing the remaining truncated files, provide their full contents for review.


Latest: vk977b0vr7ceb0kvxsvkbnmqsb184wpqk
42 downloads · 0 stars · 1 version
Updated 4d ago · v0.1.0 · MIT-0

Senior Data Engineer

Core Capabilities

  • Batch Pipeline Orchestration - Design and implement production-ready ETL/ELT pipelines with Airflow, intelligent dependency resolution, retry logic, and comprehensive monitoring
  • Real-Time Streaming - Build event-driven streaming pipelines with Kafka, Flink, Kinesis, and Spark Streaming with exactly-once semantics and sub-second latency
  • Data Quality Management - Comprehensive batch and streaming data quality validation covering completeness, accuracy, consistency, timeliness, and validity
  • Streaming Quality Monitoring - Track consumer lag, data freshness, schema drift, throughput, and dead letter queue rates for streaming pipelines
  • Performance Optimization - Analyze and optimize pipeline performance with query optimization, Spark tuning, and cost analysis recommendations

Key Workflows

Workflow 1: Build ETL Pipeline

Time: 2-4 hours

Steps:

  1. Design pipeline architecture using Lambda, Kappa, or Medallion pattern
  2. Configure YAML pipeline definition with sources, transformations, targets
  3. Generate Airflow DAG with pipeline_orchestrator.py
  4. Define data quality validation rules
  5. Deploy and configure monitoring/alerting

Expected Output: Production-ready ETL pipeline with 99%+ success rate, automated quality checks, and comprehensive monitoring

Workflow 2: Build Real-Time Streaming Pipeline

Time: 3-5 days

Steps:

  1. Select streaming architecture (Kappa vs Lambda) based on requirements
  2. Configure streaming pipeline YAML (sources, processing, sinks, quality)
  3. Generate Kafka configurations with kafka_config_generator.py
  4. Generate Flink/Spark job scaffolding with stream_processor.py
  5. Deploy and monitor with streaming_quality_validator.py

Expected Output: Streaming pipeline processing 10K+ events/sec with P99 latency < 1s, exactly-once delivery, and real-time quality monitoring

World-class data engineering for production-grade data systems, scalable pipelines, and enterprise data platforms.

Overview

This skill provides comprehensive expertise in data engineering fundamentals through advanced production patterns. From designing medallion architectures to implementing real-time streaming pipelines, it covers the full spectrum of modern data engineering including ETL/ELT design, data quality frameworks, pipeline orchestration, and DataOps practices.

What This Skill Provides:

  • Production-ready pipeline templates (Airflow, Spark, dbt)
  • Comprehensive data quality validation framework
  • Performance optimization and cost analysis tools
  • Data architecture patterns (Lambda, Kappa, Medallion)
  • Complete DataOps CI/CD workflows

Best For:

  • Building scalable data pipelines for enterprise systems
  • Implementing data quality and governance frameworks
  • Optimizing ETL performance and cloud costs
  • Designing modern data architectures (lake, warehouse, lakehouse)
  • Production ML/AI data infrastructure

Quick Start

Pipeline Orchestration

# Generate Airflow DAG from configuration
python scripts/pipeline_orchestrator.py --config pipeline_config.yaml --output dags/

# Validate pipeline configuration
python scripts/pipeline_orchestrator.py --config pipeline_config.yaml --validate

# Use incremental load template
python scripts/pipeline_orchestrator.py --template incremental --output dags/

Data Quality Validation

# Validate CSV file with quality checks
python scripts/data_quality_validator.py --input data/sales.csv --output report.html

# Validate database table with custom rules
python scripts/data_quality_validator.py \
    --connection postgresql://user:pass@host/db \
    --table sales_transactions \
    --rules rules/sales_validation.yaml \
    --threshold 0.95
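The scan above flags examples that place user:pass credentials directly in commands. A minimal sketch of keeping the connection string out of the command line instead, assuming a DATABASE_URL environment variable (the variable name is an assumption, not something the skill defines):

import os
import subprocess

# DATABASE_URL is an assumed variable name; set it via your secret manager or shell profile.
conn = os.environ["DATABASE_URL"]
subprocess.run(
    ["python", "scripts/data_quality_validator.py",
     "--connection", conn,
     "--table", "sales_transactions",
     "--threshold", "0.95"],
    check=True,
)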

Performance Optimization

# Analyze pipeline performance and get recommendations
python scripts/etl_performance_optimizer.py \
    --airflow-db postgresql://host/airflow \
    --dag-id sales_etl_pipeline \
    --days 30 \
    --optimize

# Analyze Spark job performance
python scripts/etl_performance_optimizer.py \
    --spark-history-server http://spark-history:18080 \
    --app-id app-20250115-001

Real-Time Streaming

# Validate streaming pipeline configuration
python scripts/stream_processor.py --config streaming_config.yaml --validate

# Generate Kafka topic and client configurations
python scripts/kafka_config_generator.py \
    --topic user-events \
    --partitions 12 \
    --replication 3 \
    --output kafka/topics/

# Generate exactly-once producer configuration
python scripts/kafka_config_generator.py \
    --producer \
    --profile exactly-once \
    --output kafka/producer.properties

# Generate Flink job scaffolding
python scripts/stream_processor.py \
    --config streaming_config.yaml \
    --mode flink \
    --generate \
    --output flink-jobs/

# Monitor streaming quality
python scripts/streaming_quality_validator.py \
    --lag --consumer-group events-processor --threshold 10000 \
    --freshness --topic processed-events --max-latency-ms 5000 \
    --output streaming-health-report.html

Core Workflows

1. Building Production Data Pipelines

Steps:

  1. Design Architecture: Choose pattern (Lambda, Kappa, Medallion) based on requirements
  2. Configure Pipeline: Create YAML configuration with sources, transformations, targets (see the sketch after this workflow)
  3. Generate DAG: python scripts/pipeline_orchestrator.py --config config.yaml
  4. Add Quality Checks: Define validation rules for data quality
  5. Deploy & Monitor: Deploy to Airflow, configure alerts, track metrics

Pipeline Patterns: See frameworks.md for Lambda Architecture, Kappa Architecture, Medallion Architecture (Bronze/Silver/Gold), and Microservices Data patterns.

Templates: See templates.md for complete Airflow DAG templates, Spark job templates, dbt models, and Docker configurations.
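Step 2 above calls for a YAML pipeline definition. The schema accepted by pipeline_orchestrator.py is documented in tools.md; the snippet below is only an illustrative sketch, and every key in it (name, schedule, sources, transformations, targets, quality_checks) is an assumption rather than the tool's confirmed format:

# Illustrative only: the structure below is assumed, not taken from the tool's actual schema.
import yaml

pipeline = {
    "name": "sales_etl_pipeline",
    "schedule": "@hourly",
    "sources": [{"type": "postgres", "table": "raw.sales", "connection": "${DATABASE_URL}"}],
    "transformations": [{"type": "sql", "file": "sql/clean_sales.sql"}],
    "targets": [{"type": "snowflake", "table": "analytics.sales_clean"}],
    "quality_checks": [{"column": "order_id", "rule": "not_null"}],
}

with open("pipeline_config.yaml", "w") as f:
    yaml.safe_dump(pipeline, f, sort_keys=False)

Feed the resulting file to the pipeline_orchestrator.py commands shown in the Quick Start.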

2. Data Quality Management

Steps:

  1. Define Rules: Create validation rules covering completeness, accuracy, consistency (a minimal sketch follows after this section)
  2. Run Validation: python scripts/data_quality_validator.py --rules rules.yaml
  3. Review Results: Analyze quality scores and failed checks
  4. Integrate CI/CD: Add validation to pipeline deployment process
  5. Monitor Trends: Track quality scores over time

Quality Framework: See frameworks.md for complete Data Quality Framework covering all dimensions (completeness, accuracy, consistency, timeliness, validity).

Validation Templates: See templates.md for validation configuration examples and Python API usage.
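For orientation, here is a minimal, generic sketch of completeness and validity checks with pandas. It is not the data_quality_validator.py API; the column names and the 0.95 threshold are assumptions mirroring the examples above:

import pandas as pd

df = pd.read_csv("data/sales.csv")

# Completeness: share of non-null values in critical columns (column names assumed).
completeness = df[["order_id", "amount"]].notna().mean()

# Validity: share of rows passing a simple business rule.
validity = (df["amount"] > 0).mean()

score = float(min(completeness.min(), validity))
print(f"quality score: {score:.2%}")
if score < 0.95:  # mirrors the --threshold 0.95 example
    raise SystemExit("quality below threshold; block downstream loads")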

3. Data Modeling & Transformation

Steps:

  1. Choose Modeling Approach: Dimensional (Kimball), Data Vault 2.0, or One Big Table
  2. Design Schema: Define fact tables, dimensions, and relationships
  3. Implement with dbt: Create staging, intermediate, and mart models
  4. Handle SCD: Implement slowly changing dimension logic (Type 1/2/3); a Type 2 sketch follows after this section
  5. Test & Deploy: Run dbt tests, generate documentation, deploy

Modeling Patterns: See frameworks.md for Dimensional Modeling (Kimball), Data Vault 2.0, One Big Table (OBT), and SCD implementations.

dbt Templates: See templates.md for complete dbt model templates including staging, intermediate, fact tables, and SCD Type 2 logic.
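Step 4 mentions SCD handling. As a generic illustration of the Type 2 idea (close out the current record, append a new version), here is a small pandas sketch with assumed column names; the production implementation belongs in the dbt templates referenced above:

import pandas as pd

# Current dimension (Type 2: one row per version, flagged with is_current).
dim = pd.DataFrame({
    "customer_id": [1], "city": ["Boston"],
    "valid_from": ["2024-01-01"], "valid_to": [None], "is_current": [True],
})
# Incoming snapshot with a changed attribute.
incoming = pd.DataFrame({"customer_id": [1], "city": ["Chicago"]})
today = "2025-01-15"

merged = dim[dim["is_current"]].merge(incoming, on="customer_id", suffixes=("", "_new"))
changed = merged[merged["city"] != merged["city_new"]]

# Close out the old version for changed keys...
mask = dim["customer_id"].isin(changed["customer_id"]) & dim["is_current"]
dim.loc[mask, ["valid_to", "is_current"]] = [today, False]

# ...and append the new version as the current row.
new_rows = changed[["customer_id", "city_new"]].rename(columns={"city_new": "city"})
new_rows = new_rows.assign(valid_from=today, valid_to=None, is_current=True)
dim = pd.concat([dim, new_rows], ignore_index=True)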

4. Performance Optimization

Steps:

  1. Profile Pipeline: Run performance analyzer on recent pipeline executions
  2. Identify Bottlenecks: Review execution time breakdown and slow tasks
  3. Apply Optimizations: Implement recommendations (partitioning, indexing, batching)
  4. Tune Spark Jobs: Optimize memory, parallelism, and shuffle settings
  5. Measure Impact: Compare before/after metrics, track cost savings

Optimization Strategies: See frameworks.md for performance best practices including partitioning strategies, query optimization, and Spark tuning.

Analysis Tools: See tools.md for complete documentation on etl_performance_optimizer.py with query analysis and Spark tuning.

5. Building Real-Time Streaming Pipelines

Steps:

  1. Architecture Selection: Choose Kappa (streaming-only) or Lambda (batch + streaming) architecture
  2. Configure Pipeline: Create YAML config with sources, processing engine, sinks, quality thresholds
  3. Generate Kafka Configs: python scripts/kafka_config_generator.py --topic events --partitions 12
  4. Generate Job Scaffolding: python scripts/stream_processor.py --mode flink --generate
  5. Deploy Infrastructure: Use Docker Compose for local dev, Kubernetes for production
  6. Monitor Quality: python scripts/streaming_quality_validator.py --lag --freshness --throughput

Streaming Patterns: See frameworks.md for stateful processing, stream joins, windowing, exactly-once semantics, and CDC patterns.

Templates: See templates.md for Flink DataStream jobs, Kafka Streams applications, PyFlink templates, and Docker Compose configurations.
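The exactly-once guarantees mentioned throughout this workflow come down to idempotent, transactional producers on the Kafka side. A minimal sketch using the confluent_kafka client; the broker address, topic, and transactional id are placeholders, and the skill's kafka_config_generator.py emits equivalent settings as .properties files rather than Python:

from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "transactional.id": "producer-001",     # matches the --transactional-id example below
    "enable.idempotence": True,
    "acks": "all",
})

producer.init_transactions()
producer.begin_transaction()
producer.produce("user-events", key="user-42", value=b'{"event": "click"}')
producer.commit_transaction()  # records become visible to read_committed consumers atomically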

Python Tools

pipeline_orchestrator.py

Automated Airflow DAG generation with intelligent dependency resolution and monitoring.

Key Features:

  • Generate production-ready DAGs from YAML configuration
  • Automatic task dependency resolution
  • Built-in retry logic and error handling
  • Multi-source support (PostgreSQL, S3, BigQuery, Snowflake)
  • Integrated quality checks and alerting

Usage:

# Basic DAG generation
python scripts/pipeline_orchestrator.py --config pipeline_config.yaml --output dags/

# With validation
python scripts/pipeline_orchestrator.py --config config.yaml --validate

# From template
python scripts/pipeline_orchestrator.py --template incremental --output dags/
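For readers who have not seen a generated DAG, here is a minimal hand-written Airflow 2.x sketch of the same shape (retries plus a simple extract-to-load chain). The DAGs emitted by pipeline_orchestrator.py will differ in detail, and the schedule argument assumes Airflow 2.4+:

from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    ...  # pull from the source system (placeholder)

def load(**context):
    ...  # write to the target (placeholder)

with DAG(
    dag_id="sales_etl_pipeline",
    schedule="@hourly",              # Airflow 2.4+; older versions use schedule_interval
    start_date=datetime(2025, 1, 1),
    catchup=False,
    default_args={"retries": 3, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task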

Complete Documentation: See tools.md for full configuration options, templates, and integration examples.

data_quality_validator.py

Comprehensive data quality validation framework with automated checks and reporting.

Capabilities:

  • Multi-dimensional validation (completeness, accuracy, consistency, timeliness, validity)
  • Great Expectations integration
  • Custom business rule validation
  • HTML/PDF report generation
  • Anomaly detection
  • Historical trend tracking

Usage:

# Validate with custom rules
python scripts/data_quality_validator.py \
    --input data/sales.csv \
    --rules rules/sales_validation.yaml \
    --output report.html

# Database table validation
python scripts/data_quality_validator.py \
    --connection postgresql://host/db \
    --table sales_transactions \
    --threshold 0.95

Complete Documentation: See tools.md for rule configuration, API usage, and integration patterns.

etl_performance_optimizer.py

Pipeline performance analysis with actionable optimization recommendations.

Capabilities:

  • Airflow DAG execution profiling
  • Bottleneck detection and analysis
  • SQL query optimization suggestions
  • Spark job tuning recommendations
  • Cost analysis and optimization
  • Historical performance trending

Usage:

# Analyze Airflow DAG
python scripts/etl_performance_optimizer.py \
    --airflow-db postgresql://host/airflow \
    --dag-id sales_etl_pipeline \
    --days 30 \
    --optimize

# Spark job analysis
python scripts/etl_performance_optimizer.py \
    --spark-history-server http://spark-history:18080 \
    --app-id app-20250115-001

Complete Documentation: See tools.md for profiling options, optimization strategies, and cost analysis.

stream_processor.py

Streaming pipeline configuration generator and validator for Kafka, Flink, and Kinesis.

Capabilities:

  • Multi-platform support (Kafka, Flink, Kinesis, Spark Streaming)
  • Configuration validation with best practice checks
  • Flink/Spark job scaffolding generation
  • Kafka topic configuration generation
  • Docker Compose for local streaming stacks
  • Exactly-once semantics configuration

Usage:

# Validate configuration
python scripts/stream_processor.py --config streaming_config.yaml --validate

# Generate Kafka configurations
python scripts/stream_processor.py --config streaming_config.yaml --mode kafka --generate

# Generate Flink job scaffolding
python scripts/stream_processor.py --config streaming_config.yaml --mode flink --generate --output flink-jobs/

# Generate Docker Compose for local development
python scripts/stream_processor.py --config streaming_config.yaml --mode docker --generate

Complete Documentation: See tools.md for configuration format, validation checks, and generated outputs.

streaming_quality_validator.py

Real-time streaming data quality monitoring with comprehensive health scoring.

Capabilities:

  • Consumer lag monitoring with thresholds
  • Data freshness validation (P50/P95/P99 latency)
  • Schema drift detection
  • Throughput analysis (events/sec, bytes/sec)
  • Dead letter queue rate monitoring
  • Overall quality scoring with recommendations
  • Prometheus metrics export

Usage:

# Monitor consumer lag
python scripts/streaming_quality_validator.py \
    --lag --consumer-group events-processor --threshold 10000

# Monitor data freshness
python scripts/streaming_quality_validator.py \
    --freshness --topic processed-events --max-latency-ms 5000

# Full quality validation
python scripts/streaming_quality_validator.py \
    --lag --freshness --throughput --dlq \
    --output streaming-health-report.html
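For context on what the --lag check measures: consumer lag is the gap between each partition's high watermark and the group's committed offset. A minimal sketch with confluent_kafka (broker, group, and topic names are placeholders, and this is not the validator's internal implementation):

from confluent_kafka import Consumer, TopicPartition

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "events-processor",
    "enable.auto.commit": False,
})

topic = "processed-events"
partitions = consumer.list_topics(topic).topics[topic].partitions
assigned = [TopicPartition(topic, p) for p in partitions]

total_lag = 0
for tp in consumer.committed(assigned, timeout=10):
    _, high = consumer.get_watermark_offsets(tp, timeout=10)
    if tp.offset >= 0:  # a negative offset means the group has not committed yet
        total_lag += high - tp.offset

print(f"total consumer lag: {total_lag}")
consumer.close()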

Complete Documentation: See tools.md for all monitoring dimensions and integration patterns.

kafka_config_generator.py

Production-grade Kafka configuration generator with performance and security profiles.

Capabilities:

  • Topic configuration (partitions, replication, retention, compaction)
  • Producer profiles (high-throughput, exactly-once, low-latency, ordered)
  • Consumer profiles (exactly-once, high-throughput, batch)
  • Kafka Streams configuration with state store tuning
  • Security configuration (SASL-PLAIN, SASL-SCRAM, mTLS)
  • Kafka Connect source/sink configurations
  • Multiple output formats (properties, YAML, JSON)

Usage:

# Generate topic configuration
python scripts/kafka_config_generator.py \
    --topic user-events --partitions 12 --replication 3 --retention-hours 168

# Generate exactly-once producer
python scripts/kafka_config_generator.py \
    --producer --profile exactly-once --transactional-id producer-001

# Generate Kafka Streams config
python scripts/kafka_config_generator.py \
    --streams --application-id events-processor --exactly-once
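The topic settings above (partitions, replication, retention) correspond directly to what Kafka's admin API accepts. A minimal sketch with confluent_kafka's AdminClient, with the broker address as a placeholder and 168 hours of retention expressed in milliseconds:

from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # placeholder broker

topic = NewTopic(
    "user-events",
    num_partitions=12,
    replication_factor=3,
    config={"retention.ms": "604800000"},  # 168 hours, as in the example above
)

# create_topics is asynchronous; block on the returned futures to confirm creation.
for name, future in admin.create_topics([topic]).items():
    future.result()
    print(f"created {name}")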

Complete Documentation: See tools.md for all profiles, security options, and Connect configurations.

Reference Documentation

Frameworks (frameworks.md)

Comprehensive data engineering frameworks and patterns:

  • Architecture Patterns: Lambda, Kappa, Medallion, Microservices data architecture
  • Data Modeling: Dimensional (Kimball), Data Vault 2.0, One Big Table
  • ETL/ELT Patterns: Full load, incremental load, CDC, SCD, idempotent pipelines
  • Data Quality: Complete framework covering all quality dimensions
  • DataOps: CI/CD for data pipelines, testing strategies, monitoring
  • Orchestration: Airflow DAG patterns, backfill strategies
  • Real-Time Streaming: Stateful processing, stream joins, windowing strategies, exactly-once semantics, event time processing, watermarks, backpressure, Apache Flink patterns, AWS Kinesis patterns, CDC for streaming
  • Governance: Data catalog, lineage tracking, access control

Templates (templates.md)

Production-ready code templates and examples:

  • Airflow DAGs: Complete ETL DAG, incremental load, dynamic task generation
  • Spark Jobs: Batch processing, streaming, optimized configurations
  • dbt Models: Staging, intermediate, fact tables, dimensions with SCD Type 2
  • SQL Patterns: Incremental merge (upsert), deduplication, date spine, window functions
  • Python Pipelines: Data quality validation class, retry decorators, error handling (a retry-decorator sketch follows after this list)
  • Real-Time Streaming: Apache Flink DataStream jobs (Java), Kafka Streams applications, PyFlink jobs, AWS Kinesis consumers, Docker Compose for streaming stack
  • Kafka Configs: Producer/consumer properties templates, topic configurations, security configurations
  • Docker: Dockerfiles for data pipelines, Docker Compose for local development including streaming stack (Kafka, Flink, Schema Registry)
  • Configuration: dbt project config, Spark configuration, Airflow variables, streaming pipeline YAML
  • Testing: pytest fixtures, integration tests, data quality tests
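As a taste of the retry-decorator item above, here is a minimal, generic exponential-backoff decorator. The actual template lives in templates.md and may differ:

import functools
import time

def retry(max_attempts=3, base_delay=2.0):
    """Retry a flaky task with exponential backoff (generic sketch, not the skill's template)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise
                    time.sleep(base_delay * 2 ** (attempt - 1))
        return wrapper
    return decorator

@retry(max_attempts=5)
def load_batch():
    ...  # e.g. an API call or database write that can fail transiently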

Tools (tools.md)

Python automation tool documentation:

  • pipeline_orchestrator.py: Complete usage guide, configuration format, DAG templates
  • data_quality_validator.py: Validation rules, dimension checks, Great Expectations integration
  • etl_performance_optimizer.py: Performance analysis, query optimization, Spark tuning
  • stream_processor.py: Streaming pipeline configuration, validation, job scaffolding generation
  • streaming_quality_validator.py: Consumer lag, data freshness, schema drift, throughput monitoring
  • kafka_config_generator.py: Topic, producer, consumer, Kafka Streams, and Connect configurations
  • Integration Patterns: Airflow, dbt, CI/CD, monitoring systems, Prometheus
  • Best Practices: Configuration management, error handling, performance, monitoring, streaming quality

Tech Stack

Core Technologies:

  • Languages: Python 3.8+, SQL, Scala (Spark), Java (Flink)
  • Orchestration: Apache Airflow, Prefect, Dagster
  • Batch Processing: Apache Spark, dbt, Pandas
  • Stream Processing: Apache Kafka, Apache Flink, Kafka Streams, Spark Structured Streaming, AWS Kinesis
  • Storage: PostgreSQL, BigQuery, Snowflake, Redshift, S3, GCS
  • Schema Management: Confluent Schema Registry, AWS Glue Schema Registry
  • Containerization: Docker, Kubernetes
  • Monitoring: Datadog, Prometheus, Grafana, Kafka UI

Data Platforms:

  • Cloud Data Warehouses: Snowflake, BigQuery, Redshift
  • Data Lakes: Delta Lake, Apache Iceberg, Apache Hudi
  • Streaming Platforms: Apache Kafka, AWS Kinesis, Google Pub/Sub, Azure Event Hubs
  • Stream Processing Engines: Apache Flink, Kafka Streams, Spark Structured Streaming
  • Workflow: Airflow, Prefect, Dagster

Integration Points

This skill integrates with:

  • Orchestration: Airflow, Prefect, Dagster for workflow management
  • Transformation: dbt for SQL transformations and testing
  • Quality: Great Expectations for data validation
  • Monitoring: Datadog, Prometheus for pipeline monitoring
  • BI Tools: Looker, Tableau, Power BI for analytics
  • ML Platforms: MLflow, Kubeflow for ML pipeline integration
  • Version Control: Git for pipeline code and configuration

See tools.md for detailed integration patterns and examples.

Best Practices

Pipeline Design:

  1. Idempotent operations for safe reruns (see the sketch after this list)
  2. Incremental processing where possible
  3. Clear data lineage and documentation
  4. Comprehensive error handling
  5. Automated recovery mechanisms
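Idempotency in practice usually means a rerun overwrites exactly the slice it owns instead of appending duplicates. A minimal SQLAlchemy sketch that deletes and reloads one batch inside a single transaction; the table and column names are assumptions:

import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://host/db")  # supply credentials via env/secret manager
batch_date = "2025-01-15"
df = pd.read_csv("data/sales.csv")              # the slice being (re)loaded

# Delete-then-insert in one transaction: rerunning the job yields the same end state.
with engine.begin() as conn:
    conn.execute(text("DELETE FROM sales_clean WHERE batch_date = :d"), {"d": batch_date})
    df.assign(batch_date=batch_date).to_sql("sales_clean", conn, if_exists="append", index=False)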

Data Quality:

  1. Define quality rules early
  2. Validate at every pipeline stage
  3. Automate quality monitoring
  4. Track quality trends over time
  5. Block bad data from downstream

Performance:

  1. Partition large tables by date/region
  2. Use columnar formats (Parquet, ORC)
  3. Leverage predicate pushdown (see the sketch after this list)
  4. Optimize for your query patterns
  5. Monitor and tune regularly
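Items 1-3 above work together: a date-partitioned, columnar layout lets engines skip whole partitions and row groups. A minimal pandas/pyarrow sketch (paths and column names are assumptions):

import pandas as pd

df = pd.read_csv("data/sales.csv")

# Partitioned, columnar storage: one directory per event_date, stored as Parquet.
df.to_parquet("lake/sales/", partition_cols=["event_date"], index=False)

# Predicate pushdown: only the matching partition is read back.
jan_15 = pd.read_parquet("lake/sales/", filters=[("event_date", "=", "2025-01-15")])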

Operations:

  1. Version control everything
  2. Automate testing and deployment
  3. Implement comprehensive monitoring
  4. Document runbooks for incidents
  5. Regular performance reviews

Performance Targets

Batch Pipeline Execution:

  • P50 latency: < 5 minutes (hourly pipelines)
  • P95 latency: < 15 minutes
  • Success rate: > 99%
  • Data freshness: < 1 hour behind source

Streaming Pipeline Execution:

  • Throughput: 10K+ events/second sustained
  • End-to-end latency: P99 < 1 second
  • Consumer lag: < 10K records behind
  • Exactly-once delivery: Zero duplicates or losses

Data Quality (Batch):

  • Quality score: > 95%
  • Completeness: > 99%
  • Timeliness: < 2 hours data lag
  • Zero critical failures

Streaming Quality:

  • Data freshness: P95 < 5 minutes from event generation
  • Late data rate: < 5% outside watermark window
  • Dead letter queue rate: < 1%
  • Schema compatibility: 100% backward/forward compatible changes

Cost Efficiency:

  • Cost per GB processed: < $0.10
  • Cloud cost trend: Stable or decreasing
  • Resource utilization: > 70%

Resources


Version: 2.0.0
Last Updated: December 16, 2025
Documentation Structure: Progressive disclosure with comprehensive references
Streaming Enhancement: Task #8 - Real-time streaming capabilities added
