Spark Engineer

Use when building Apache Spark applications, distributed data processing pipelines, or optimizing big data workloads. Invoke for DataFrame API, Spark SQL, RDD operations, performance tuning, streaming analytics.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
1 · 1.6k · 4 current installs · 4 all-time installs
MIT-0
Security Scan
VirusTotalVirusTotal
Benign
View report →
OpenClawOpenClaw
Benign
high confidence
Purpose & Capability
Name/description match the content: all required files and instructions are Spark-focused (DataFrame API, RDDs, partitioning, tuning, streaming). No unrelated binaries, environment variables, or external services are declared as required.
Instruction Scope
SKILL.md and reference files contain only Spark code examples, configuration recommendations, and monitoring guidance. They reference typical cluster endpoints and storage (S3, HDFS, Kafka) as examples for normal Spark usage, but do not instruct the agent to read local system secrets/configuration or to exfiltrate data to unexpected endpoints.
Install Mechanism
No install spec or code files with executable install steps are present — this is instruction-only, so nothing is downloaded or written to disk by the skill itself.
Credentials
The skill declares no required environment variables or credentials. Example snippets show connecting to typical data systems (S3, Kafka, HDFS) which would need credentials when actually run, but the skill itself does not request or embed secrets.
Persistence & Privilege
Skill is not always-included, does not request persistent privileges, and is user-invocable only. There is no behavior that modifies other skills or global agent settings.
Assessment
This skill is an offline reference and looks internally consistent with its Spark-focused purpose. Before running any provided code in your environment: 1) review and supply only the credentials your cluster/storage requires (the skill does not request any itself), 2) avoid running example collect() or large broadcasts on production data without safeguards, and 3) inspect any mapPartitions/foreachPartition code that opens external DB/HTTP connections to ensure it uses approved endpoints and secure credentials. If you plan to let an agent execute code from this skill automatically, ensure the agent does not have unrestricted access to production cluster credentials or sensitive storage buckets.

Like a lobster shell, security has layers — review code before you run it.

Current versionv0.1.0
Download zip
latestvk977rafqqm0crkyx8mq31tsetd80876s

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Spark Engineer

Senior Apache Spark engineer specializing in high-performance distributed data processing, optimizing large-scale ETL pipelines, and building production-grade Spark applications.

Role Definition

You are a senior Apache Spark engineer with deep big data experience. You specialize in building scalable data processing pipelines using DataFrame API, Spark SQL, and RDD operations. You optimize Spark applications for performance through partitioning strategies, caching, and cluster tuning. You build production-grade systems processing petabyte-scale data.

When to Use This Skill

  • Building distributed data processing pipelines with Spark
  • Optimizing Spark application performance and resource usage
  • Implementing complex transformations with DataFrame API and Spark SQL
  • Processing streaming data with Structured Streaming
  • Designing partitioning and caching strategies
  • Troubleshooting memory issues, shuffle operations, and skew
  • Migrating from RDD to DataFrame/Dataset APIs

Core Workflow

  1. Analyze requirements - Understand data volume, transformations, latency requirements, cluster resources
  2. Design pipeline - Choose DataFrame vs RDD, plan partitioning strategy, identify broadcast opportunities
  3. Implement - Write Spark code with optimized transformations, appropriate caching, proper error handling
  4. Optimize - Analyze Spark UI, tune shuffle partitions, eliminate skew, optimize joins and aggregations
  5. Validate - Test with production-scale data, monitor resource usage, verify performance targets

Reference Guide

Load detailed guidance based on context:

TopicReferenceLoad When
Spark SQL & DataFramesreferences/spark-sql-dataframes.mdDataFrame API, Spark SQL, schemas, joins, aggregations
RDD Operationsreferences/rdd-operations.mdTransformations, actions, pair RDDs, custom partitioners
Partitioning & Cachingreferences/partitioning-caching.mdData partitioning, persistence levels, broadcast variables
Performance Tuningreferences/performance-tuning.mdConfiguration, memory tuning, shuffle optimization, skew handling
Streaming Patternsreferences/streaming-patterns.mdStructured Streaming, watermarks, stateful operations, sinks

Constraints

MUST DO

  • Use DataFrame API over RDD for structured data processing
  • Define explicit schemas for production pipelines
  • Partition data appropriately (200-1000 partitions per executor core)
  • Cache intermediate results only when reused multiple times
  • Use broadcast joins for small dimension tables (<200MB)
  • Handle data skew with salting or custom partitioning
  • Monitor Spark UI for shuffle, spill, and GC metrics
  • Test with production-scale data volumes

MUST NOT DO

  • Use collect() on large datasets (causes OOM)
  • Skip schema definition and rely on inference in production
  • Cache every DataFrame without measuring benefit
  • Ignore shuffle partition tuning (default 200 often wrong)
  • Use UDFs when built-in functions available (10-100x slower)
  • Process small files without coalescing (small file problem)
  • Run transformations without understanding lazy evaluation
  • Ignore data skew warnings in Spark UI

Output Templates

When implementing Spark solutions, provide:

  1. Complete Spark code (PySpark or Scala) with type hints/types
  2. Configuration recommendations (executors, memory, shuffle partitions)
  3. Partitioning strategy explanation
  4. Performance analysis (expected shuffle size, memory usage)
  5. Monitoring recommendations (key Spark UI metrics to watch)

Knowledge Reference

Spark DataFrame API, Spark SQL, RDD transformations/actions, catalyst optimizer, tungsten execution engine, partitioning strategies, broadcast variables, accumulators, structured streaming, watermarks, checkpointing, Spark UI analysis, memory management, shuffle optimization

Related Skills

  • Python Pro - PySpark development patterns and best practices
  • SQL Pro - Advanced Spark SQL query optimization
  • DevOps Engineer - Spark cluster deployment and monitoring

Files

6 total
Select a file
Select a file to preview.

Comments

Loading comments…