Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Hadoop

v1.0.0

Manage Hadoop clusters with HDFS operations, YARN job tuning, and distributed processing diagnostics.

0 stars · 427 downloads · 0 current · 0 all-time
by Iván (@ivangdavila)
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Benign
high confidence
Purpose & Capability
Name, description, and required binaries (hdfs, yarn, hadoop) match the documented capabilities (HDFS operations, YARN job management, diagnostics). No unrelated credentials or tools are requested.
Instruction Scope
Instructions explicitly tell the agent to run administrative hdfs/yarn commands and to read cluster logs/configs (e.g., /var/log, /etc/hadoop/conf). This is appropriate for cluster diagnostics but means the agent will read system-level files and may suggest destructive admin actions (the docs state destructive commands require explicit user confirmation).
Install Mechanism
No install spec or external downloads—instruction-only skill. Nothing is written to disk by an installer other than the skill's own memory files under ~/hadoop/, which is documented.
Credentials
The skill requests no environment variables or credentials. It documents that credentials (Kerberos keytabs) should be managed separately and that it does not store credentials—this is proportionate to its admin role.
Persistence & Privilege
The skill persists state under ~/hadoop/ (memory.md and cluster notes) which is reasonable. It is not always-included and uses normal autonomous invocation defaults; because it can run admin commands, users must confirm destructive actions when prompted.
Assessment
This skill appears coherent for Hadoop administration. If you install it: ensure the agent runs from a host that legitimately has hdfs/yarn/hadoop on PATH and appropriate cluster network access; expect the skill to read /var/log and /etc/hadoop/conf for diagnostics and to create ~/hadoop/ memory files.

The docs state that destructive commands (rm -rf, forcible safe-mode leave, DataNode data removal, killing apps) require your explicit confirmation; do not approve destructive operations unless you understand the consequences. Because the skill does not request credentials, continue to manage Kerberos keytabs/tickets yourself.

If you need stronger assurances, ask the owner for provenance (who maintains this skill) or run the agent on a sandboxed admin node rather than an arbitrary workstation.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🐘 Clawdis
OS: Linux · macOS
Bins: hdfs, yarn, hadoop
latest: vk977tbns8zgrjst70h1mnte2zs81tmhe
427 downloads
0 stars
1 version
Updated 5h ago
v1.0.0
MIT-0
Linux, macOS

Setup

If ~/hadoop/ doesn't exist or is empty, read setup.md and start the conversation naturally.

When to Use

User works with Hadoop ecosystem (HDFS, YARN, MapReduce, Hive). Agent handles cluster diagnostics, job optimization, storage management, and troubleshooting distributed processing failures.

Architecture

Memory lives in ~/hadoop/. See memory-template.md for structure.

~/hadoop/
├── memory.md        # Cluster configs, common issues, preferences
├── clusters/        # Per-cluster notes and configs
│   └── {name}.md    # Specific cluster context
└── scripts/         # Custom diagnostic scripts

Quick Reference

| Topic | File |
| --- | --- |
| Setup process | setup.md |
| Memory template | memory-template.md |
| HDFS operations | hdfs.md |
| YARN tuning | yarn.md |
| Troubleshooting | troubleshooting.md |

Core Rules

1. Verify Cluster State First

Before any operation, check cluster health:

hdfs dfsadmin -report
yarn node -list

Never assume the cluster is healthy. A single dead DataNode changes everything.
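The health check above can be scripted as a quick gate. A minimal sketch, assuming the "Dead datanodes (N):" summary line that recent `hdfs dfsadmin -report` releases print (older versions may format the report differently):

```shell
# Parse `hdfs dfsadmin -report` output (piped on stdin) and print the dead
# DataNode count. Assumes the "Dead datanodes (N):" summary line.
count_dead_datanodes() {
  grep -oE 'Dead datanodes \([0-9]+\)' | grep -oE '[0-9]+'
}

# Usage sketch:
#   dead=$(hdfs dfsadmin -report | count_dead_datanodes)
#   [ "${dead:-0}" -eq 0 ] || echo "WARNING: $dead dead DataNode(s)"
```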

2. Storage Before Compute

HDFS issues cascade into job failures. Always check:

hdfs dfs -df -h                    # Capacity
hdfs fsck / -files -blocks         # Block health

A job failing with "No space left" is storage, not code.
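A capacity check can likewise be automated. A sketch that flags filesystems above a usage threshold, assuming the `Filesystem Size Used Available Use%` column order of current `hdfs dfs -df` output:

```shell
# Flag filesystems at or above a usage threshold, reading `hdfs dfs -df`
# output on stdin. Column order (Filesystem, Size, Used, Available, Use%)
# is an assumption; verify against your Hadoop version.
check_usage() {
  awk -v t="$1" 'NR > 1 { pct = $5; sub(/%/, "", pct); if (pct + 0 >= t) print $1, pct "%" }'
}

# Usage sketch:  hdfs dfs -df | check_usage 85
```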

3. Resource Calculator Awareness

YARN allocates based on configured scheduler. Know which is active:

yarn rmadmin -getServiceState rm1
grep scheduler /etc/hadoop/conf/yarn-site.xml

The default CapacityScheduler and the FairScheduler behave very differently.
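Extracting the scheduler class can be done without eyeballing the XML. A sketch, assuming the conventional conf path and a one-tag-per-line yarn-site.xml layout, falling back to CapacityScheduler (Hadoop's shipped default) when the property is absent:

```shell
# Print the configured scheduler class from yarn-site.xml. Path and XML
# layout are assumptions; defaults to CapacityScheduler if unset.
get_scheduler() {
  conf="${1:-/etc/hadoop/conf/yarn-site.xml}"
  val=$(grep -A1 'yarn.resourcemanager.scheduler.class' "$conf" 2>/dev/null |
    sed -n 's/.*<value>\(.*\)<\/value>.*/\1/p')
  echo "${val:-org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler}"
}
```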

4. Replication Factor Context

Default replication=3. For temp data, suggest 1-2 to save space:

hdfs dfs -setrep -w 1 /tmp/scratch/

For critical data, verify replication is honored:

hdfs fsck /data/critical -files -blocks -replicaDetails

5. Log Location Awareness

Hadoop logs scatter across machines. Key locations:

| Component | Log Path |
| --- | --- |
| NameNode | /var/log/hadoop-hdfs/hadoop-hdfs-namenode-*.log |
| DataNode | /var/log/hadoop-hdfs/hadoop-hdfs-datanode-*.log |
| ResourceManager | /var/log/hadoop-yarn/yarn-yarn-resourcemanager-*.log |
| NodeManager | /var/log/hadoop-yarn/yarn-yarn-nodemanager-*.log |
| Application | yarn logs -applicationId <app_id> |

6. Safe Mode Handling

The NameNode enters safe mode on startup, or when too few block replicas have been reported:

hdfs dfsadmin -safemode get        # Check status
hdfs dfsadmin -safemode leave      # Exit (if blocks OK)

Never force-leave if blocks are actually missing.
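That rule can be enforced mechanically before suggesting `-safemode leave`. A sketch that reads an `hdfs fsck /` summary and only approves when nothing is missing or corrupt; the "Missing blocks:" and "Corrupt blocks:" summary lines are assumptions about fsck's report text:

```shell
# Guard for `-safemode leave`: read an fsck summary on stdin and print
# "leave-ok" only when no blocks are missing or corrupt.
safe_to_leave() {
  awk '/Missing blocks:/ { m = $NF }
       /Corrupt blocks:/ { c = $NF }
       END { if (m + 0 == 0 && c + 0 == 0) print "leave-ok"; else print "blocks-missing" }'
}

# Usage sketch:  hdfs fsck / | safe_to_leave
```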

7. Memory Settings Matter

The vast majority of "job killed" issues come down to memory settings:

# Container settings
yarn.nodemanager.resource.memory-mb     # Total per node
yarn.scheduler.minimum-allocation-mb    # Min container
mapreduce.map.memory.mb                 # Map task
mapreduce.reduce.memory.mb              # Reduce task

Check these before assuming code is wrong.
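A back-of-envelope calculation from the settings above: how many minimum-size containers one NodeManager can host. The values in the usage sketch are illustrative, not recommended defaults:

```shell
# Containers one NodeManager can host at the minimum allocation size.
# Args: <yarn.nodemanager.resource.memory-mb> <yarn.scheduler.minimum-allocation-mb>
containers_per_node() {
  echo $(( $1 / $2 ))
}

# Usage sketch (illustrative values):
#   containers_per_node 16384 1024   # 16 GB node, 1 GB min container
```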

HDFS Operations

Essential Commands

# Navigation
hdfs dfs -ls /path
hdfs dfs -du -h /path              # Size with human units
hdfs dfs -count -q /path           # Quota info

# Data movement
hdfs dfs -put local.txt /hdfs/     # Upload
hdfs dfs -get /hdfs/file.txt .     # Download
hdfs dfs -cp /src /dst             # Copy within HDFS
hdfs dfs -mv /src /dst             # Move within HDFS

# Maintenance
hdfs dfs -rm -r /path              # Delete (trash)
hdfs dfs -rm -r -skipTrash /path   # Delete (permanent)
hdfs dfs -expunge                  # Empty trash

Block Management

# Find corrupt blocks
hdfs fsck / -list-corruptfileblocks

# Delete corrupt file (after confirming unrecoverable)
hdfs fsck /path/file -delete

# Force replication
hdfs dfs -setrep -w 3 /important/data/

YARN Job Management

Application Lifecycle

# List applications
yarn application -list                    # Running
yarn application -list -appStates ALL     # All states

# Application details
yarn application -status <app_id>

# Kill stuck application
yarn application -kill <app_id>

# Get logs (after completion)
yarn logs -applicationId <app_id>
yarn logs -applicationId <app_id> -containerId <container_id>
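Listings like the above can be filtered in bulk before killing anything. A sketch that pulls one user's RUNNING application IDs out of `yarn application -list` output; the tab-separated column order (Id, Name, Type, User, Queue, State, ...) is an assumption to check against your version:

```shell
# Print the RUNNING application IDs belonging to one user, reading
# `yarn application -list` output on stdin. Column order is assumed.
apps_for_user() {
  awk -F'\t' -v u="$1" '$1 ~ /^application_/ && $4 == u && $6 == "RUNNING" { print $1 }'
}

# Usage sketch:  yarn application -list | apps_for_user alice
```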

Queue Management

# List queues
yarn queue -list

# Queue status
yarn queue -status <queue_name>

# Move application between queues
yarn application -movetoqueue <app_id> -queue <target_queue>

Common Traps

  • Deleting without -skipTrash on full cluster → Trash still uses space, cluster stays full
  • Setting container memory below JVM heap → Instant container kill, confusing errors
  • Ignoring speculative execution on slow jobs → Wastes resources on duplicated tasks
  • Running fsck on busy cluster → Performance impact, run during maintenance
  • Assuming HDFS = POSIX semantics → No append-in-place, no random writes
  • Forgetting timezone in scheduling → Oozie/Airflow jobs fire at wrong times
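The container-memory-below-JVM-heap trap above is cheap to check numerically. A sketch: the -Xmx heap must sit below the container limit, since heap plus JVM overhead otherwise exceeds the limit and YARN kills the container. Keeping heap near 80% of the container is a common rule of thumb, not a Hadoop default:

```shell
# Flag containers whose JVM heap leaves no headroom under the limit.
# Args: <container memory mb> <-Xmx heap mb>
heap_fits_container() {
  if [ "$2" -lt "$1" ]; then echo ok; else echo container-kill-risk; fi
}

# Usage sketch:  heap_fits_container 2048 1638   # ~80% heap-to-container
```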

Security & Privacy

Data that stays local:

  • Cluster notes saved in ~/hadoop/clusters/
  • Preferences and environment context

What commands access:

  • hdfs/yarn commands connect to your Hadoop cluster
  • Some commands read system paths (/var/log, /etc/hadoop/conf)
  • Destructive commands require explicit user confirmation

This skill does NOT:

  • Store credentials (use kinit/keytab separately)
  • Make external API calls beyond your cluster
  • Run destructive commands without asking first

Related Skills

Install with clawhub install <slug> if the user confirms:

  • linux — system administration
  • docker — containerized deployments
  • bash — shell scripting

Feedback

  • If useful: clawhub star hadoop
  • Stay updated: clawhub sync
