Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Cluster

v1.0.0

Perform data clustering analysis using k-means and hierarchical algorithms. Use when you need to group, classify, or segment datasets.

0 stars · 217 downloads · 1 version
by BytesAgain (ckchzh)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for ckchzh/cluster.

Prompt preview: Install & Setup
Install the skill "Cluster" (ckchzh/cluster) from ClawHub.
Skill page: https://clawhub.ai/ckchzh/cluster
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install cluster

ClawHub CLI

Via npx

npx clawhub@latest install cluster
Security Scan

  • VirusTotal — Suspicious (view report)
  • OpenClaw — Benign (high confidence)
Purpose & Capability
Name and description match the actual implementation. The included scripts implement k-means and simple hierarchical clustering, evaluation metrics, import/export, and listing of runs — all coherent with a clustering tool.
Instruction Scope
Runtime instructions are narrowly scoped: they tell the agent/user to supply input file paths and flags via environment variables and to run the bundled script. The code reads the specified input file(s), writes results to ~/.cluster/data.jsonl and config to ~/.cluster/config.json, and does not access unrelated system paths, network endpoints, or other credentials.
Install Mechanism
There is no install spec; this is instruction-only plus a bundled script. The script is executed locally and no external packages or remote downloads are performed. The Python code uses only the standard library.
Credentials
The skill does not request any credentials or special environment variables from the registry. Runtime environment variables (INPUT, K, ALGORITHM, RUN_ID, etc.) are normal for command-line tools and relate directly to the tool's purpose. No secrets or unrelated service keys are required.
Persistence & Privilege
The skill's "always" flag is false and it does not request persistent platform privileges. It creates and writes its own data and config under ~/.cluster, which is expected for this kind of tool and does not modify other skills or system-wide agent settings.
Assessment
This skill runs locally and appears to do only local clustering: you must provide input files via the INPUT environment variable, results are appended to ~/.cluster/data.jsonl, and config is stored in ~/.cluster/config.json. Recommended precautions before installing or using:

  • Review the bundled scripts (scripts/script.sh) yourself — they will be executed on your machine.
  • Avoid pointing INPUT at sensitive files unless you are comfortable with the tool recording the input_file path and derived results in ~/.cluster.
  • If you need to remove traces, delete ~/.cluster.
  • Ensure Python 3.8+ is available and be mindful of memory/CPU for large datasets.

Nothing in the package indicates network communication or credential exfiltration.

Like a lobster shell, security has layers — review code before you run it.

Latest: vk9761wyywfmxzxkwz8z41mj51x834nmj
217 downloads
0 stars
1 version
Updated 22h ago
v1.0.0
License: MIT-0

Cluster — Data Clustering Analysis Tool

Cluster is a command-line data clustering analysis tool that supports k-means and hierarchical clustering algorithms. It reads numerical data from CSV/JSONL sources, performs clustering, evaluates cluster quality, and exports results.

Data is stored in ~/.cluster/data.jsonl as JSONL records. Each record represents a clustering run with its parameters, assignments, centroids, and evaluation metrics.

Prerequisites

  • Python 3.8+ with standard library (no external packages required for basic operations)
  • bash shell

Commands

run

Run a clustering algorithm on input data.

Environment Variables:

  • INPUT (required) — Path to input CSV/JSONL file with numerical data
  • K — Number of clusters (default: 3)
  • ALGORITHM — Algorithm to use: kmeans or hierarchical (default: kmeans)
  • MAX_ITER — Maximum iterations for k-means (default: 100)
  • SEED — Random seed for reproducibility

Example:

INPUT=/path/to/data.csv K=5 ALGORITHM=kmeans bash scripts/script.sh run

assign

Assign new data points to existing clusters from a previous run.

Environment Variables:

  • RUN_ID (required) — ID of the clustering run to use
  • INPUT (required) — Path to new data points (CSV/JSONL)

Example:

RUN_ID=abc123 INPUT=/path/to/new_data.csv bash scripts/script.sh assign
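Assigning new points to an existing run reduces to a nearest-centroid lookup against that run's stored centroids. A minimal sketch (hypothetical function, not the bundled implementation):

```python
def assign_points(new_points, centroids):
    """Label each new point with the index of its nearest centroid (sketch)."""
    labels = []
    for p in new_points:
        dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
        labels.append(dists.index(min(dists)))
    return labels

labels = assign_points([[0.2, 0.1], [4.9, 5.3]], [[0.0, 0.0], [5.0, 5.0]])
print(labels)  # -> [0, 1]
```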

centroids

Display or export centroid coordinates for a clustering run.

Environment Variables:

  • RUN_ID (required) — ID of the clustering run
  • FORMAT — Output format: table, json, csv (default: table)

evaluate

Evaluate clustering quality with silhouette score, inertia, and Davies-Bouldin index.

Environment Variables:

  • RUN_ID (required) — ID of the clustering run to evaluate
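Of the reported metrics, inertia is the simplest to state: the sum of squared distances from each point to its assigned centroid (lower is tighter). An illustrative sketch; silhouette and Davies-Bouldin follow their standard definitions and are omitted here:

```python
def inertia(points, labels, centroids):
    """Sum of squared distances from each point to its assigned centroid."""
    return sum(
        sum((a - b) ** 2 for a, b in zip(p, centroids[label]))
        for p, label in zip(points, labels)
    )

score = inertia(
    [[0.0, 0.0], [0.0, 2.0], [4.0, 0.0]],  # three points
    [0, 0, 1],                              # cluster assignments
    [[0.0, 1.0], [4.0, 0.0]],               # two centroids
)
print(score)  # -> 2.0
```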

visualize

Generate a text-based or ASCII visualization of cluster assignments.

Environment Variables:

  • RUN_ID (required) — ID of the clustering run
  • DIMS — Dimensions to plot, comma-separated (default: first two)
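One way such a text visualization can work is to bucket each 2-D point into a character grid and mark it with its cluster ID. A sketch of the idea, not the skill's actual renderer:

```python
def ascii_plot(points, labels, width=20, height=8):
    """Render 2-D points as a character grid, one digit per cluster (sketch)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    grid = [[" "] * width for _ in range(height)]
    for (x, y), label in zip(points, labels):
        # Scale each coordinate into the grid; guard against a zero range.
        col = int((x - min(xs)) / ((max(xs) - min(xs)) or 1) * (width - 1))
        row = int((y - min(ys)) / ((max(ys) - min(ys)) or 1) * (height - 1))
        grid[height - 1 - row][col] = str(label)
    return "\n".join("".join(r) for r in grid)

art = ascii_plot([[0, 0], [1, 0.2], [9, 7], [8, 6.5]], [0, 0, 1, 1])
print(art)
```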

export

Export clustering results to a file.

Environment Variables:

  • RUN_ID (required) — ID of the run to export
  • OUTPUT — Output file path (default: stdout)
  • FORMAT — Export format: json, csv, jsonl (default: json)

import

Import a previously exported clustering run.

Environment Variables:

  • INPUT (required) — Path to the file to import

config

View or update configuration settings.

Environment Variables:

  • KEY — Configuration key to set
  • VALUE — Configuration value

list

List all stored clustering runs with summary info.

Environment Variables:

  • LIMIT — Maximum runs to display (default: 20)
  • SORT — Sort field: date, k, score (default: date)

stats

Show aggregate statistics across all clustering runs.

help

Display usage information and available commands.

version

Display the current version of the cluster tool.

Data Storage

All clustering runs are stored in ~/.cluster/data.jsonl. Each line is a JSON object with fields:

  • id — Unique run identifier
  • timestamp — ISO 8601 creation time
  • algorithm — Algorithm used
  • k — Number of clusters
  • centroids — List of centroid coordinates
  • assignments — Mapping of data point indices to cluster IDs
  • metrics — Evaluation metrics (silhouette, inertia, etc.)
  • input_file — Source data file path
  • num_points — Number of data points clustered
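As an illustration, one such record could be built and serialized like this (all field values below are hypothetical):

```python
import json

# Hypothetical example of a single run record with the fields listed above.
record = {
    "id": "abc123",
    "timestamp": "2025-01-01T12:00:00Z",
    "algorithm": "kmeans",
    "k": 2,
    "centroids": [[0.05, 0.1], [5.1, 4.9]],
    "assignments": {"0": 0, "1": 0, "2": 1, "3": 1},
    "metrics": {"silhouette": 0.91, "inertia": 0.13},
    "input_file": "/path/to/data.csv",
    "num_points": 4,
}
line = json.dumps(record)  # one line, ready to append to ~/.cluster/data.jsonl
print(json.loads(line)["k"])  # -> 2
```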

Configuration

Config is stored in ~/.cluster/config.json. Available keys:

  • default_k — Default number of clusters (default: 3)
  • default_algorithm — Default algorithm (default: kmeans)
  • max_iterations — Default max iterations (default: 100)
  • random_seed — Default random seed (default: 42)

Powered by BytesAgain | bytesagain.com | hello@bytesagain.com
