AWS Redshift Skills

v0.1.0

AWS Redshift interaction skill for managing Redshift Provisioned, Redshift Serverless, and executing SQL queries via the Redshift Data API. Manage clusters,...

Security Scan

  • VirusTotal: Benign
  • OpenClaw: Benign (high confidence)
Purpose & Capability
Name and description match the included code: the package provides boto3-based clients and tool functions to manage Redshift Provisioned, Serverless, and the Data API. Required binary (python3) and optional env vars (cluster/workgroup names, IAM role, secret ARNs) align with that purpose. Minor oddity: primaryEnv is set to AWS_REGION (a region, not a credential) which is unusual but does not break functionality.
Instruction Scope
SKILL.md and the scripts limit behavior to AWS Redshift/S3 APIs via boto3, reading optional local KEY=VALUE config files and environment variables. Instructions ask to use boto3's default credential chain and to put temporary files under ./tmp. There are no instructions to read unrelated system files, contact external endpoints beyond AWS, or exfiltrate data.
Install Mechanism
No external download/install mechanism is declared (no installer script). The repo contains Python code and documents that boto3 is required; installing boto3 from PyPI is standard. No opaque URL downloads or extracted archives are present.
Credentials
The skill declares only AWS_REGION as a required env var, which is minimal and expected. The code uses boto3's default credential chain, meaning it will use any AWS credentials accessible in the runtime (env vars, config file, IAM role, or instance profile). That is appropriate for AWS management but important to note: the agent will inherit whatever AWS permissions the host/agent has and can perform destructive operations (create/delete/resize snapshots or clusters) if the credentials permit.
Persistence & Privilege
The manifest sets always:false, and model invocation is allowed (the platform default). The skill does not request system-wide persistent privileges or modify other skills. It does suggest writing temporary files under ./tmp and modifying sys.path to import the skill — normal for a local Python skill.
Assessment
This skill appears to be a legitimate Redshift management tool and will use boto3 (the AWS SDK). Before installing or enabling it:

  1. Ensure the agent/runtime uses least-privilege AWS credentials — grant only the Redshift/S3 permissions you intend (consider a read-only role if you only need queries).
  2. Review and restrict IAM permissions if you do not want cluster creation/deletion or snapshot restores.
  3. Note that the skill will honor boto3's default credential chain (environment vars, shared config, instance role), so any credentials available to the host will be used.
  4. The registry metadata sets AWS_REGION as the primaryEnv (odd but harmless); confirm you supply the intended region and optional REDSHIFT_* env vars.
  5. If you need stronger guarantees, run the skill in an environment scoped to a non-production AWS account or with a role limited to safe actions (e.g., only Data API read operations).


Runtime requirements

  • 🔴 Clawdis
  • Bins: python3
  • Env: AWS_REGION
  • Primary env: AWS_REGION
  • Latest: vk9794rxvqaz5x91acbvrztaqqs83594f
  • 115 downloads · 0 stars · 1 version
  • Updated 1mo ago
  • v0.1.0 · MIT-0

AWS Redshift Skills

A Python skill for interacting with AWS Redshift across two deployment modes: Redshift Provisioned and Redshift Serverless, plus a shared Data API for SQL execution.

When to Use (Trigger Phrases)

Invoke this skill when the user mentions:

"List Redshift clusters"
"Create a Redshift cluster"
"Pause my Redshift cluster"
"List Redshift Serverless workgroups"
"Create a namespace"
"Run a SQL query on Redshift"
"Execute COPY from S3 to Redshift"
"UNLOAD data to S3"
"Check query status"
"List tables in Redshift"
"Describe Redshift table columns"
"Resize my Redshift cluster"

Any request involving Redshift Provisioned clusters/snapshots, Redshift Serverless workgroups/namespaces, or Data API SQL execution.

Feature List

Redshift Provisioned

  • Clusters: List, describe, create, delete, resize, pause, resume, reboot clusters
  • Snapshots: Create, describe, restore from, delete snapshots

Redshift Serverless

  • Workgroups: List, get, create, update, delete workgroups (RPU configuration)
  • Namespaces: List, get, create, delete namespaces (database configuration)

Redshift Data API

  • SQL Execution: Execute single or batch SQL statements (async or sync with polling)
  • Results: Get query results with pagination
  • Lifecycle: Describe statement status, list recent statements, cancel running queries
  • Metadata: List databases, schemas, tables; describe table columns
  • Data Movement: COPY from S3, UNLOAD to S3
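
COPY and UNLOAD are ordinary SQL statements submitted through the Data API. A minimal sketch of composing them (the helper names are illustrative, not the skill's actual API, and the IAM role must already have access to the bucket):

```python
def build_copy(table, s3_uri, iam_role_arn, fmt="CSV"):
    """Compose a COPY statement that loads `table` from S3."""
    return (f"COPY {table} FROM '{s3_uri}' "
            f"IAM_ROLE '{iam_role_arn}' FORMAT AS {fmt}")

def build_unload(query, s3_uri, iam_role_arn):
    """Compose an UNLOAD statement that writes query results to S3 as Parquet.

    Redshift requires single quotes inside the quoted query to be doubled.
    """
    escaped = query.replace("'", "''")
    return (f"UNLOAD ('{escaped}') TO '{s3_uri}' "
            f"IAM_ROLE '{iam_role_arn}' PARQUET")
```

Either string is then passed to the Data API's execute-statement tool like any other SQL.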

Initial Setup

  1. Python 3.8+ with boto3>=1.26.0:

    pip install "boto3>=1.26.0"
    
  2. AWS credentials via boto3 default chain (env vars, config files, IAM roles).

  3. Environment variables (all optional, validated at point of use):

    export AWS_REGION="us-east-1"
    
    # Redshift Provisioned
    export REDSHIFT_CLUSTER_ID="my-cluster"
    export REDSHIFT_DATABASE="dev"
    export REDSHIFT_DB_USER="admin"
    
    # Redshift Serverless
    export REDSHIFT_WORKGROUP_NAME="my-workgroup"
    export REDSHIFT_NAMESPACE_NAME="my-namespace"
    
    # Shared
    export REDSHIFT_IAM_ROLE_ARN="arn:aws:iam::123456789:role/redshift-role"
    export REDSHIFT_S3_LOG_URI="s3://my-bucket/redshift-logs/"
    export REDSHIFT_SECRET_ARN="arn:aws:secretsmanager:us-east-1:123456789:secret:my-secret"
    

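Since every variable above is optional, configuration can be collected once and validated lazily. A minimal sketch of that pattern (the function names `redshift_config` and `require` are illustrative, not the skill's actual API):

```python
import os

# All settings are optional; missing values stay None and are validated
# only at the point of use, matching the skill's documented behavior.
OPTIONAL_KEYS = (
    "AWS_REGION",
    "REDSHIFT_CLUSTER_ID",
    "REDSHIFT_DATABASE",
    "REDSHIFT_DB_USER",
    "REDSHIFT_WORKGROUP_NAME",
    "REDSHIFT_NAMESPACE_NAME",
    "REDSHIFT_IAM_ROLE_ARN",
    "REDSHIFT_S3_LOG_URI",
    "REDSHIFT_SECRET_ARN",
)

def redshift_config() -> dict:
    """Snapshot the optional environment variables into a plain dict."""
    return {key: os.environ.get(key) for key in OPTIONAL_KEYS}

def require(cfg: dict, key: str) -> str:
    """Raise a clear error only when an operation actually needs a value."""
    value = cfg.get(key)
    if not value:
        raise ValueError(f"{key} must be exported before this operation")
    return value
```

This keeps read-only metadata queries usable even when only AWS_REGION is set.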
How to Manage Redshift

1. Redshift Provisioned

Traditional Redshift provisioned clusters with dedicated compute nodes.

  • Cluster & snapshot management: scripts/provisioned/redshift_provisioned_cli.py — 12 @tool functions
  • Detailed guide: references/provisioned/cluster_guide.md — Cluster lifecycle, node types, resize
  • Detailed guide: references/provisioned/snapshot_guide.md — Snapshot create/restore/share
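
Under the hood, a provisioned-cluster tool is a thin wrapper over the boto3 `redshift` client. A minimal sketch (the injectable `client` parameter is an illustration for testability, not the skill's actual signature):

```python
def pause_cluster(cluster_id: str, region: str = "us-east-1", client=None) -> str:
    """Pause a provisioned cluster and return its new status.

    By default a boto3 `redshift` client is created lazily, so
    credentials resolve through boto3's default chain (env vars,
    shared config, IAM role); `client` can be injected for testing.
    """
    if client is None:
        import boto3  # deferred so the module imports even without boto3
        client = boto3.client("redshift", region_name=region)
    response = client.pause_cluster(ClusterIdentifier=cluster_id)
    return response["Cluster"]["ClusterStatus"]
```

The other lifecycle tools (resume, reboot, resize, delete) follow the same shape with the corresponding API calls.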

2. Redshift Serverless

Fully managed serverless data warehouse with automatic scaling.

  • Workgroup & namespace management: scripts/serverless/redshift_serverless_cli.py — 9 @tool functions
  • Detailed guide: references/serverless/workgroup_guide.md — Workgroup management
  • Detailed guide: references/serverless/namespace_guide.md — Namespace management
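
The Serverless tools use the separate boto3 `redshift-serverless` client, whose responses use lowerCamelCase field names. A minimal sketch of listing workgroups (illustrative, not the skill's exact code):

```python
def list_workgroups(region: str = "us-east-1", client=None) -> list:
    """Return (name, status, base RPU) tuples for each Serverless workgroup.

    `client` is injectable for testing; by default a boto3
    `redshift-serverless` client is created lazily.
    """
    if client is None:
        import boto3  # deferred import; credentials via the default chain
        client = boto3.client("redshift-serverless", region_name=region)
    workgroups = client.list_workgroups().get("workgroups", [])
    return [(wg["workgroupName"], wg["status"], wg.get("baseCapacity"))
            for wg in workgroups]
```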

3. Redshift Data API

SQL execution via the Data API. Works with both Provisioned and Serverless.

  • Query execution & metadata: scripts/data_api/redshift_data_cli.py — 12 @tool functions
  • Detailed guide: references/data_api/query_guide.md — SQL execution, COPY/UNLOAD
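
The synchronous execute-and-poll pattern the Data API tools implement can be sketched as follows (a hedged illustration, not the skill's exact code; parameter names follow the boto3 `redshift-data` API):

```python
import time

def run_sql(sql, database, workgroup=None, cluster_id=None, db_user=None,
            region="us-east-1", client=None, poll_seconds=2.0):
    """Submit a statement via the Data API and poll until it completes.

    Set `workgroup` for Serverless or `cluster_id` (plus optional
    `db_user`) for Provisioned; `client` is injectable for testing.
    """
    if client is None:
        import boto3  # deferred import; credentials via the default chain
        client = boto3.client("redshift-data", region_name=region)
    kwargs = {"Sql": sql, "Database": database}
    if workgroup:
        kwargs["WorkgroupName"] = workgroup
    else:
        kwargs["ClusterIdentifier"] = cluster_id
        if db_user:
            kwargs["DbUser"] = db_user
    statement_id = client.execute_statement(**kwargs)["Id"]
    while True:
        desc = client.describe_statement(Id=statement_id)
        if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
            break
        time.sleep(poll_seconds)
    if desc["Status"] != "FINISHED":
        raise RuntimeError(desc.get("Error", desc["Status"]))
    if desc.get("HasResultSet"):
        return client.get_statement_result(Id=statement_id)["Records"]
    return []
```

For async use, the same three calls are exposed as separate tools so the caller can submit, check status later, and page through results.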

Available Scripts

  • scripts/provisioned/redshift_provisioned_cli.py — Redshift Provisioned @tool functions (12 tools)
  • scripts/serverless/redshift_serverless_cli.py — Redshift Serverless @tool functions (9 tools)
  • scripts/data_api/redshift_data_cli.py — Redshift Data API @tool functions (12 tools)
  • scripts/config/redshift_config.py — Unified configuration management
  • scripts/client/boto_client.py — boto3 client factory

References

  • references/provisioned/cluster_guide.md — Redshift Provisioned cluster management guide
  • references/provisioned/snapshot_guide.md — Redshift snapshot management guide
  • references/serverless/workgroup_guide.md — Redshift Serverless workgroup management guide
  • references/serverless/namespace_guide.md — Redshift Serverless namespace management guide
  • references/data_api/query_guide.md — Redshift Data API SQL execution guide

Requirements

  • When writing temporary files (scripts, notes, etc.), place them in the ./tmp folder.
  • When importing scripts packages, add the skill root to path: sys.path.append(${redshift_skill_root})
  • AWS credentials are handled by boto3's default credential chain — never pass access keys directly.
  • All configuration environment variables are optional and validated at the point of use.

Data Privacy & Trust

  • No credential storage: AWS credentials are resolved via boto3 default chain. No keys are stored or logged.
  • Secret masking: All functions automatically mask potential AWS credentials in output.
  • Read-only by default: Most operations are read-only queries. Write operations (cluster creation, deletion) require explicit user action.
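
A masking pass like the one described above can be approximated with regular expressions (a sketch, not the skill's actual implementation; real credential scanners are more thorough):

```python
import re

# AWS access key IDs are 20 uppercase alphanumerics with a known prefix
# (AKIA for long-term keys, ASIA for temporary ones); secret access keys
# are 40 characters drawn from A-Z, a-z, 0-9, '/', and '+'.
_ACCESS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")
_SECRET_KEY_RE = re.compile(r"(?<![A-Za-z0-9/+])[A-Za-z0-9/+]{40}(?![A-Za-z0-9/+])")

def mask_credentials(text: str) -> str:
    """Blank out anything that looks like an AWS credential before output."""
    text = _ACCESS_KEY_RE.sub("[MASKED_ACCESS_KEY]", text)
    return _SECRET_KEY_RE.sub("[MASKED_SECRET]", text)
```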

External Endpoints

This skill connects to:

  • AWS Redshift API (redshift.{region}.amazonaws.com)
  • AWS Redshift Serverless API (redshift-serverless.{region}.amazonaws.com)
  • AWS Redshift Data API (redshift-data.{region}.amazonaws.com)
  • AWS S3 API (s3.{region}.amazonaws.com) — for COPY/UNLOAD operations
