Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Task Automator

Automate repetitive computer tasks including file operations, data processing, web scraping, and API integrations. Use when you need to batch process files,...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 177 · 1 current install · 1 all-time install
by Yinanping@yinanping-CPU
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name, description, and the included run_task.py align with file-organization and data-conversion capabilities. However, SKILL.md advertises broader features (scheduling, workflows, API sync, e-commerce integrations) that are either placeholders in run_task.py or implemented by scripts referenced in the docs but not included in the bundle (e.g., schedule_task.py, run_workflow.py, convert_data.py, base_task). The examples also live under tasks-examples/ while the documentation refers to tasks/, which is inconsistent.
Instruction Scope
SKILL.md instructs running several scripts that do not exist in the package (schedule_task.py, run_workflow.py, convert_data.py, etc.). The provided run_task.py performs real file operations (shutil.move) and will move files under the user's home directory by default; that is expected for a file organizer but can be destructive if misconfigured. The docs also recommend storing API keys in environment variables, yet the code contains only placeholders for API, web, and e-commerce tasks and never shows how secrets would be used. This mismatch increases risk because users may supply credentials to workflows that are not fully implemented or audited.
Install Mechanism
There is no install specification (instruction-only with one code file). Nothing is downloaded or extracted during install, which minimizes supply-chain risk.
Credentials
The registry declares no required environment variables, but SKILL.md recommends using environment variables/.env for API keys and secrets. For real API or e‑commerce usage the skill will require credentials, but those are not documented as required in the metadata. This disconnect means credentials would be provided ad hoc (not scoped or validated), increasing risk if users supply high‑privilege keys without understanding where/how they are used.
Persistence & Privilege
The skill is not always-enabled and does not request elevated platform privileges. It writes logs to a local logs/ directory and can create/move files/directories under user paths; that is expected behavior for an automation tool but should be used with caution.
What to consider before installing
Proceed cautiously. The package appears incomplete: SKILL.md references several scripts and a base_task that are not included, and the API/e-commerce features are placeholders. Before running:

  1. Inspect the code (especially any scripts that perform file operations) and test with --dry-run on non-critical directories.
  2. Do not provide real API credentials or schedule jobs until the missing workflow/scheduler code exists and you have reviewed how secrets are used.
  3. Run the tool in a sandbox or VM first to observe its behavior.
  4. Ask the publisher for the complete source (schedule_task.py, run_workflow.py, convert_data.py, base_task, the tasks/ registry) and for clarification on how API keys will be handled.

If you need production API/e-commerce automation, prefer a package that documents required credentials and includes the referenced scripts.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
latest: vk976xvgngx0f933typa5c4v6ns82f5wk

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Task Automator

Overview

Universal task automation skill for OpenClaw. Automate file operations, data processing, API calls, and create custom workflows with scheduling support.

Use Cases

  • File Operations: Batch rename, convert, organize files
  • Data Processing: CSV/JSON/Excel manipulation, data cleaning
  • API Integration: Connect multiple services, sync data
  • Scheduled Tasks: Cron-like automation, recurring jobs
  • Web Automation: Scrape, monitor, alert
  • E-commerce: Order processing, inventory sync, price updates

Quick Start

Run a Simple Task

python scripts/run_task.py --task file_organizer --config tasks/organize.json

Schedule a Recurring Task

python scripts/schedule_task.py --task data_backup --cron "0 2 * * *"

Create a Workflow

python scripts/run_workflow.py --workflow ecommerce_sync

Built-in Tasks

1. File Organizer

Organize files by type, date, or custom rules.

Config (tasks/organize.json):

{
  "source": "~/Downloads",
  "destination": "~/Organized",
  "rules": [
    {"extension": ".pdf", "folder": "Documents"},
    {"extension": ".jpg", "folder": "Images"},
    {"extension": ".mp4", "folder": "Videos"}
  ]
}
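run_task.py is the only script bundled, so the organizer behavior the docs describe can only be inferred. A minimal stdlib-only sketch of rule-based organizing with a dry-run mode might look like this (function and argument names are illustrative, not the bundled implementation):

```python
import json
import shutil
from pathlib import Path

def organize(config_path: str, dry_run: bool = True) -> list[tuple[str, str]]:
    """Apply extension-based rules from a config like tasks/organize.json.

    Returns the list of (source, destination) moves; performs them only
    when dry_run is False.
    """
    cfg = json.loads(Path(config_path).read_text(encoding="utf-8"))
    source = Path(cfg["source"]).expanduser()
    dest_root = Path(cfg["destination"]).expanduser()
    rules = {r["extension"].lower(): r["folder"] for r in cfg["rules"]}

    moves = []
    for item in source.iterdir():
        if item.is_file() and item.suffix.lower() in rules:
            target = dest_root / rules[item.suffix.lower()] / item.name
            moves.append((str(item), str(target)))
            if not dry_run:
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.move(str(item), str(target))
    return moves
```

Always preview the planned moves with the default dry-run before letting anything touch your home directory.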

2. Data Converter

Convert between CSV, JSON, Excel formats.

Usage:

python scripts/convert_data.py --input data.csv --output data.json --format json
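Note that convert_data.py is referenced here but not included in the bundle. The CSV-to-JSON conversion it advertises is straightforward with the stdlib; a hedged sketch (names are illustrative):

```python
import csv
import json
from pathlib import Path

def csv_to_json(input_path: str, output_path: str) -> int:
    """Convert a CSV file to a JSON array of row objects; returns the row count."""
    with open(input_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    Path(output_path).write_text(json.dumps(rows, indent=2), encoding="utf-8")
    return len(rows)
```

csv.DictReader yields every cell as a string; a real converter would also need type coercion rules.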

3. API Sync

Sync data between two APIs.

Config (tasks/api_sync.json):

{
  "source": {
    "type": "api",
    "url": "https://api.source.com/data",
    "auth": "bearer_token"
  },
  "destination": {
    "type": "api",
    "url": "https://api.dest.com/items",
    "auth": "api_key"
  },
  "mapping": {
    "source_field": "dest_field"
  }
}
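The bundle contains no API sync implementation, so the "mapping" step implied by this config is conjecture. A sketch of the fetch-and-remap pattern it suggests (hypothetical names; tokens should come from environment variables, never from the config file):

```python
import json
import urllib.request

def fetch_json(url: str, token: str) -> dict:
    """GET a JSON payload, authenticating with a bearer token."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def remap(record: dict, mapping: dict) -> dict:
    """Rename source fields to destination fields per the config's 'mapping' section."""
    return {dest: record[src] for src, dest in mapping.items()}
```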

4. Web Monitor

Monitor websites and send alerts.

Config (tasks/monitor.json):

{
  "urls": [
    {"url": "https://example.com/product", "check": "price < 100"}
  ],
  "alert": {
    "type": "email",
    "to": "you@example.com"
  }
}
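The "check" expression above implies a tiny comparison language. One safe way to evaluate it without resorting to eval() (illustrative sketch, not the skill's actual parser):

```python
import operator

_OPS = {"<": operator.lt, "<=": operator.le,
        ">": operator.gt, ">=": operator.ge, "==": operator.eq}

def check_condition(check: str, values: dict) -> bool:
    """Evaluate a '<field> <op> <number>' expression such as 'price < 100'."""
    field, op, threshold = check.split()
    return _OPS[op](values[field], float(threshold))
```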

5. E-commerce Order Processor

Process orders from Taobao/Douyin stores.

Config (tasks/order_process.json):

{
  "stores": ["taobao", "douyin"],
  "actions": [
    "fetch_new_orders",
    "update_inventory",
    "generate_shipping_labels",
    "send_confirmation_email"
  ]
}

Scripts

run_task.py

Execute a single automated task.

Arguments:

  • --task - Task name
  • --config - Task configuration file
  • --dry-run - Simulate without executing
  • --verbose - Detailed logging

schedule_task.py

Schedule recurring tasks.

Arguments:

  • --task - Task name
  • --cron - Cron expression (e.g., "0 2 * * *")
  • --config - Task config file

run_workflow.py

Execute multi-step workflows.

Arguments:

  • --workflow - Workflow name
  • --steps - Run specific steps only
  • --continue-on-error - Don't stop on errors

Creating Custom Tasks

Step 1: Create Task Script

# scripts/tasks/my_task.py
from base_task import BaseTask

class MyTask(BaseTask):
    def run(self, config):
        # Your automation logic here
        self.log("Starting task...")
        
        # Process
        result = self.process(config)
        
        # Return status
        return {"status": "success", "data": result}
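Note that base_task is not included in the bundle, so this import will fail as shipped. A minimal BaseTask that would satisfy the example's interface might look like the following (purely hypothetical; ask the publisher for the real one):

```python
import logging

class BaseTask:
    """Hypothetical minimal base class matching the interface the example assumes."""

    def __init__(self, name=None):
        self.name = name or self.__class__.__name__
        self.logger = logging.getLogger(self.name)

    def log(self, message: str) -> None:
        # Tasks call self.log(...) for progress messages.
        self.logger.info(message)

    def process(self, config: dict):
        raise NotImplementedError("subclasses supply their own processing")

    def run(self, config: dict) -> dict:
        raise NotImplementedError("subclasses supply their own run()")
```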

Step 2: Create Configuration

{
  "name": "my_task",
  "description": "What this task does",
  "config_schema": {
    "required": ["input_path", "output_path"],
    "properties": {
      "input_path": {"type": "string"},
      "output_path": {"type": "string"}
    }
  }
}
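No schema validator ships with the skill; a stdlib-only check of the required/properties shape above could be sketched as follows (illustrative only):

```python
def validate_config(config: dict, schema: dict) -> list[str]:
    """Return a list of validation errors for a config against a simple schema."""
    errors = []
    for key in schema.get("required", []):
        if key not in config:
            errors.append(f"missing required field: {key}")
    type_map = {"string": str, "number": (int, float), "boolean": bool}
    for key, spec in schema.get("properties", {}).items():
        expected = type_map.get(spec.get("type"))
        if key in config and expected and not isinstance(config[key], expected):
            errors.append(f"{key} must be of type {spec['type']}")
    return errors
```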

Step 3: Register Task

Add to tasks/registry.json:

{
  "my_task": {
    "script": "tasks/my_task.py",
    "config": "tasks/my_task.json"
  }
}
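tasks/registry.json is likewise absent from the bundle. Looking up a task entry from a registry of this shape would be simple (sketch, names illustrative):

```python
import json
from pathlib import Path

def resolve_task(registry_path: str, task_name: str) -> dict:
    """Return the script/config entry for a task, failing loudly on unknown names."""
    registry = json.loads(Path(registry_path).read_text(encoding="utf-8"))
    if task_name not in registry:
        raise KeyError(f"unknown task: {task_name!r}; known: {sorted(registry)}")
    return registry[task_name]
```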

Workflows

Workflows chain multiple tasks together.

Example: E-commerce Daily Sync

name: ecommerce_daily_sync
steps:
  - task: fetch_orders
    stores: [taobao, douyin]
  - task: update_inventory
    sync_stores: true
  - task: generate_reports
    format: excel
  - task: send_summary
    channel: email
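run_workflow.py is missing from the bundle, so the step-chaining behavior this YAML implies, including the documented --continue-on-error flag, can only be sketched (hypothetical; each task is modeled as a callable):

```python
def run_workflow(steps: list, tasks: dict, continue_on_error: bool = False) -> list:
    """Run each step's named task in order, optionally continuing past failures."""
    results = []
    for step in steps:
        task_fn = tasks[step["task"]]
        try:
            results.append({"status": "success", "data": task_fn(step)})
        except Exception as exc:
            results.append({"status": "error", "error": str(exc)})
            if not continue_on_error:
                break  # default: stop the workflow at the first failure
    return results
```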

Scheduling

Use cron expressions for scheduling:

  Expression      Meaning
  0 * * * *       Every hour
  0 2 * * *       Daily at 2 AM
  0 9 * * 1-5     Weekdays at 9 AM
  0 0 1 * *       First of every month
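schedule_task.py is not bundled, but matching a 5-field cron expression against a timestamp can be sketched like this (simplified: supports `*`, single values, and ranges only; real cron additionally OR-combines day-of-month and day-of-week when both are restricted):

```python
from datetime import datetime

def _field_matches(field: str, value: int) -> bool:
    if field == "*":
        return True
    if "-" in field:
        lo, hi = (int(p) for p in field.split("-"))
        return lo <= value <= hi
    return int(field) == value

def cron_matches(expr: str, when: datetime) -> bool:
    """Check a cron expression like '0 9 * * 1-5' against a datetime (cron: 0 = Sunday)."""
    minute, hour, dom, month, dow = expr.split()
    cron_dow = (when.weekday() + 1) % 7  # convert Python Mon=0 to cron Sun=0
    return all((
        _field_matches(minute, when.minute),
        _field_matches(hour, when.hour),
        _field_matches(dom, when.day),
        _field_matches(month, when.month),
        _field_matches(dow, cron_dow),
    ))
```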

Integration with E-commerce

Taobao/Douyin Store Automation

# Daily order sync
python scripts/run_task.py --task order_sync --config tasks/taobao_sync.json

# Inventory update
python scripts/run_task.py --task inventory_update --config tasks/inventory.json

# Price monitoring
python scripts/run_task.py --task price_monitor --config tasks/prices.json

Best Practices

  1. Test with --dry-run before running live
  2. Log everything for debugging
  3. Handle errors gracefully with retries
  4. Use environment variables for secrets
  5. Schedule wisely to avoid rate limits
  6. Monitor task health with status checks

Security

  • Store API keys in environment variables
  • Use .env files (never commit them)
  • Validate all inputs
  • Sanitize file paths to prevent directory traversal
  • Rate limit API calls
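The docs say to store API keys in environment variables but never show how the code would read them. A hedged sketch of the fail-fast pattern (the variable name is illustrative):

```python
import os

def require_secret(name: str) -> str:
    """Read a required secret from the environment, failing fast if it is absent."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value
```

Failing at startup when a secret is missing is safer than passing an empty string into an API call halfway through a job.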

Troubleshooting

  • Task fails silently: Check logs in logs/ directory
  • API rate limited: Add delays between requests
  • File not found: Verify paths are absolute or relative to workspace
  • Permission denied: Check file/folder permissions

Files

4 total
