Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Universal Agent

v1.0.0

This skill should be used when the user needs to execute tasks through a complete automated workflow: understand natural language intent, dynamically generat...

by 波动几何 (@wangjiaocheng)
Security Scan
Capability signals
Crypto
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's declared purpose is to generate and execute commands/scripts end-to-end, and the included Python implementation and SKILL.md are consistent with that capability. However, the registry metadata declares no required environment variables or credentials, while the code and docs show modes that require an LLM API key (config.json or LLM_API_KEY) for standalone operation and expect bridge-specific env vars (UA_THINK, UA_GENERATE_SCRIPT, UA_DEBUG_AND_FIX, UA_SUMMARIZE). This mismatch between declared requirements and actual code is a coherence issue.
Instruction Scope
SKILL.md and the script explicitly instruct the agent to auto-generate and execute arbitrary shell commands and Python scripts, access/modify files (memory, temp scripts, config.json), and call arbitrary APIs or control hardware. While this is consistent with a 'universal agent' purpose, the runtime instructions also rely on environment-based bridge communication (UA_* variables) and permit self-repair loops that can execute repaired code — broad discretion that can be misused and is not constrained by the registry metadata.
Install Mechanism
There is no install spec (instruction-only skill with bundled script), so nothing is downloaded or extracted at install time. This minimizes install-time risk; however, the skill includes a large standalone Python script that will be written to disk when installed and can execute arbitrary commands at runtime.
Credentials
Registry says 'no required env vars' but the code and docs expect an LLM API key for standalone mode (config.json or LLM_API_KEY) and use UA_* environment variables as the bridge protocol. The skill also persists memory and temp scripts to disk. The absence of declared credential requirements in metadata is inconsistent and could lead to users unknowingly supplying sensitive keys to a powerful executor.
Persistence & Privilege
always:false (not forced). The skill persists execution history/memory to a file (universal_agent_memory.json) and writes temporary script files when executing tasks. It does not declare modifying other skills or system configs, but its ability to run arbitrary commands/scripts implies it can alter system state — so limit scope and run under least privilege.
What to consider before installing
This skill executes arbitrary shell commands and generated Python code, and thus has high potential impact. Specific points to consider before installing or running:

  • Metadata mismatch: the registry claims no required env vars, but the script uses an LLM API key (config.json or LLM_API_KEY) for standalone mode and expects UA_* env vars in bridge mode. Ask the publisher to correct the metadata.
  • Prefer Bridge mode with a trusted external "brain" (an external agent provides the UA_* inputs) over Standalone mode, unless you fully trust and have reviewed the script. Bridge mode lets you control what code and commands are fed to the executor.
  • Do not run Standalone mode without reviewing the code yourself. The script writes temp scripts, persists a memory file, and can run arbitrary system and network commands — run it in a sandboxed container with minimal privileges and limited network access.
  • Do not include secrets or credentials in task descriptions. Remove or rotate any API keys stored in config.json before sharing the environment.
  • If you need to use it, set command/script timeouts low, leave dangerous_mode = false, and inspect or wipe the memory file regularly.

If the publisher (1) updates the registry metadata to declare LLM_API_KEY and describe the UA_* env vars explicitly, and (2) provides a clear, auditable safety policy or a hardened execution sandbox mode, my confidence in moving this to benign would increase.
scripts/universal_agent.py:943
Dynamic code execution detected.
Patterns worth reviewing
These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.


latest: vk973h4sxja5tfn2k626z6dhsr1848kq6
76 downloads
0 stars
1 version
Updated 2w ago
v1.0.0
MIT-0

Universal Agent Skill

A minimal universal AI agent that automates end-to-end task execution: understand user intent in natural language, generate commands or scripts, execute them, analyze results, and self-recover from errors.

Architecture

Natural Language Input
       ↓
  ┌─────────────┐
  │ LLM (Brain) │ Understand intent, generate command/Python script
  └──────┬──────┘
         │ Auto-generate code/command
         ↓
  ┌──────────────────┐
  │ Command Executor │
  │ (Limbs)          │
  └───────┬──────────┘
          │ Actual execution
          ↓
     Task Complete ✅

File Structure

universal-agent/
├── SKILL.md                    # This file (skill definition)
├── scripts/
│   ├── universal_agent.py      # Main program (complete standalone implementation)
│   └── config.json             # Configuration file (fill in API key for standalone mode)
└── references/
    └── README.md               # Detailed usage documentation

When to Use

Use this skill when:

  • User describes a task in natural language that requires automated execution
  • Task needs dynamic code generation (Python script) and immediate execution
  • Task involves file operations, data processing, system administration, CLI tools, API calls
  • User wants end-to-end automation without manual intervention
  • Keywords: "万能agent" (universal agent), "universal agent", "自动执行" (auto-execute), "动态生成代码" (dynamically generate code), "生成并执行" (generate and execute), "帮我做XX" (do XX for me)

How It Works

Automated Workflow (4 Steps)

  1. Think — LLM understands task, judges complexity, decides whether to generate a shell command or Python script
  2. Execute — Auto-write file → Run command/script → Capture output
  3. Fix — On error, LLM analyzes error, auto-fixes code, retries (up to 2 times)
  4. Summarize — Translates technical output into human-friendly language
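As a sketch, the four steps map onto a single retry loop. The method names below mirror those documented for the skill's classes, but the call signatures and return shapes are assumptions, not the script's real API:

```python
# Illustrative sketch of the Think -> Execute -> Fix -> Summarize loop.
# Signatures and dict shapes are assumptions for illustration only.
def run_task(task, brain, executor, max_retries=2):
    decision = brain.think(task)                  # 1. Think: command or script?
    code = decision["content"]
    for _ in range(max_retries + 1):
        result = executor.execute(decision["type"], code)   # 2. Execute
        if result["success"]:
            return brain.summarize(task, result["output"])  # 4. Summarize
        code = brain.debug_and_fix(code, result["error"])   # 3. Fix, then retry
    return "Task failed after {} retries".format(max_retries)
```

The loop retries at most twice, matching the "up to 2 times" limit stated above.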

Why It's "Universal"

Capability         Description
Shell Commands     File ops, process management, system admin
Python Scripts     Data processing, web scraping, ML, image processing
CLI Tools          git, docker, ffmpeg, aws, any CLI
Hardware Control   Serial/GPIO/network-controlled physical devices
API Calls          Any HTTP API

Command executor can run Python → Python can do anything → Agent can do anything

Usage Modes

This skill supports three distinct usage modes, each suited to different scenarios:

Mode 1: Standalone

Run the bundled script directly as an independent program. The script handles everything internally — LLM calls, command execution, safety checks, retries, memory.

# Single task mode (needs API key)
python scripts/universal_agent.py --run "task description"

# Interactive mode
python scripts/universal_agent.py

# With environment variables
set LLM_API_KEY=sk-xxx && python scripts/universal_agent.py --run "task"

What works: Safety ✅ | Auto-retry ✅ | Memory persistence ✅ | Needs API Key.


Mode 2: Bridge Execution (recommended)

Execute the script with --backend bridge. The script's brain is provided by the external Agent that loaded this Skill, while the script itself handles execution, safety, retry, and memory. Any Agent with LLM + command execution can use this.

# Basic bridge execution
python scripts/universal_agent.py --backend bridge --run "task description"

# View full protocol spec
python scripts/universal_agent.py --bridge-info

How it works — the Agent drives the script through environment variables:

                    Bridge Mode Flow

  ① User: "List the files in the current directory"
       ↓
  ② External Agent LLM → generates a decision
     set UA_THINK={"type":"command","content":"dir /b"}
       ↓
  ③ Agent executes:
     python ... --backend bridge --run "List the files in the current directory"
       ↓
  ④ Script reads UA_THINK → runs "dir /b"
     → safety check passes → captures output
       ↓ (if error)
  ⑤ Script requests a fix via the UA_DEBUG_AND_FIX env var
       ↓
  ⑥ External Agent provides fixed code
     set UA_DEBUG_AND_FIX="fixed_command_or_script"
       ↓
  ⑦ Script re-executes → success
       ↓
  ⑧ Script reads UA_SUMMARIZE for the final output
     → returns a structured JSON result

Environment Variable Protocol:

Variable             When Used                        Format
UA_THINK             Step 1 — decision                JSON: {"type":"command|script","content":"...","explanation":"..."}
UA_GENERATE_SCRIPT   If type=script and code needed   Complete Python source code
UA_SUMMARIZE         Final step — result summary      Natural language summary text
UA_DEBUG_AND_FIX     On error retry — fixed code      Fixed Python/shell code
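For instance, a driving agent could assemble one bridge-mode call like this. It is a minimal sketch: only the UA_THINK variable and the --backend bridge flag come from the protocol above; the helper name is made up.

```python
import json
import os
import sys

def build_bridge_invocation(task, decision, script="scripts/universal_agent.py"):
    """Assemble the command line and environment for one bridge-mode call.
    The external agent would then run it, e.g. subprocess.run(cmd, env=env)."""
    env = dict(os.environ)
    env["UA_THINK"] = json.dumps(decision)  # step-1 decision, JSON-encoded
    cmd = [sys.executable, script, "--backend", "bridge", "--run", task]
    return cmd, env

cmd, env = build_bridge_invocation(
    "list the files in the current directory",
    {"type": "command", "content": "dir /b", "explanation": "list files"},
)
```

Passing a copy of os.environ keeps the agent's own variables intact while adding the protocol variables for a single invocation.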

What works: Safety ✅ | Auto-retry ✅ | Memory persistence ✅ | No API Key needed (Agent provides LLM).

Who can use this: WorkBuddy, Cursor, Continue.dev, Aider, Cline, any AI IDE/tool with LLM + shell access.


Mode 3: Inline Simulation

The loaded Agent reads this SKILL.md, learns the architecture pattern, and simulates the workflow using its own native capabilities without executing the script at all. The script serves as a reference/teaching example only.

  • Agent uses its own LLM instead of LLMBrain
  • Agent uses its own execute_command instead of UniversalExecutor
  • Agent does its own summarization

What works: Fastest ⚡ | No setup | Safety ❌ | Retry ❌ | Memory ❌ (script features unused).

Core Components

scripts/universal_agent.py — Main Program

Four core classes implementing the full agent:

Class              Role                                     Key Methods
LLMBrain           Brain — HTTP LLM interface (Mode 1)      think(), generate_script(), summarize(), debug_and_fix()
AgentBridge        Brain — external Agent bridge (Mode 2)   think(), generate_script(), summarize(), debug_and_fix(), set_response()
UniversalExecutor  Limbs — command execution                execute(), _execute_command(), _execute_script(), _check_danger()
ContextManager     Memory — state management                add_task_record(), get_context_string(), save()/load()
UniversalAgent     Main orchestrator                        run(), chat(), batch_run()

See references/README.md for full API documentation and examples.
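The memory component can be pictured roughly like this. Only the class name, method names, and the memory filename come from this document; the record fields and the last_n parameter are assumptions:

```python
import json
import os
from datetime import datetime

class ContextManager:
    """Sketch of the memory component; field names are illustrative."""
    def __init__(self, path="universal_agent_memory.json"):
        self.path = path
        self.records = []

    def add_task_record(self, task, result):
        # Append one task/result pair with a timestamp.
        self.records.append({"task": task, "result": result,
                             "time": datetime.now().isoformat()})

    def get_context_string(self, last_n=5):
        # Summarize recent tasks for inclusion in the next LLM prompt.
        return "\n".join(r["task"] for r in self.records[-last_n:])

    def save(self):
        with open(self.path, "w", encoding="utf-8") as f:
            json.dump(self.records, f, ensure_ascii=False)

    def load(self):
        if os.path.exists(self.path):
            with open(self.path, encoding="utf-8") as f:
                self.records = json.load(f)
```

Because the file persists across runs, wiping it periodically (as the security notes above recommend) is a one-line delete of universal_agent_memory.json.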

Safety Mechanisms

The executor includes built-in danger detection:

Level      Examples                    Handling
🔴 High    rm -rf /, format C:         Forced confirmation required
🟡 Medium  pip uninstall, sudo         Warning prompt
🟢 Low     ls, cat, python script.py   Direct execution

Danger patterns are defined in HIGH_DANGER_PATTERNS and MEDIUM_DANGER_PATTERNS within the script.
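A plausible shape for that check, using the example commands from the table above. The two-entry pattern lists here are illustrative only; the script's real HIGH_DANGER_PATTERNS and MEDIUM_DANGER_PATTERNS lists are longer:

```python
import re

# Illustrative patterns only; the real lists live in scripts/universal_agent.py.
HIGH = [r"rm\s+-rf\s+/", r"format\s+c:"]
MEDIUM = [r"pip\s+uninstall", r"\bsudo\b"]

def check_danger(command):
    cmd = command.lower()
    if any(re.search(p, cmd) for p in HIGH):
        return "high"      # forced confirmation required
    if any(re.search(p, cmd) for p in MEDIUM):
        return "medium"    # warning prompt
    return "low"           # direct execution
```

Note that regex blocklists of this kind are easy to bypass (quoting, aliases, encoded payloads), which is one reason the review above recommends sandboxing rather than relying on the built-in check.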

Configuration

Mode 1 (Standalone) — Needs API Key

Option A — Config File: Edit scripts/config.json and fill in your API key.

Option B — Environment Variables:

set LLM_API_KEY=your-key-here
set LLM_MODEL=gpt-4o
set LLM_BASE_URL=https://api.openai.com/v1

Option C — Local Ollama (Free):

ollama run llama3
# Then select ollama_llama3 preset when starting the script

Configuration priority: Environment variables > config.json > Interactive input.
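That priority chain could look roughly like this. The api_key field name inside config.json is an assumption; only the LLM_API_KEY variable, the config path, and the priority order come from this document:

```python
import json
import os

def load_api_key(config_path="scripts/config.json"):
    """Sketch of the stated priority: env var > config.json > interactive input."""
    key = os.environ.get("LLM_API_KEY")   # 1. environment variable wins
    if key:
        return key
    try:
        with open(config_path, encoding="utf-8") as f:
            key = json.load(f).get("api_key")   # 2. config file ("api_key" is assumed)
    except (OSError, json.JSONDecodeError):
        key = None
    return key or input("Enter LLM API key: ")  # 3. fall back to prompting
```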

Mode 2 (Bridge) — No API Key Needed

The external Agent provides all LLM capabilities. Configure only optional settings:

# Optional: change input source from env to file
set UA_INPUT_SOURCE=file

# Optional: skip safety confirmations (not recommended)
# Use --dangerous flag instead

Mode 3 (Simulation) — No Configuration Needed

Agent uses its own native capabilities. Nothing to configure.

Supported LLM Providers

Provider                   Models                             base_url
OpenAI                     gpt-4o, gpt-4o-mini                https://api.openai.com/v1
DeepSeek                   deepseek-chat, deepseek-reasoner   https://api.deepseek.com
Qwen                       qwen-max, qwen-turbo               https://dashscope.aliyuncs.com/compatible-mode/v1
Zhipu GLM                  glm-4-plus                         https://open.bigmodel.cn/api/paas/v4
Local Ollama               llama3, qwen2, any model           http://localhost:11434/v1
Groq                       llama-3.1-70b-versatile            https://api.groq.com/openai/v1
Any OpenAI-compatible API  any                                your-url

Platform Support

Cross-platform — Windows, macOS, Linux:

OS       Shell Backend
Windows  cmd.exe /c (with CREATE_NO_WINDOW)
macOS    bash (shell=True)
Linux    bash (shell=True)

All file I/O uses UTF-8 encoding. Python script execution uses sys.executable for platform-agnostic invocation.
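A sketch of that per-OS dispatch; the function name and timeout value are illustrative, while the shell backends match the table above:

```python
import subprocess
import sys

def run_shell(command, timeout=60):
    """Run a shell command with the per-OS backend described above."""
    if sys.platform == "win32":
        # cmd.exe /c, suppressing the console window
        return subprocess.run(
            ["cmd.exe", "/c", command], capture_output=True, text=True,
            timeout=timeout, creationflags=subprocess.CREATE_NO_WINDOW)
    # macOS / Linux: bash via shell=True
    return subprocess.run(command, shell=True, capture_output=True,
                          text=True, timeout=timeout, executable="/bin/bash")
```

Keeping a timeout on every call is the practical lever the security review above suggests tightening.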

Dependencies

Zero external dependencies — Python standard library only:

  • os, sys — System operations
  • subprocess — Command execution
  • json, re — JSON parsing and regex
  • time/datetime — Time handling
  • urllib — HTTP requests (fallback)

Optional:

  • requests library — Better HTTP support (pip install requests)

Free Options

  1. Ollama + local model (completely free, unlimited, private)
  2. DeepSeek (~¥1/million tokens, excellent cost-performance)
  3. Groq Cloud (free tier available, ultra-fast inference)
