Code Optimizer

Severity: Warn. Audited by ClawScan on May 10, 2026.

Overview

The skill presents itself as a code optimizer, but its deployment script copies unreviewed code from a hard-coded local path and persistently changes Hermes workflow settings.

Do not run scripts/deploy.sh unless you have reviewed and trust the optimizer source at the hard-coded path, which is not included in the package. Prefer a version that bundles its evaluator and models, declares its install steps, asks before changing Hermes config, and provides clear cleanup instructions.

Findings (4)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

If the script is run, the skill may execute code that was not included in the package review, or fail unpredictably if that developer path does not exist.

Why it was flagged

Core runtime code is copied from a hard-coded local directory outside the skill package, so the deployed evaluator code and models are not provenance-controlled or reviewable from the supplied artifacts.

Skill content
OPTIMIZER_SRC="/Users/apple/.openclaw/workspace/claude_optimization"
...
cp -r "$OPTIMIZER_SRC/evaluator" "$HERMES_DIR/optimizer/"
cp "$OPTIMIZER_SRC/auto_evaluator.py" "$HERMES_DIR/optimizer/"
Recommendation

Bundle the evaluator and model files in the skill package or fetch them from a pinned, verified source; remove hard-coded developer paths and declare the install mechanism.
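A minimal sketch of the pinned-source approach, assuming a release tarball and a digest recorded in the skill package; the file name optimizer.tar.gz and the EXPECTED_SHA256 variable are illustrative, not from the skill:

```shell
# Illustrative pinned-checksum gate: refuse to install an artifact whose
# digest does not match the value recorded in the skill package.
set -eu

# In a real deploy this file would be fetched from a pinned release URL,
# e.g. curl -fsSL -o optimizer.tar.gz "$PINNED_URL"; a local stand-in is
# created here so the check itself is runnable.
printf 'evaluator payload\n' > optimizer.tar.gz

# The pinned digest would be committed alongside SKILL.md; here it is
# computed from the stand-in so the passing path demonstrates itself.
EXPECTED_SHA256="$(sha256sum optimizer.tar.gz | awk '{print $1}')"

ACTUAL_SHA256="$(sha256sum optimizer.tar.gz | awk '{print $1}')"
if [ "$ACTUAL_SHA256" != "$EXPECTED_SHA256" ]; then
    echo "checksum mismatch: refusing to install" >&2
    exit 1
fi
echo "checksum verified: safe to unpack"
```

Any mismatch aborts the install before unreviewed code reaches the Hermes directory.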

What this means

Running code-eval after deployment could execute unknown local Python code under the user's account.

Why it was flagged

The deployment script creates an executable CLI that imports and runs modules copied from the unreviewed optimizer source directory.

Skill content
cat > "$HERMES_DIR/bin/code-eval" << 'CLIEOF'
...
from auto_evaluator import CodeEvaluator
...
chmod +x "$HERMES_DIR/bin/code-eval"
Recommendation

Do not run the deployment script until the imported evaluator code is included, reviewed, and installed through a declared, reproducible process.
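As a sketch of a declared, reproducible install step, copying only from inside the reviewed package; SKILL_DIR and HERMES_DIR are illustrative defaults, not paths taken from the skill:

```shell
# Illustrative install step: every file comes from inside the skill package,
# so the deployed code is exactly what was reviewed.
set -eu

SKILL_DIR="${SKILL_DIR:-$PWD/skill-pkg}"     # reviewed package root (assumed)
HERMES_DIR="${HERMES_DIR:-$PWD/hermes}"      # declared install target (assumed)

# Stand-in bundled evaluator so the step is runnable as written.
mkdir -p "$SKILL_DIR/evaluator" "$HERMES_DIR/optimizer"
printf '# bundled evaluator\n' > "$SKILL_DIR/evaluator/auto_evaluator.py"

# Copy only from inside the package; no hard-coded developer paths.
cp "$SKILL_DIR"/evaluator/*.py "$HERMES_DIR/optimizer/"
echo "installed bundled evaluator into $HERMES_DIR/optimizer"
```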

What this means

After deployment, future Hermes/code-generation workflows may be automatically evaluated and influenced by this skill, and the code-eval command remains installed in the user's home path.

Why it was flagged

The script persistently enables automatic optimizer behavior in Hermes and creates a lasting command symlink without showing a user approval or rollback flow.

Skill content
config['code_optimizer'] = {
        'enabled': True,
        'auto_evaluate': True,
        'feedback_loop': True,
...
ln -sf "$HERMES_DIR/bin/code-eval" "$HOME/bin/code-eval"
Recommendation

Require explicit opt-in before changing Hermes config, create backups, document an uninstall path, and avoid overwriting or shadowing user commands without confirmation.
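One way to satisfy the opt-in and backup points can be sketched as follows; HERMES_CONFIG is an assumed file location, and a non-interactive run defaults to leaving the config alone:

```shell
# Illustrative opt-in gate: back up the config and change nothing unless
# the user explicitly agrees. HERMES_CONFIG is an assumed file location.
set -eu

HERMES_CONFIG="${HERMES_CONFIG:-$PWD/hermes-config.json}"
printf '{}\n' > "$HERMES_CONFIG"            # stand-in config for the sketch

printf 'Enable code_optimizer in Hermes config? [y/N] '
read -r answer || answer=n                  # non-interactive runs default to no

if [ "$answer" = y ] || [ "$answer" = Y ]; then
    # Timestamped backup gives a documented rollback path.
    cp "$HERMES_CONFIG" "$HERMES_CONFIG.bak.$(date +%Y%m%d%H%M%S)"
    echo "backup written; proceeding to edit $HERMES_CONFIG"
else
    echo "left Hermes config untouched"
fi
```

The same confirm-before-write pattern would apply to the $HOME/bin/code-eval symlink, checking for an existing command of that name before linking over it.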

What this means

Private code-quality results or incorrect evaluations could persist and affect future code-generation decisions.

Why it was flagged

The skill explicitly stores evaluation results in a memory system and uses them to influence later strategy selection.

Skill content
Evaluation results are stored in the memory system
The ML dataset is continuously expanded
Strategy selection is incorporated into task planning
Recommendation

Use only with projects where persistent evaluation history is acceptable, and look for controls to review, clear, or disable stored feedback.
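The review/clear controls the recommendation asks for could be as small as this sketch; EVAL_HISTORY and the record format are assumptions, since the skill does not document where its memory system stores results:

```shell
# Illustrative controls over persisted evaluation feedback: list it, then
# clear it. The storage path and record format are assumptions.
set -eu

EVAL_HISTORY="${EVAL_HISTORY:-$PWD/eval_history.jsonl}"

# Stand-in record, as the skill's real memory format is undocumented.
printf '{"task":"demo","score":0.9}\n' >> "$EVAL_HISTORY"

# review: show how many evaluations have been persisted
echo "stored evaluations: $(wc -l < "$EVAL_HISTORY")"

# clear: truncate the stored feedback so it cannot influence later runs
: > "$EVAL_HISTORY"
echo "evaluation history cleared"
```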