Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Lightweight Autoresearch V2

v1.0.0

CPU-based autonomous optimization loop for skill quality improvement. Runs experiments, evaluates results, keeps improvements. Use when: autonomous optimization, skill optimiza...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for sjj2026/shike-autoresearch.

Prompt preview (Install & Setup):
Install the skill "Lightweight Autoresearch V2" (sjj2026/shike-autoresearch) from ClawHub.
Skill page: https://clawhub.ai/sjj2026/shike-autoresearch
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install shike-autoresearch

ClawHub CLI


npx clawhub@latest install shike-autoresearch

Security Scan

VirusTotal: Suspicious (view report)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description (autoresearch / optimization loop) align with the actions described: editing experiment.py, running experiments, recording results, and using git to commit/revert. Requiring git and python3 is proportionate to that purpose.
Instruction Scope
SKILL.md explicitly instructs the agent to modify experiment.py, run subprocesses, execute experiments, write results.tsv, and run git commit/revert inside a target directory. While the document defines human confirmation checkpoints, it does not ship any run_loop.py or experiment scripts and gives broad discretion to change code in the target—this grants the agent the ability to make arbitrary changes to files in the target path and to execute them. There is no enforcement mechanism described to limit file scope, prevent exfiltration, or require the human confirmations to actually occur before commits.
Install Mechanism
Instruction-only skill with no install spec and no code files to execute from the package. This minimizes install-time risks (nothing downloaded or written by an installer).
Credentials
No environment variables, credentials, or special config paths are requested by the skill metadata. The operations described (file edits, subprocess runs, git) do not require additional external credentials from the skill itself. This is proportionate, though edits could indirectly cause the target project to use its own secrets if present.
Persistence & Privilege
The skill is not marked always:true (so it won't be force-included), and disable-model-invocation is false (normal). However, because the skill's core behavior is to autonomously modify and run code and to commit/revert changes, allowing autonomous invocation without additional safety controls increases risk: the agent could make and commit changes without adequate human review unless the platform or agent enforces the SKILL.md checkpoints.
What to consider before installing
This skill is coherent for its stated purpose but grants an agent the ability to edit, run, and commit code inside a target directory — an action with high potential impact. Before installing or running it: (1) only point it at a disposable or sandboxed repository (not production code or repos containing secrets); (2) require and verify human review of diffs before any git commit; (3) ensure the platform enforces the checkpoints (don’t rely solely on prose in SKILL.md); (4) inspect any run_loop.py/experiment.py provided by the target repo before execution; (5) monitor subprocess calls and network activity while the skill runs. If the publisher can supply the actual run_loop.py implementation and an explicit enforcement mechanism for confirmations and sandboxing, re-evaluate — that information would raise confidence and could change this assessment to benign.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97dswjn5m3ck18ybe5kenagtd84zqja
57 downloads
0 stars
1 version
Updated 1w ago
v1.0.0
MIT-0

Shike Autoresearch - Autonomous Optimization Loop

A CPU-based autonomous optimization loop, based on Karpathy's autoresearch.


Core Idea

Evaluate → improve → verify by real runs → human confirmation → keep or roll back


Required Dependencies

  • git - version control and code rollback
  • python3 - runs the experiment code
  • subprocess (Python standard library) - executes experiment scripts

Permissions

This skill needs the following permissions (all functional requirements, restricted to the working directory):

  • Modify the experiment.py file
  • Run experiments via subprocess
  • Write results to results.tsv
  • Run git commit and git revert
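The commit/revert permission can be exercised through Python's standard subprocess module. A minimal sketch; the helper names are hypothetical and not part of the skill, and it uses `git reset --hard` for the roll-back step (which matches the loop's "reset" semantics, whereas a literal `git revert` would keep history):

```python
import subprocess

def git_commit(workdir, message):
    # Stage and commit everything inside the target directory only (cwd=workdir).
    subprocess.run(["git", "add", "-A"], cwd=workdir, check=True)
    subprocess.run(["git", "commit", "-q", "-m", message], cwd=workdir, check=True)

def git_revert_last(workdir):
    # Discard the last experiment by resetting to the previous commit.
    subprocess.run(["git", "reset", "--hard", "-q", "HEAD~1"], cwd=workdir, check=True)
```

Passing `cwd=workdir` keeps the git operations scoped to the target directory, but note this is convention, not enforcement.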

Three-File Architecture

File           Role                          Who modifies it
config.py      Config parameters, metrics    Read-only (maintained by humans)
experiment.py  Experiment code, test logic   Modified autonomously by the agent
results.tsv    Experiment records            Appended automatically
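Appending records to results.tsv can be done with the standard csv module. A sketch, assuming a timestamp/round/change/score/kept column layout; the actual columns are not specified by the skill:

```python
import csv
import time
from pathlib import Path

def record_result(path, round_num, change, score, kept):
    # Append one experiment record; write a header row only if the file is new.
    is_new = not Path(path).exists()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        if is_new:
            writer.writerow(["timestamp", "round", "change", "score", "kept"])
        writer.writerow([time.strftime("%Y-%m-%dT%H:%M:%S"),
                         round_num, change, score, kept])
```

Append-only writes match the table above: the agent never rewrites past records, only adds new rows.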

Autonomous Loop

LOOP:
1. Inspect the current configuration state
2. [Checkpoint 1] Confirm the optimization direction
3. Modify experiment.py
4. [Checkpoint 2] Confirm the changes
5. Run the experiment
6. Extract the results
7. [Checkpoint 3] Confirm whether to continue
8. Decide: improved → keep / not improved → reset
9. Record to results.tsv
10. [Checkpoint 4] Review every 10 rounds
11. Repeat
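The numbered steps above can be sketched as a single Python loop. All helpers (evaluate, propose_change, confirm, record) are hypothetical stand-ins, since the package ships no code:

```python
def optimization_loop(evaluate, propose_change, confirm, record,
                      max_iters=100, patience=10):
    # Evaluate -> improve -> verify -> human confirmation -> keep or roll back.
    best = evaluate()                      # step 1: baseline / current state
    no_improve = 0
    for i in range(1, max_iters + 1):
        if not confirm("Checkpoint 1: proceed with this direction?"):
            break
        propose_change()                   # step 3: modify experiment.py
        if not confirm("Checkpoint 2: run with these changes?"):
            continue
        score = evaluate()                 # steps 5-6: run and extract result
        kept = score > best                # step 8: improved -> keep, else reset
        if kept:
            best, no_improve = score, 0
        else:
            no_improve += 1
        record(i, score, kept)             # step 9: append to results.tsv
        if no_improve >= patience:
            break                          # auto-stop: no improvement streak
        if i % 10 == 0 and not confirm("Checkpoint 4: continue after review?"):
            break
    return best
```

Note that the confirmation calls are the only thing standing between the agent and unattended commits; the skill itself provides no mechanism that forces them to happen.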

Use Cases

1. Skill package optimization

  • Automatically test different prompt configurations
  • Automatically evaluate skill success rates
  • Find the optimal skill structure

2. Strategy backtesting

  • Automatically test different parameter combinations
  • Automatically evaluate returns
  • Find the optimal strategy configuration

3. Content creation testing

  • Automatically test different writing styles
  • Automatically evaluate content quality
  • Find the optimal content strategy

Key Checkpoints

Checkpoint 1: Confirm the optimization direction

When: before evaluating the skill. User confirms: the current best configuration and the proposed improvement direction.

Checkpoint 2: Confirm the changes

When: after the code is modified, before it runs. User confirms: the git diff and a description of the changes.

Checkpoint 3: Accept the results

When: after an optimization completes. User confirms: the before/after score comparison and whether to keep the change.

Checkpoint 4: Periodic review

When: after every 10 iterations. User confirms: overall progress, trends, and resource consumption.
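If these checkpoints are enforced in code rather than prose, each one reduces to a confirmation gate that fails closed when no human is present. A minimal sketch, not shipped with the skill:

```python
def confirm(prompt, auto_yes=False):
    # Checkpoint gate: requires an explicit "y". Anything else, or EOF in a
    # non-interactive run, fails closed so the loop cannot proceed unattended.
    if auto_yes:
        return True
    try:
        return input(f"{prompt} [y/N] ").strip().lower() == "y"
    except EOFError:
        return False
```

Failing closed on EOF matters: it prevents a headless run (no stdin) from silently answering "yes" at every checkpoint.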


Stop Conditions

Automatic stop

  • Maximum iteration count reached (default 100)
  • 10 consecutive rounds without improvement
  • Resources exhausted

Manual intervention

  • Ctrl+C - graceful stop, saves the current state
  • SIGTERM/SIGINT - stops on receiving the signal
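Graceful handling of Ctrl+C and SIGTERM can be implemented with Python's standard signal module. A sketch of the described behavior; the actual run_loop.py is not included in the package:

```python
import signal

class GracefulStop:
    # Sets a flag on Ctrl+C (SIGINT) or SIGTERM so the loop can finish the
    # current round, save its state, and exit cleanly instead of dying mid-write.
    def __init__(self):
        self.stop = False
        signal.signal(signal.SIGINT, self._handler)
        signal.signal(signal.SIGTERM, self._handler)

    def _handler(self, signum, frame):
        self.stop = True
```

The loop would then check `stopper.stop` at the top of each round and break out, writing the final results.tsv row before exiting.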

Usage

cd /path/to/skill-directory
python3 run_loop.py --mode skill --target ./my-skill

Parameters

  • --mode - optimization mode (skill/strategy/content)
  • --target - target path
  • --iterations - iteration count (default 100)
  • --timeout - per-experiment timeout in seconds (default 60)
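The documented flags map naturally onto argparse. A sketch of the advertised interface; since run_loop.py itself is not shipped with the package, this is illustrative only:

```python
import argparse

def build_parser():
    # Mirrors the flags documented above for the (unshipped) run_loop.py.
    p = argparse.ArgumentParser(prog="run_loop.py")
    p.add_argument("--mode", choices=["skill", "strategy", "content"],
                   required=True, help="optimization mode")
    p.add_argument("--target", required=True, help="target path")
    p.add_argument("--iterations", type=int, default=100,
                   help="iteration count (default 100)")
    p.add_argument("--timeout", type=int, default=60,
                   help="per-experiment timeout in seconds (default 60)")
    return p
```

The `choices` constraint rejects any mode other than the three listed, and the defaults match the documented values.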

Sample Output

Round 1: baseline evaluation - 62.8
Round 2: improved dimension 1 - 67.6 (+4.8)
Round 3: improved dimension 3 - 72.6 (+5.0)
Round 4: improved dimension 8 - 76.6 (+4.0)

Final results:
- Baseline score: 62.8
- Final score: 76.6
- Total improvement: +13.8 (+22%)
- Success rate: 100% (4/4 rounds kept)

Support

Free tier: current version (MIT-0 license)

Paid services

  • Custom optimization: ¥500-2000 per project
  • Enterprise deployment: $500-2000
  • Technical consulting: ¥300 per hour

Contact


