Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Vision-Action-Evolution Loop

v1.0.0

视觉-动作-进化闭环框架 —— 将感知、规划、执行、评估、进化五阶段融合为自迭代认知循环


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for kingofzhao/vision-action-evolution-loop.

Prompt Preview: Install & Setup
Install the skill "Vision-Action-Evolution Loop" (kingofzhao/vision-action-evolution-loop) from ClawHub.
Skill page: https://clawhub.ai/kingofzhao/vision-action-evolution-loop
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install vision-action-evolution-loop

ClawHub CLI


npx clawhub@latest install vision-action-evolution-loop
Security Scan
VirusTotal
Pending
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The name/description describe a runnable 5-stage vision→action→evolve pipeline (with a Python class and methods such as VisionActionEvolutionLoop.run_cycle and inject_feedback). However, the published bundle contains only markdown docs and no implementation files or binaries. The skill also references OpenCV, VLA models, and robotic execution, none of which are declared as required binaries, packages, or environment variables. That mismatch (claims of executable functionality without any shipped implementation or declared dependencies) is incoherent.
Instruction Scope
SKILL.md gives concrete runtime examples (importing from skills.vision_action_evolution_loop, calling run_cycle, heartbeat tasks that check images and arXiv). The instructions do not ask for unrelated secrets or system credentials, but they assume access to local files (images/workspace), other skills (diepre-vision-cognition, self-evolution-cognition), network access for arXiv queries, and robotic hardware drivers. Because the module to be imported is absent, it's unclear what code would actually execute; that ambiguity grants the agent broad discretion unless clarified.
Install Mechanism
There is no install spec (instruction-only). That's lower risk than arbitrary downloads. The README suggests 'clawhub install' or copying into ~/.openclaw/skills/, but no install script or binary download is included. The absence of an install step means nothing new is written by the bundle itself — but it also means the declared functionality is not present in the package.
Credentials
The skill declares no required environment variables or credentials (good), but its runtime docs reference components that typically require system libraries, model files, or service access (OpenCV, VLA models, robot drivers, and network access to arXiv). Those required resources are not declared, so the bundle is under-specified: either it relies on other installed skills/systems or it expects the agent to fetch/install them dynamically. That lack of explicit dependency/credential declaration is disproportionate to the claimed capabilities.
Persistence & Privilege
The skill has always:false and default autonomy settings; it does not request persistent or elevated platform privileges in metadata. However HEARTBEAT.md describes periodic heartbeats and automated checks (search arXiv, process new images), which imply background activity if implemented. Because no implementation is provided, it's unclear whether and how such periodic behavior would be scheduled — a potential concern if a future implementation added autonomous background tasks.
What to consider before installing
This skill appears to be documentation for a runnable vision→action→evolution framework, but the package contains only markdown and no executable code or declared dependencies. Before installing or enabling it:

1. Ask the publisher for the missing implementation (the Python module and any model/artifact files) or a verified clawhub package URL.
2. Verify what runtime dependencies it needs (OpenCV, specific model files, robot drivers) and whether those will be installed from trusted sources.
3. Confirm whether the skill will perform network access (arXiv queries, model downloads) and whether that's acceptable.
4. If you test it, run it in a restricted environment (no access to sensitive files, no robot hardware attached, network limited) until you can review the actual code.

The current mismatch (docs claiming an API that doesn't exist in the bundle) is the primary reason to treat this skill as suspicious rather than benign.

Like a lobster shell, security has layers — review code before you run it.

latest: vk978dbybhbs98xrakn525h2gad83yy0s
87 downloads
0 stars
1 version
Updated 3w ago
v1.0.0
MIT-0

Vision-Action-Evolution Loop Skill

Metadata

Field | Value
Name | vision-action-evolution-loop
Version | 1.0.0
Author | KingOfZhao
Published | 2026-03-31
Confidence | 96%

Core Philosophy

The essence of the cognitive world is an infinite hierarchy of framework nodes. This skill is the node that emerges when two frameworks collide:

diepre-vision-cognition (vision perception)
        ⊗
self-evolution-cognition (self-evolution)
        ↓
vision-action-evolution-loop (vision-action-evolution loop)

Five-Stage Loop (mapped to the five SOUL laws)

Stage | SOUL Law | Technical Implementation | Output
1. Perceive | Known vs. unknown | 2D vision detection (OpenCV pipeline) → 3D spatial understanding | Feature map + confidence
2. Plan | Four-way collision | Multi-path collision (VLA / 2D→3D / tool augmentation) → select the best path | Action sequence
3. Execute | Execute, don't perform | Robot-arm grasping / folding / assembly | Physical action
4. Evaluate | Human-in-the-loop QC | Visual re-inspection + human confirmation | Pass / return + feedback
5. Evolve | Files as memory | Update world model + tune parameters | New cognitive node
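Since the bundle ships no implementation, the documented API is best read as a specification. A hypothetical skeleton of what a five-stage `run_cycle` could look like (class and field names follow the docs; every stage body is a placeholder, not the author's code):

```python
from dataclasses import dataclass, field

@dataclass
class Perception:
    confidence: float = 0.0

@dataclass
class Plan:
    action_sequence: list = field(default_factory=list)

@dataclass
class Evolution:
    new_knowledge: list = field(default_factory=list)
    feedback: dict = field(default_factory=dict)

@dataclass
class CycleResult:
    perception: Perception
    plan: Plan
    evolution: Evolution

class VisionActionEvolutionLoop:
    def __init__(self, workspace="."):
        self.workspace = workspace
        self.world_model = {}  # "files as memory" would persist this to disk

    def run_cycle(self, image_path, known=(), unknown=()):
        # 1. Perceive: 2D detection -> 3D understanding (placeholder)
        perception = Perception(confidence=1.0 if image_path else 0.0)
        # 2. Plan: choose an action sequence among candidate paths (placeholder)
        plan = Plan(action_sequence=["grasp", "fold", "inspect"])
        # 3. Execute / 4. Evaluate would drive and re-check hardware here
        # 5. Evolve: record what moved from unknown toward known
        evolution = Evolution(new_knowledge=list(unknown),
                              feedback={"known": list(known)})
        return CycleResult(perception, plan, evolution)

    def inject_feedback(self, feedback):
        # Merge evaluated feedback into the in-memory world model
        self.world_model.update(feedback)
```

The point of the sketch is the data flow: each cycle returns one result object per stage, and the evolve stage's feedback is fed back in before the next cycle.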

Three-Stage Bridge Architecture (not end-to-end)

Stage 1: 2D Detection (implemented)
  Phone photo → perspective correction → binarization → line detection → SVG/DXF
  [diepre-vision-cognition]

Stage 2: 3D Spatial Understanding (bridge layer)
  2D lines → parametric 3D → spatial coordinate mapping → fold-order reasoning
  [Reference: arXiv:2412.11892]

Stage 3: Action Planning
  3D model → grasp-point computation → force-control parameters → action sequence generation
  [References: arXiv:2510.11027, arXiv:2510.17111]
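The three stages above compose as a simple function pipeline. A toy sketch of that data flow (the stage internals are placeholders; the hard-coded segments and the flat-sheet z=0 lift are illustrative assumptions, only the shape of the flow mirrors the diagram):

```python
def detect_2d(photo):
    # Stage 1: photo -> 2D line segments (stand-in for the OpenCV pipeline)
    return [((0, 0), (10, 0)), ((10, 0), (10, 5))]

def lift_to_3d(lines):
    # Stage 2: 2D lines -> parametric 3D edges (z = 0 for the unfolded sheet)
    return [((x1, y1, 0.0), (x2, y2, 0.0)) for (x1, y1), (x2, y2) in lines]

def plan_actions(edges):
    # Stage 3: 3D edges -> one fold action per edge
    return [{"action": "fold", "edge": e} for e in edges]

# Compose the bridge: photo -> 2D -> 3D -> actions
actions = plan_actions(lift_to_3d(detect_2d("box_photo.jpg")))
```

Keeping each stage a plain function makes the bridge layers independently testable, which is the stated rationale for the non-end-to-end design.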

Tool Augmentation Strategy

Rather than replacing the existing pipeline with an end-to-end VLA model, wrap the OpenCV pipeline as callable tools:

# Wrap the existing pipeline as tools
tools = {
    "detect_dieline": diepre_vision.analyze,            # 2D detection
    "correct_perspective": opencv.correct_perspective,  # perspective correction
    "generate_dxf": vectorizer.to_dxf,                  # vectorization
    "estimate_3d": spatial_estimator.from_2d,           # 3D estimation
    "plan_grasp": grasp_planner.calculate,              # grasp planning
}
# The VLA model calls these tools rather than doing everything itself
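A registry like the one above implies a dispatcher: the model emits a tool name plus arguments, and the host looks up the callable and runs it. A minimal sketch of that dispatch step (the registry entries here are stand-in lambdas, since the real pipeline modules are not in the bundle):

```python
# Stand-in tool registry; real entries would be the pipeline functions above
tools = {
    "detect_dieline": lambda image: {"lines": 42, "source": image},
    "generate_dxf": lambda lines: f"{lines} lines -> out.dxf",
}

def dispatch(tool_call: dict):
    """Run one model-emitted tool call: {'name': ..., 'args': {...}}."""
    name = tool_call["name"]
    if name not in tools:
        # Refuse anything not explicitly registered: the model gets
        # exactly the declared tools, nothing more
        raise KeyError(f"unknown tool: {name}")
    return tools[name](**tool_call["args"])

result = dispatch({"name": "detect_dieline", "args": {"image": "box.jpg"}})
```

The allow-list check is what makes tool augmentation safer than end-to-end control: the model can only invoke capabilities the host has deliberately exposed.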

Academic References

  1. Vlaser: Synergistic Embodied Reasoning - embodied-reasoning VLA model; theoretical basis for action planning
  2. Efficient VLA Models for Embodied Manipulation - efficient VLA optimization, suited to local deployment
  3. From 2D CAD to 3D Parametric via VLM - core technique for the 2D→3D bridge layer
  4. SAGE: Multi-Agent Self-Evolution - four-agent closed loop, the academic counterpart of closed-loop iteration
  5. Tool-Augmented VLLMs for CAD (ICCV 2025) - theoretical support for the tool-augmentation strategy
  6. Self-evolving Embodied AI - memory self-update + task self-switching + model self-evolution

Installation

clawhub install vision-action-evolution-loop
# or install manually
cp -r skills/vision-action-evolution-loop ~/.openclaw/skills/

Invocation

from skills.vision_action_evolution_loop import VisionActionEvolutionLoop

loop = VisionActionEvolutionLoop(workspace=".")

# Single cycle
result = loop.run_cycle(
    image_path="path/to/box_photo.jpg",
    known=["2D detection verified 6/6", "Bobst ±0.15mm accuracy"],
    unknown=["3D fold order", "force-control parameter tuning"]
)

# result contains the output of all five stages
print(result.perception.confidence)    # perception confidence
print(result.plan.action_sequence)     # action sequence
print(result.evolution.new_knowledge)  # newly acquired knowledge

# Continuous evolution (multiple iterations)
for i in range(10):
    result = loop.run_cycle(...)
    loop.inject_feedback(result.evolution.feedback)
    # Each iteration updates the internal world model

Relationship to Other Skills

self-evolution-cognition (parent node: self-evolution framework)
    ├── vision-action-evolution-loop (this skill: vision-action-evolution)
    │       └── diepre-vision-cognition (child node: 2D vision detection)
    └── human-ai-closed-loop (sibling node: human-machine closed loop)

arxiv-collision-cognition (cross-reference: paper-collision input)
