Cybernetic Evolver
Analysis
The artifacts are consistent with a purpose-aligned self-learning Python framework, with no evident credential access or data exfiltration; however, any real-world actions it controls should be tightly scoped.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.
```python
def act(self, action: int) -> Tuple[np.ndarray, float, bool]: ...

def evolve(self, n_steps: int):
    """Complete evolution loop"""
```
The skill exposes an autonomous decide/act/evolve loop. This is central to the stated framework, but if connected to a real environment or tools, repeated actions could have effects beyond a toy simulation.
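One common mitigation for this class of risk is a hard step budget on the loop. The sketch below is hypothetical (the `BoundedEvolver` class and its toy environment are not part of the skill); it only illustrates how a decide/act/evolve cycle can be clamped so repeated actions cannot run unbounded against a real environment. The real skill returns `np.ndarray` states; a plain list stands in here to keep the sketch dependency-free.

```python
from typing import List, Tuple


class BoundedEvolver:
    """Hypothetical sketch: an evolve loop capped by a hard action budget."""

    def __init__(self, max_steps: int = 100):
        self.max_steps = max_steps    # hard cap on autonomous actions
        self.steps_taken = 0

    def act(self, action: int) -> Tuple[List[float], float, bool]:
        # Toy environment: reward decays each step; episode ends at budget.
        self.steps_taken += 1
        state = [0.0] * 4
        reward = 1.0 / self.steps_taken
        done = self.steps_taken >= self.max_steps
        return state, reward, done

    def evolve(self, n_steps: int) -> int:
        """Evolution loop, clamped to the remaining budget; returns steps run."""
        executed = 0
        for _ in range(min(n_steps, self.max_steps - self.steps_taken)):
            _, _, done = self.act(action=0)
            executed += 1
            if done:
                break
        return executed
```

With a budget of 5, a request for 100 steps executes only 5, and a second request executes none; the guard, not the caller, decides when autonomous action stops.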
Source: unknown
Homepage: none
Install specifications: none (instruction-only skill)
The registry metadata provides limited provenance and no install recipe. The supplied artifacts do not show malicious install behavior, but the lack of provenance is still something a user should verify before running local code.
**Stability protection**: a Lyapunov criterion ensures the evolution process does not go out of control
The documentation frames Lyapunov checks as ensuring the evolution process does not go out of control. That is an algorithmic stability claim, not a general safety guarantee for external actions.
Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.
```python
self.experience_buffer = ExperienceBuffer(capacity=buffer_capacity)
self.online_learner = OnlineLearner(state_dim, learning_rate)
```
The framework keeps an experience buffer and updates an online learner from supplied states, actions, rewards, and errors. This is purpose-aligned, but learned behavior can be influenced by poisoned or sensitive inputs.
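A standard mitigation for poisoned input is to validate experiences before they enter the buffer. The sketch below is hypothetical (the `GuardedExperienceBuffer` class and its reward bounds are assumptions, not the skill's `ExperienceBuffer`); it shows a bounded buffer that rejects out-of-range rewards so an adversarial stream of extreme values cannot dominate the online learner's updates.

```python
from collections import deque


class GuardedExperienceBuffer:
    """Hypothetical sketch: a bounded buffer with reward-range validation."""

    def __init__(self, capacity: int, reward_bounds=(-10.0, 10.0)):
        self.buffer = deque(maxlen=capacity)   # oldest experiences age out
        self.low, self.high = reward_bounds

    def add(self, state, action, reward) -> bool:
        # Reject experiences whose reward falls outside the trusted range;
        # returns False so callers can log or count rejected inputs.
        if not (self.low <= reward <= self.high):
            return False
        self.buffer.append((state, action, reward))
        return True

    def __len__(self) -> int:
        return len(self.buffer)
```

The fixed `maxlen` also caps memory, and the boolean return lets the caller audit how often inputs are rejected, which is itself a useful poisoning signal.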
