v1.1.0

Cybernetic Evolver

Verdict: Benign

ClawScan verdict for this skill. Analyzed May 1, 2026, 4:53 PM.

Analysis

The artifacts look like a purpose-aligned self-learning Python framework with no evident credential access or data exfiltration, but any real-world actions it controls should be tightly scoped.

Guidance

Before installing or running it, decide what environment the evolver will be allowed to control. It appears safe as a local learning/demo framework, but do not connect its autonomous action loop to sensitive files, accounts, devices, or production systems without explicit approval gates and a clear rollback plan.

Findings (4)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Abnormal behavior control

Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.

Tool Misuse and Exploitation
Severity: Low · Confidence: High · Status: Note
SKILL.md
def act(self, action: int) -> Tuple[np.ndarray, float, bool]: ... def evolve(self, n_steps: int): """Complete evolution loop"""

The skill exposes an autonomous decide/act/evolve loop. This is central to the stated framework, but if connected to a real environment or tools, repeated actions could have effects beyond a toy simulation.

User impact: If you connect this framework to real systems, it may repeatedly choose and execute actions based on its learned policy.
Recommendation: Use it first in simulations or tightly scoped environments, and require explicit approval before connecting its action loop to files, accounts, devices, or production systems.
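
A minimal sketch of such an approval gate, assuming only the act() signature quoted above from SKILL.md; the ApprovalGate class and the Evolver usage names are hypothetical, not part of the skill:

```python
from typing import Tuple

import numpy as np


class ApprovalGate:
    """Wrap a real environment so every action needs operator approval.

    Hypothetical safeguard: mirrors the act() signature quoted from
    SKILL.md and blocks any action the operator does not confirm.
    """

    def __init__(self, env):
        self.env = env

    def act(self, action: int) -> Tuple[np.ndarray, float, bool]:
        answer = input(f"Allow action {action}? [y/N] ").strip().lower()
        if answer != "y":
            raise PermissionError(f"action {action} rejected by operator")
        return self.env.act(action)


# Usage (names assumed): gate the environment before the evolver sees it.
# evolver = Evolver(env=ApprovalGate(real_env))
# evolver.evolve(n_steps=100)  # every step now requires an explicit "y"
```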
Agentic Supply Chain Vulnerabilities
Severity: Info · Confidence: High · Status: Note
metadata
Source: unknown
Homepage: none
Install specifications: No install spec — this is an instruction-only skill.

The registry metadata provides limited provenance and no install recipe. The supplied artifacts do not show malicious install behavior, but provenance is still worth verifying before running local code.

User impact: You have less external information to confirm who maintains the skill or where the code originated.
Recommendation: Review the included files locally and install any needed Python packages from trusted sources before running the demos or importing the module.
Human-Agent Trust Exploitation
Severity: Info · Confidence: Medium · Status: Note
README.md
**Stability protection**: the Lyapunov criterion ensures the evolution process does not go out of control

The documentation frames Lyapunov checks as ensuring the evolution process does not go out of control. That is an algorithmic stability claim, not a general safety guarantee for external actions.

User impact: Users might over-trust the framework's safety if they apply it to real-world actions or tools.
Recommendation: Treat the stability checks as optimization safeguards only; add separate permission, rollback, and monitoring controls for any real-world integration.
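
One hedged example of such a separate control is a checkpoint-and-rollback wrapper that sits entirely outside the framework's Lyapunov logic. It assumes only that the evolver object is deep-copyable and that the caller supplies a health check; every name here is illustrative:

```python
import copy


class RollbackMonitor:
    """Checkpoint/rollback layer independent of the Lyapunov checks.

    Illustrative sketch: assumes the evolver can be deep-copied and
    that `healthy` is a caller-supplied predicate on its state.
    """

    def __init__(self, evolver, healthy):
        self.evolver = evolver
        self.healthy = healthy                    # callable: evolver -> bool
        self.checkpoint = copy.deepcopy(evolver)  # last known-good state

    def evolve(self, n_steps: int) -> None:
        self.evolver.evolve(n_steps)
        if self.healthy(self.evolver):
            self.checkpoint = copy.deepcopy(self.evolver)  # commit
        else:
            self.evolver = copy.deepcopy(self.checkpoint)  # roll back


# Usage (the mean_reward attribute is assumed for illustration):
# monitor = RollbackMonitor(evolver, healthy=lambda e: e.mean_reward > 0.0)
# monitor.evolve(n_steps=50)
```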
Sensitive data protection

Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.

Memory and Context Poisoning
Severity: Low · Confidence: High · Status: Note
CODE/evolver.py
self.experience_buffer = ExperienceBuffer(capacity=buffer_capacity)
self.online_learner = OnlineLearner(state_dim, learning_rate)

The framework keeps an experience buffer and updates an online learner from supplied states, actions, rewards, and errors. This is purpose-aligned, but learned behavior can be influenced by poisoned or sensitive inputs.

User impact: Data you feed into the model during a run may affect later decisions in that same run.
Recommendation: Avoid feeding unnecessary sensitive data into the state/reward stream, and reset or recreate the evolver between unrelated tasks or trust boundaries.
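
A short sketch of that reset discipline, with constructor arguments inferred from the CODE/evolver.py snippet above; the Evolver import path and the run_task callable are hypothetical placeholders:

```python
# Recreate the evolver at each trust boundary so one task's experience
# buffer and online-learner updates cannot steer decisions in the next.
# `Evolver`, its module path, and the constructor arguments are assumed
# from the CODE/evolver.py snippet above.

from evolver import Evolver  # hypothetical import


def run_isolated(run_task, state_dim: int, buffer_capacity: int,
                 learning_rate: float) -> None:
    """Run one task with a fresh evolver, then discard its learned state."""
    evolver = Evolver(state_dim=state_dim,
                      buffer_capacity=buffer_capacity,
                      learning_rate=learning_rate)
    run_task(evolver)
    # Nothing carries over: the buffer and learner go out of scope here.


# Usage (tasks is any iterable of callables that accept an evolver):
# for task in tasks:
#     run_isolated(task, state_dim=8, buffer_capacity=10_000,
#                  learning_rate=1e-3)
```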