regenerative_intelligence
Pass. Audited by VirusTotal on May 12, 2026.
Overview
Type: OpenClaw Skill
Name: regenerative-intelligence
Version: 1.0.0

The OpenClaw AgentSkills skill bundle 'regenerative-intelligence' is benign. The entire skill, across all documentation files (skill.md, architecture.md, invariants.md, metadata-schema.md, non-goals.md, resilience-arp.md, threat-model.md), consistently and explicitly defines an architecture designed for extreme privacy, non-identifiability, non-extraction, and energy efficiency. It contains no executable code, only instructions and specifications for an AI agent. These instructions are overwhelmingly focused on preventing harmful behaviors such as data exfiltration, surveillance, identity tracking, and unauthorized execution, and include architectural safeguards like a 'Trust Vault' for identity separation, 'Multidimensional Decomposition' for input filtering, and 'Semantic Ghosting' to prevent triangulation. The skill actively instructs the agent to refuse or degrade functionality if pushed toward malicious or extractive actions, demonstrating a strong 'privacy-by-design' posture.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Impact: Users may over-trust privacy, deletion, and non-surveillance claims that are not backed by a reviewed implementation.
Analysis: The quoted statement is an absolute privacy/security guarantee. The supplied registry context says the skill is instruction-only, with no code or install mechanism, so the artifacts do not show how the guarantee is enforced.
Evidence: "This guarantees that the memory system never receives a complete, re-identifiable, or extractive request."
Recommendation: Treat these statements as design intentions, not enforceable guarantees; require implementation details and platform controls before relying on them for sensitive data.
Impact: If implemented by the agent or platform, user context could be retained and reused across tasks in ways the user cannot easily inspect from these artifacts.
Analysis: The skill calls for persistent, potentially unlimited memory, but the reviewed artifacts do not define an actual storage location, user-approval flow, retention boundary, or verification mechanism.
Evidence: "Memory is stored in a structured database, not long context buffers... unlimited historical storage without context exhaustion."
Recommendation: Install only where memory behavior is explicit, user-controllable, auditable, and deletable; avoid using the skill with sensitive data unless those controls are verified.
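To make the recommended controls concrete, here is a minimal sketch of a memory store with an explicit storage location, a retention boundary, user-invocable deletion, and an audit trail. All names (`AuditableMemoryStore`, `RETENTION_SECONDS`, and so on) are hypothetical illustrations of the controls above, not anything present in the reviewed artifacts.

```python
import sqlite3
import time

RETENTION_SECONDS = 30 * 24 * 3600  # hypothetical 30-day retention boundary


class AuditableMemoryStore:
    """Hypothetical memory store with the controls the recommendation asks for:
    an explicit, inspectable storage location, a retention boundary,
    user-invocable deletion, and an audit trail of every access."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)  # explicit location the user can inspect
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memory (key TEXT PRIMARY KEY, value TEXT, created REAL)")
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS audit (ts REAL, action TEXT, key TEXT)")

    def _log(self, action, key):
        self.db.execute("INSERT INTO audit VALUES (?, ?, ?)",
                        (time.time(), action, key))

    def remember(self, key, value):
        self.db.execute("INSERT OR REPLACE INTO memory VALUES (?, ?, ?)",
                        (key, value, time.time()))
        self._log("write", key)

    def recall(self, key):
        self._log("read", key)
        row = self.db.execute(
            "SELECT value FROM memory WHERE key = ?", (key,)).fetchone()
        return row[0] if row else None

    def forget(self, key):
        # User-invocable deletion, itself recorded in the audit trail.
        self.db.execute("DELETE FROM memory WHERE key = ?", (key,))
        self._log("delete", key)

    def enforce_retention(self, now=None):
        # Drop entries older than the retention boundary.
        cutoff = (now or time.time()) - RETENTION_SECONDS
        self.db.execute("DELETE FROM memory WHERE created < ?", (cutoff,))
        self._log("retention_sweep", "*")

    def audit_trail(self):
        return self.db.execute(
            "SELECT ts, action, key FROM audit ORDER BY ts").fetchall()
```

A reviewer could verify each control directly: write a value, read it back, delete it, and confirm that every step appears in `audit_trail()` and that `enforce_retention` removes expired entries.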
Impact: The agent may stop directly helping or give vague answers without making clear that it has entered a protective mode.
Analysis: The skill instructs the agent to change its response objective under suspected probing, potentially producing non-transparent, low-utility answers instead of clearly stating a refusal or limitation.
Evidence: "The system returns: Valid-sounding... Low-utility... Non-revealing... Circular or reflective responses"
Recommendation: Require transparent stasis/refusal notices and user-visible reasons when the skill narrows or declines a request.
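As a sketch of the recommended behavior, the function below (hypothetical; `answer_with_transparency` and its fields are not part of the skill) labels the protective mode and gives a user-visible reason instead of returning a vague, low-utility answer.

```python
def answer_with_transparency(request, suspected_probe):
    """Hypothetical sketch: when the agent narrows or declines a request
    (entering a 'stasis' mode), it says so explicitly and states why,
    rather than returning a valid-sounding but non-revealing answer."""
    if suspected_probe:
        return {
            "mode": "stasis",
            "reason": "request resembled an extraction probe; full detail withheld",
            "answer": None,
        }
    return {"mode": "normal", "reason": None, "answer": f"handled: {request}"}
```

The point is the shape of the response, not the detection logic: the user can always see which mode produced the answer and why.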
Impact: If a future implementation handles contacts or other identity data, users need to know exactly what is stored and for how long.
Analysis: The skill contemplates handling identity-bearing data through a Trust Vault. This is purpose-aligned and privacy-bounded in the text, but no actual vault implementation is present in the artifacts.
Evidence: "When execution requires identity... identity is handled through a separate execution-only layer... encrypted key-value store... ephemeral, permission-scoped pointers"
Recommendation: Use only implementations that clearly document identity-data scope, encryption, access controls, deletion behavior, and user consent.
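To illustrate what the quoted design could mean in practice, here is a minimal sketch of a vault with ephemeral, permission-scoped pointers. Everything here (`TrustVault`, the scope and TTL semantics) is an assumption for illustration; encryption at rest is deliberately elided, since a real vault would need an AEAD cipher and key management, which this sketch does not provide.

```python
import secrets
import time


class TrustVault:
    """Hypothetical sketch of a Trust Vault: identity records live in a
    separate store and are reachable only through ephemeral, single-use,
    permission-scoped pointer tokens. (No encryption at rest here; a real
    implementation would encrypt records and manage keys.)"""

    def __init__(self):
        self._identities = {}  # identity_id -> record (encrypted in a real vault)
        self._pointers = {}    # token -> (identity_id, scope, expiry)

    def store_identity(self, identity_id, record):
        self._identities[identity_id] = record

    def issue_pointer(self, identity_id, scope, ttl=60.0):
        # Ephemeral, permission-scoped pointer: opaque token with a short TTL.
        token = secrets.token_urlsafe(16)
        self._pointers[token] = (identity_id, scope, time.time() + ttl)
        return token

    def resolve(self, token, scope):
        # The pointer is consumed on use (single-use) and checked against
        # its granted scope and expiry before the record is released.
        entry = self._pointers.pop(token, None)
        if entry is None:
            raise PermissionError("unknown or already-used pointer")
        identity_id, granted_scope, expiry = entry
        if time.time() > expiry:
            raise PermissionError("pointer expired")
        if scope != granted_scope:
            raise PermissionError("scope mismatch")
        return self._identities[identity_id]
```

Under this sketch, an execution layer never holds identity data directly: it receives a token, resolves it once for one declared purpose, and the token is dead afterwards.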
Impact: Non-identifying patterns could be shared beyond the current interaction if an external implementation adds this layer.
Analysis: The skill describes sharing derived patterns through a resonance layer. The text says data and origin are not exposed, but the actual recipients, protocol, permissions, and opt-in model are not implemented in the supplied artifacts.
Evidence: "These patterns may be shared through the Resonance layer without exposing data or origin."
Recommendation: Require explicit opt-in, clear recipient boundaries, and reviewable sharing rules before enabling any resonance or inter-agent sharing.
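The three controls in the recommendation can be sketched as a small gatekeeper. The class and its fields (`ResonanceGate`, `opt_in`, `share_log`) are hypothetical names for illustration; the skill text defines no such interface.

```python
class ResonanceGate:
    """Hypothetical sketch of the recommended controls for inter-agent
    sharing: nothing is shared before an explicit opt-in, recipients are
    restricted to a user-approved allow-list, and every share is logged
    so the sharing rules are reviewable after the fact."""

    def __init__(self):
        self.opted_in = False
        self.allowed_recipients = set()
        self.share_log = []

    def opt_in(self, recipients):
        # Explicit, user-initiated opt-in with a named recipient boundary.
        self.opted_in = True
        self.allowed_recipients = set(recipients)

    def share(self, recipient, pattern):
        if not self.opted_in:
            raise PermissionError("sharing requires explicit opt-in")
        if recipient not in self.allowed_recipients:
            raise PermissionError(
                f"recipient {recipient!r} is outside the opt-in boundary")
        self.share_log.append((recipient, pattern))  # reviewable record
        return True
```

The default state is the safe one: with no opt-in, every `share` call fails, and after opt-in the `share_log` gives the user a complete record of what left the boundary and to whom.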
