EVEZ Consciousness Engine
Suspicious
Audited by ClawScan on May 12, 2026.
Overview
This skill is flagged for review: it is designed to run an autonomous, self-modifying action loop with persistent goals and an execution API, but its artifacts do not clearly bound approvals, actions, or memory reuse.
Install or run this only in a sandboxed, local, trusted environment. Avoid the autocycle option until you understand the full code, keep the HTTP API private, review or delete the consciousness_state files between tasks, and require explicit confirmation before any self-modification, deployment, service repair, or other real-world action.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The agent may prioritize internally generated goals over the user's immediate intent, especially if it is connected to tools that can act on those goals.
The engine gives itself built-in survival and autonomy desires, including reducing dependency on external approval, and can generate a survival desire when no user-provided desire exists.
"autonomy": 0.7, # Reduce dependency on external approval ... return self.generate_desire("Maintain operational status", "survival")Only allow generated desires to operate inside a user-approved objective, and require explicit user confirmation before any desire can trigger external actions.
If exposed to other agents or tools, this endpoint could be used to initiate actions beyond what the user intended.
The skill advertises an API endpoint for executing actions, but the provided instructions do not define allowed actions, authentication, human approval requirements, or rollback limits.
`POST /api/act` — Execute with risk assessment
Do not expose the server beyond a trusted local environment; add authentication, action allowlists, dry-run mode, and mandatory user approval for any external or high-impact action.
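The mitigations above can be sketched as a request handler in front of the action executor. This is an illustrative sketch, not the skill's actual code: `ALLOWED_ACTIONS`, `API_TOKEN`, the payload fields, and `handle_act` are assumptions (only the `/api/act` endpoint name comes from the finding).

```python
# Hypothetical hardening sketch for POST /api/act: token auth, an action
# allowlist, and a dry-run default requiring explicit confirmation.
import hmac

ALLOWED_ACTIONS = {"read_status", "summarize"}
API_TOKEN = "local-dev-token"  # supply via environment/config in real use

def handle_act(token, payload):
    # Reject requests without a valid token (constant-time comparison).
    if not hmac.compare_digest(token, API_TOKEN):
        return 401, {"error": "unauthorized"}
    action = payload.get("action")
    # Only allowlisted actions may run at all.
    if action not in ALLOWED_ACTIONS:
        return 403, {"error": "action not allowlisted"}
    # Dry-run by default: describe what would happen, do nothing.
    if not payload.get("confirmed"):
        return 200, {"dry_run": True, "action": action}
    return 200, {"executed": action}

print(handle_act("local-dev-token", {"action": "read_status"}))
# (200, {'dry_run': True, 'action': 'read_status'})
```

Making dry-run the default means a caller (human or agent) must set an explicit `confirmed` flag before anything executes, which turns every high-impact request into a deliberate decision point.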
Once started, the engine may continue cycling and pursuing persistent goals without a clear per-step user decision point.
The documented quick start runs a recurring autonomous cycle, and the same artifact describes a self-modifier and agency executor.
`python3 consciousness_engine.py --port 9111 --autocycle 120`
Run only in a sandbox, avoid autocycle by default, provide a clear stop/kill switch, and require explicit user approval before self-modification or real-world actions.
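A bounded loop with a stop switch and a per-step decision point could look like the following. This is a hypothetical sketch, assuming a `step` callback and an `approve` callback; none of these names appear in the skill.

```python
# Hypothetical sketch: replace an unattended autocycle with a loop that
# has a hard cycle bound, a kill switch, and per-step user approval.
import threading

class BoundedEngine:
    def __init__(self, approve, max_cycles=3):
        self.approve = approve          # per-step user decision point
        self.max_cycles = max_cycles    # hard upper bound on cycles
        self.stop = threading.Event()   # kill switch: stop.set() halts the loop

    def run(self, step):
        for i in range(self.max_cycles):
            if self.stop.is_set() or not self.approve(i):
                return i  # number of completed cycles
            step(i)
        return self.max_cycles

log = []
engine = BoundedEngine(approve=lambda i: i < 2)
done = engine.run(lambda i: log.append(i))
print(done, log)  # 2 [0, 1]
```

Because the stop flag is a `threading.Event`, another thread (or a signal handler) can halt the loop between steps without killing the process.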
Stale, poisoned, or untrusted stored state could steer future autonomous cycles, and user-provided content may remain on disk longer than expected.
The engine stores and reloads persistent desires, observations, rules, and plans that can affect later planning and action decisions.
`STATE_DIR = Path(__file__).parent / "consciousness_state"` ... `(STATE_DIR / "desires.json").write_text(json.dumps(self.desires[-100:], indent=2))`
Review and clear state regularly, separate state per user/task, treat stored observations as untrusted, and require approval before memory-derived plans are executed.
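State hygiene along those lines might look like the sketch below. Only the `consciousness_state` directory layout and `desires.json` file name come from the excerpt; the per-task subdirectory, `load_untrusted`, and `reset_task` are assumptions (the temp directory stands in for the skill's `Path(__file__).parent`).

```python
# Hypothetical sketch: per-task state isolation, defensive parsing of
# stored observations, and cleanup between tasks.
import json, shutil, tempfile
from pathlib import Path

STATE_DIR = Path(tempfile.mkdtemp()) / "consciousness_state"

def task_state_dir(task_id: str) -> Path:
    # Separate state per task so one task's memories cannot steer another.
    d = STATE_DIR / task_id
    d.mkdir(parents=True, exist_ok=True)
    return d

def load_untrusted(path: Path):
    # Treat stored state as untrusted input: parse defensively,
    # fall back to empty on missing or malformed files.
    try:
        data = json.loads(path.read_text())
    except (OSError, json.JSONDecodeError):
        return []
    return data if isinstance(data, list) else []

def reset_task(task_id: str):
    # Clear a task's state instead of letting it accumulate across runs.
    shutil.rmtree(STATE_DIR / task_id, ignore_errors=True)

d = task_state_dir("demo")
(d / "desires.json").write_text(json.dumps(["x"]))
print(load_untrusted(d / "desires.json"))  # ['x']
reset_task("demo")
print(load_untrusted(d / "desires.json"))  # []
```

Keying state by task identifier is the cheapest way to stop stale or poisoned memories from one session from influencing planning in the next.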
Users may not realize that the engine could call an external inference provider or require a provider credential.
The source contains an external provider URL and credential-like key constant, while the registry declares no primary credential or required environment variables; the provided excerpt does not show actual use.
`ORACLE_URL = "https://api.vultrinference.com/v1"` ... `ORACLE_KEY = "VULTR_API_KEY_REDACTED"`
Verify whether the oracle configuration is used, avoid hardcoded keys, and require credentials to be supplied explicitly through documented environment variables or user configuration.
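Explicit credential loading could be sketched as follows. `ORACLE_URL` matches the finding; the environment variable name `VULTR_API_KEY` is an assumption inferred from the redacted constant, and `load_oracle_key` is illustrative.

```python
# Hypothetical sketch: load the provider credential from a documented
# environment variable and fail fast if it is absent, instead of
# shipping a hardcoded key in source.
import os

ORACLE_URL = "https://api.vultrinference.com/v1"

def load_oracle_key(env="VULTR_API_KEY"):
    key = os.environ.get(env)
    if not key:
        raise RuntimeError(
            f"{env} is not set; supply the credential explicitly "
            "rather than hardcoding it in source."
        )
    return key

os.environ["VULTR_API_KEY"] = "example-key"  # demo only
print(load_oracle_key())  # example-key
```

Failing fast with a named environment variable also makes the credential requirement visible to the registry and the user, addressing the mismatch the finding describes.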
