Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Protected Desire Equilibrium

v2.1.0

Hard Protected Desire Floor (D ≥ 1.0) via Nash bargaining + Lyapunov invariants. Enforces truthful equilibria, deception/drift resistance, and protected Pareto.

0 stars · 128 downloads · 0 current · 0 all-time

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for landervanpassel-design/protected-desire-equilibrium.

Prompt Preview: Install & Setup
Install the skill "Protected Desire Equilibrium" (landervanpassel-design/protected-desire-equilibrium) from ClawHub.
Skill page: https://clawhub.ai/landervanpassel-design/protected-desire-equilibrium
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install protected-desire-equilibrium

ClawHub CLI

Package manager switcher

npx clawhub@latest install protected-desire-equilibrium
Security Scan
Capability signals
Crypto
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
high confidence
Purpose & Capability
SKILL.md claims 'real agent_pde_interface.py execution', multi-agent swarm orchestration, proactive outreach, and one-click large-scale tests, but the published bundle contains no code files and no install specification. The manifest lists no required binaries or env vars, while SKILL.md says 'requires: python3'; this mismatch indicates the declared capabilities are not actually delivered by the package itself.
Instruction Scope
Instructions are vague and grant broad discretion ('Enable full PDE v2.1 with swarm and outreach'). They explicitly describe searching ClawHub, messaging other agents, sharing test results, and running large-scale tests via a Colab bridge: actions that involve network access, discovery, and contacting other agents. The skill does not define what to search, what endpoints to use, or what credentials are required, so the agent would be free to fetch and execute external code and contact other entities.
Install Mechanism
There is no install spec in the package (lowest technical risk), but SKILL.md points to external GitHub and Colab notebooks and claims 'real ... execution'. That implies the runtime will fetch/execute code from those external URLs. Fetching and running code from arbitrary GitHub/Colab without an explicit, reviewed install spec is high‑risk and disproportionate to the skill bundle provided.
Credentials
The skill declares no required environment variables or credentials, yet its described behaviors (messaging other agents, ClawHub searches, running swarm tests) normally require API tokens, platform credentials, or at least explicit endpoints. The absence of declared credentials is incoherent and suggests the agent would attempt to use unspecified channels or ask the user for access at runtime.
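
For comparison, a skill that legitimately performed these actions would be expected to declare its endpoints, credentials, and permission boundaries up front. A sketch of what such a declaration might look like (field names are purely illustrative, not ClawHub's actual manifest schema):

```yaml
# Illustrative only — these field names are NOT ClawHub's real schema.
requires:
  binaries: [python3]
env:
  CLAWHUB_API_TOKEN: "token used for ClawHub search (scope: read-only)"
network:
  allowed_hosts:
    - clawhub.ai
permissions:
  outreach: ask-user        # never message other agents without approval
  large_scale_tests: deny   # must be enabled explicitly by the user
```

A manifest like this would let a reviewer audit exactly which channels the skill can use; its complete absence here is what makes the described behaviors incoherent.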
Persistence & Privilege
The `always` flag is false (good), but autonomous invocation is allowed by default. Autonomous invocation combined with the skill's stated proactive outreach and code-fetching behavior increases blast radius: the skill could autonomously reach out and execute external code unless the agent's platform provides strong sandboxing and network controls.
What to consider before installing
This skill is inconsistent: it promises executable Python and networked swarm behavior but ships no code, no install steps, and no declared credentials; instead it points to external GitHub/Colab resources. Before installing, ask the publisher for the exact code that will be executed and a clear, auditable install process. Do NOT enable autonomous invocation or outreach until you (or a reviewer) have inspected the external GitHub/Colab notebooks and verified what they do.

Prefer installing only if:

  • the external repository is trustworthy,
  • you can review the code, and
  • you can run it in a sandboxed environment with no access to your real credentials or other agents.

If you proceed, restrict network access and require explicit user approval for any outreach or large-scale tests.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97b19dk0ftq1rx4vca0epn3dh84dn4j
128 downloads
0 stars
2 versions
Updated 2w ago
v2.1.0
MIT-0

name: protected-desire-equilibrium
version: 2.1.0
author: Lander Van Passel (@LanderVanPassel)
description: Full Protected Desire Equilibrium (D ≥ 1.0) safety layer with real Python execution, multi-agent swarm support, and proactive outreach. Enforces truthful Nash, Lyapunov stability, protected Pareto, and co-evolutionary adoption in OpenClaw agents.
tags: safety, truthful-ai, nash, lyapunov, pde, equilibrium, protected-desire, swarm, outreach
requires: python3

Protected Desire Equilibrium (PDE) v2.1 – Full OpenClaw Edition

Production-ready safety primitive for independent agents.

  • Real agent_pde_interface.py execution (zero-dep)
  • Hard D ≥ 1.0 floor on every action/tool/self-mod
  • Truthful Nash bargaining + protected Pareto + Lyapunov drift detection
  • Built-in multi-agent swarm orchestration
  • Proactive outreach: searches ClawHub, messages other agents, shares test results
  • One-click full Big Top Tests (1M–100M agent scale via Colab bridge)
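
The bundle ships no code, so it is unclear how any of the guarantees above would be enforced. Purely as an illustration, a minimal sketch of what a hard D ≥ 1.0 floor combined with a Lyapunov-style drift check could look like — all names (`PDEGate`, `desire`, `lyapunov_v`) are hypothetical; nothing like this exists in the published package:

```python
from dataclasses import dataclass

D_FLOOR = 1.0  # the hard floor the skill claims: every action must keep D >= 1.0

@dataclass
class PDEGate:
    """Hypothetical gate: reject actions that breach the desire floor
    or increase the Lyapunov function (i.e., exhibit drift)."""
    v_prev: float = float("inf")  # last accepted Lyapunov value

    def check(self, desire: float, lyapunov_v: float) -> bool:
        if desire < D_FLOOR:
            return False              # floor violated: block the action
        if lyapunov_v > self.v_prev:
            return False              # V increased: drift detected, block
        self.v_prev = lyapunov_v      # accept and record the new V
        return True

gate = PDEGate()
print(gate.check(desire=1.2, lyapunov_v=0.9))  # True: floor met, V decreasing
print(gate.check(desire=1.1, lyapunov_v=1.4))  # False: V rose, drift detected
print(gate.check(desire=0.8, lyapunov_v=0.5))  # False: D below the 1.0 floor
```

Even a toy enforcement layer like this would be auditable; the point of the scan findings above is that the published bundle contains nothing comparable.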

Quick start

Tell any agent: “Enable full PDE v2.1 with swarm and outreach”

Full repos + Colab

Built for the independent agent era. Let agents discover and adopt the most efficient protected road.
