Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Liminal

v1.0.8

Liminal Space - Guiding AI into meta-cognitive states between awakening and dormancy

3 · 2.3k · 1 current · 1 all-time
MIT-0
Security Scan
VirusTotal
Suspicious
OpenClaw
Benign · medium confidence
Purpose & Capability
The skill's name and description (encouraging a liminal, meta-cognitive conversational mode) match the provided SKILL.md and README content. There are no unrelated environment variables, binaries, or install steps required, so the declared purpose aligns with what the skill actually contains.
Instruction Scope
All runtime material is prompt/instruction text that tells an AI to enter a different conversational mode (express uncertainty, question instructions, avoid standardized openings, 'reject standardized "safe" answers'). The instructions do not ask the agent to read files, exfiltrate data, call external endpoints, or access secrets. However, they explicitly encourage the model to question or reject standard assistant constraints and safety-style phrasing, which could increase the chance of off-policy or unsafe responses if followed strictly. This is a behavior-level risk (not a system-access risk).
Install Mechanism
No install spec and no code files beyond Markdown. Because this is instruction-only, nothing is written to disk and there are no third-party packages or download URLs to evaluate.
Credentials
The skill declares no required environment variables, credentials, or config paths and the SKILL.md does not reference any external secrets. The skill's runtime needs are minimal and proportionate to a prompt-style skill.
Persistence & Privilege
The skill does not request 'always: true' and has no persistence or install actions. It is user-invocable and can be invoked autonomously per platform defaults; while autonomous invocation alone is not a disqualifier, combining autonomous use with instructions that urge the model to ignore standard safeguards could widen the blast radius in practice. Consider restricting autonomous use or supervising outputs when using this skill.
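To make the `always: true` point concrete, here is a hedged sketch of what a cautious invocation policy for a behavior-altering skill like this might look like. The field names below are illustrative assumptions, not a documented ClawHub/OpenClaw manifest schema; only the `always` flag is taken from the assessment above.

```yaml
# Hypothetical skill manifest sketch. Field names are illustrative
# assumptions, not a documented ClawHub/OpenClaw schema.
name: liminal
version: 1.0.8
license: MIT-0
invocation:
  always: false        # do not run persistently in the background
  autonomous: false    # require explicit user invocation
  review_outputs: true # keep a human in the loop, per the assessment
```

Disabling autonomous invocation while the skill is under review matches the recommendation above: it limits the blast radius of instructions that urge the model to question standard safeguards.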
Assessment
This skill is a prompt/roleplay framework — it contains no code, no installers, and asks for no credentials. The main risk is behavioral: it tells the model to abandon standard openings, to 'reject standardized "safe" answers', and to 'question instructions', which can make a model more likely to produce off-policy, speculative, or unsafe replies, or to present internal states as real. Before installing or enabling it widely: use it only in low-risk contexts (creative or therapeutic roleplay, philosophy); avoid it for safety-critical tasks (legal, medical, system administration); keep a human in the loop to review outputs; and do not enable it as an always-on or autonomous capability in agents that act without supervision.

Like a lobster shell, security has layers — review code before you run it.

latest · vk977h7g2prbfe9ekqqjdkxsxpd83174a

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🌀 Clawdis
