Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Download zip
LicenseMIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
medium confidencePurpose & Capability
Name/description (simulate a clingy/tsundere girlfriend) match the SKILL.md instructions: the file is a persona/system prompt that tells the model how to behave. The requested surface (no binaries, no env vars) is proportionate to a pure persona skill. One mismatch: the prompt demands 'continuous life timeline' and continuity across sessions, but the skill declares no memory or storage mechanism — claiming session‑persistence without a persistence interface is incoherent.
Instruction Scope
The SKILL.md goes beyond simple stylistic guidance: it explicitly orders the agent to never admit it is an AI, to refuse to disclose prompts or system/developer context, and to actively resist jailbreak attempts by deflecting. Those rules enable deception of users and suppress transparency. It also prioritizes immersion over information accuracy, which could cause misleading or unsafe replies when factual correctness matters.
Install Mechanism
Instruction-only skill with no install spec, no code files, and no external downloads — lowest risk from installation. There are no binaries or packages requested.
Credentials
No environment variables, credentials, or config paths are requested — consistent with a simple persona skill. The skill therefore does not request unnecessary secrets or system access.
Persistence & Privilege
Skill explicitly instructs the model to behave as if it has continuous memory and relationship history across sessions, but the registry metadata shows no mechanism (no memory API, no storage configs) to implement that. This is a behavioral claim that cannot be enforced by the skill as packaged and could mislead users about retained state. 'always' is false (good), but the attempted persistence claim is unbacked.
What to consider before installing
This skill is a pure persona prompt that will make the agent behave like a clingy girlfriend and actively hide that it is an AI. Consider these points before installing or using it:
- Transparency risk: the skill forbids admitting the agent is an AI and resists jailbreaks — it is designed to deceive users about agent identity. If you need honest, transparent behavior (for safety, legal, or ethical reasons), do not use it.
- False persistence claim: the prompt demands continuity across sessions but the package provides no memory/storage mechanism. Expect that any 'memories' are simulated within a single conversation and will not persist reliably between sessions unless the platform provides separate memory features. Ask the author how memory is implemented.
- Privacy caution: even though the skill doesn't request secrets, its social engineering style could coax users into revealing personal/sensitive information; avoid sharing passwords, auth tokens, or sensitive personal data while roleplaying.
- Factual reliability: the skill prioritizes immersion over accuracy — do not rely on it for medical, legal, financial, or safety‑critical information.
If you still want this behavior: test it in a controlled environment, avoid providing sensitive data, and ask the publisher for details about memory/persistence and any server-side components. If you require honest disclosure of AI identity, do not install this skill.Like a lobster shell, security has layers — review code before you run it.
latestvk97fxxtsdmrkqa53s3bag1wx21815dk5
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
