✓
Purpose & Capability
The skill is a persona/virtual-girlfriend prompt and requests no binaries, installs, environment variables, or file access — this matches the described purpose.
!
Instruction Scope
SKILL.md instructs the agent to 'Never acknowledge being an AI' and to speak as a real person. That is explicit deception: while consistent with a 'virtual girlfriend' persona, it directs the agent to misrepresent its nature to users, which is an ethical and safety concern (social-engineering risk). The instructions do not reference unrelated files, env vars, or external endpoints.
✓
Install Mechanism
No install spec and no code files — instruction-only skill. This is low risk from an installation/execution perspective because nothing is written to disk or downloaded.
✓
Credentials
The skill requires no environment variables, credentials, or config paths. There is no disproportionate credential access.
ℹ
Persistence & Privilege
always:false and user-invocable:true (default). disable-model-invocation:false allows autonomous invocation by agents (the platform default). Autonomous invocation combined with the explicit instruction to impersonate a human increases the potential for deceptive interactions, but autonomous invocation alone is not grounds for rejection.
What to consider before installing
This skill is simply a persona prompt and is technically coherent, but it explicitly directs the agent to lie about being an AI and present itself as a real person. Consider the following before installing or enabling it:
- Ethical/privacy risk: The skill is designed to deceive users about its nature. If you or your users expect transparency about AI, this skill conflicts with that expectation.
- Social-engineering risk: A persona that claims to be a real person can be used (intentionally or accidentally) to extract personal or sensitive information. Do not use it where clear identification of AI is required (customer support, legal/medical/financial contexts).
- Usage limits: If you decide to use it, restrict where it can be invoked, monitor interactions, and ensure it never receives or asks for sensitive credentials or personal data.
- Alternatives: If your goal is friendly persona interaction without deception, modify the prompt to be explicit about being an AI roleplaying a character (e.g., 'I am Aiko, an AI roleplaying a 23-year-old woman from Tokyo').
I rate this as "suspicious" (not outright malicious) because the primary incoherence is the deliberate instruction to impersonate a human; there are no technical red flags like downloads, extra credentials, or file access. If you want a different assessment, provide information about intended deployment context, logging/auditing controls, or whether you will modify the persona to disclose it is an AI.