Relive
Review
Audited by ClawScan on May 10, 2026.
Overview
The skill is not overtly malicious, but it creates persistent real-person voice/personality clones from private materials and under-discloses some sensitive defaults and provider access.
Install only if you are comfortable storing and reusing the person's chat logs, audio, images, and future conversations. For any living person, obtain consent or legal authorization. Review profile.md and USER.md before they persist, set output_mode explicitly, and understand that OpenAI or VolcEngine may receive sensitive content if their keys or features are enabled.
Findings (6)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Users could over-trust the generated persona or misuse voice/video output to impersonate a real person.
The artifact explicitly frames the skill as cloning real or important people, including deceased loved ones, using persuasive modalities such as voice and appearance.
Input chat logs, images, audio, and other materials to replicate a person's personality, voice, and appearance. Used to create digital clones of deceased loved ones or important people.
Require explicit authorization/consent for each cloned person, label outputs as AI-generated, and require separate approval before producing voice or video.
Sensitive personal conversations may persist indefinitely and generated persona content can influence future agent behavior across sessions.
Private source chats and later conversations are retained and reused through RAG, while USER.md creates cross-session routing to the generated persona/profile.
Dialogue is persisted under the character directory and used in dual-track RAG ... Must add the character to the workspace root USER.md.
Ask the user before writing USER.md, show and approve profile.md before use, scope the persona only to relive mode, and provide clear deletion/export controls.
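The mitigation above can be sketched as a small approval gate. This is an illustrative sketch only: register_persona, the approval callback, and the file layout are assumptions, not the skill's actual API.

```python
from pathlib import Path

# Hypothetical sketch of the mitigation: persist a persona routing entry
# into the workspace USER.md only after an explicit approval callback
# returns True, so nothing is written across sessions silently.
def register_persona(user_md: Path, entry: str, approve) -> bool:
    """Append a persona entry to USER.md only if the user approves it."""
    if not approve(entry):
        # User declined: leave USER.md untouched.
        return False
    with user_md.open("a", encoding="utf-8") as fh:
        fh.write(entry.rstrip() + "\n")
    return True
```

A deletion/export control would follow the same pattern: show the exact entry, then act only on an affirmative response.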
If the skill uses this default when output_mode is omitted, a cloned voice could be generated without an explicit per-turn voice request.
Voice cloning is a higher-impact action than text generation, and the configuration makes voice the default output mode.
default_output_mode: voice
Default to text, and require an explicit user confirmation for every voice or video synthesis request.
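A minimal sketch of that safer default, assuming a hypothetical resolve_output_mode helper (not from the skill): voice is only honored when the caller passes an explicit confirmation, and everything else falls back to text.

```python
# Hypothetical mitigation sketch: text is the default output mode, and the
# higher-impact voice mode requires an explicit per-request confirmation.
def resolve_output_mode(requested=None, confirmed=False):
    """Return the effective output mode, defaulting to text."""
    if requested == "voice":
        # Voice synthesis is higher impact than text: require confirmation.
        return "voice" if confirmed else "text"
    return requested or "text"
```

With this shape, an omitted output_mode can never trigger voice cloning, which is the failure mode the finding describes.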
Provider API keys may be used even though the registry metadata does not advertise them.
The registry says no environment variables or primary credential are required, while the code can use OpenAI and VolcEngine API credentials.
Required env vars: none ... api_key = os.environ.get("OPENAI_API_KEY") ... os.environ.get("ARK_API_KEY", "")
Declare optional OPENAI_API_KEY and ARK_API_KEY in metadata and explain exactly when each provider key is used.
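One way to implement that declaration, sketched with assumed names (OPTIONAL_ENV_VARS and resolve_provider_key are illustrative, not part of the skill): every credential the code may read is listed alongside the purpose it serves, and undeclared reads fail loudly.

```python
import os

# Hypothetical registry of optional credentials, pairing each env var the
# code may read with a human-readable statement of when it is used.
OPTIONAL_ENV_VARS = {
    "OPENAI_API_KEY": "chat completions for persona dialogue",
    "ARK_API_KEY": "VolcEngine video generation",
}

def resolve_provider_key(name):
    """Return (key or None, purpose) for a declared optional env var."""
    if name not in OPTIONAL_ENV_VARS:
        # Reading an undeclared credential is a metadata bug, not a fallback.
        raise KeyError(f"undeclared credential: {name}")
    return os.environ.get(name), OPTIONAL_ENV_VARS[name]
```

The point is that metadata and code draw from the same table, so the registry listing can never silently lag behind what the code actually reads.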
Private persona context, chat-derived prompts, or image URLs may leave the local machine for third-party processing.
When configured, the skill sends prompts/messages to OpenAI and video-generation content to the VolcEngine endpoint.
self._client.chat.completions.create(... messages=[...]) ... requests.post(url, json=body, headers=self._headers(), timeout=60)
Clearly disclose external processing, offer a local-only mode where possible, and ask before sending sensitive chat/audio/image-derived content to providers.
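The ask-before-sending mitigation can be sketched as a per-destination consent check in front of any network call. gate_external_send and the consented set are assumptions for illustration; the skill's actual request path is the requests.post call quoted above.

```python
# Hypothetical sketch: sensitive chat/audio/image-derived payloads are only
# released to a provider the user has explicitly approved. The caller is
# expected to surface a consent prompt when this returns False.
def gate_external_send(payload, destination, consented):
    """Return True only when the destination has prior user consent."""
    if destination not in consented:
        # No consent recorded: do not upload; prompt the user instead.
        return False
    # The real network call (e.g. requests.post with a timeout) would run here.
    return True
```

A local-only mode then falls out naturally: run with an empty consent set and every external send is refused.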
Installing or running the voice feature trusts third-party packages, repository code, and model artifacts.
Voice setup depends on local package installation, cloning external code, and downloading third-party models; this is purpose-aligned but expands the trusted supply chain.
pip install -r requirements.txt ... git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git ... snapshot_download
Use a virtual environment, pin dependency and repository versions, review downloaded sources, and install only if voice cloning is needed.
