Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Relive

AI digital twin cloning skill. Re:live — chat again with someone you love. Input chat logs, images, audio, and other materials to replicate a person's person...

MIT-0 · Free to use, modify, and redistribute. No attribution required.
0 · 18 · 0 current installs · 0 all-time installs
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Suspicious
high confidence
Purpose & Capability
Functionality in code (LLM generation, CosyVoice voice cloning, and a video-generation API client) aligns with the skill description (digital twin cloning). Asking the user to provide chat logs, reference audio, and an optional image is coherent. However, the skill's registry metadata declares no required environment variables while the code and README reference external credentials (OPENAI_API_KEY, ARK_API_KEY and model downloads that may need tokens). This mismatch (declared none vs. code expecting keys) is unexpected and should be justified.
Instruction Scope
SKILL.md instructs the agent to read and write files under the skill storage (expected) but also to add entries into the workspace root USER.md so the main agent will route relive commands — that modifies a workspace-global file. The runtime flow persists conversation logs and profile.md under storage/default_* (sensitive personal data). The instructions also direct cloning third‑party code (CosyVoice) and bulk model downloads. Reading/writing workspace-level USER.md and persistent storage is broader scope than a simple ephemeral helper and increases risk of accidental leakage or undesired workspace modification.
Install Mechanism
No formal install spec in registry (instruction-only), but README and SKILL.md instruct manual setup: create a Python venv, pip install -r requirements.txt, git clone CosyVoice from GitHub, and use snapshot_download to fetch large models from HuggingFace/Modelscope. These are standard hosts (GitHub, HuggingFace, Modelscope) but involve downloading and executing sizable third-party code and models onto disk — moderate risk and should be done in an isolated environment. No obscure or shortener URLs were used.
Credentials
Registry requirements list no env vars, yet code and docs reference and will use OPENAI_API_KEY (LLMEngine), ARK_API_KEY or video_generation.api_key (VideoGenEngine), and model hosting credentials/clients (huggingface_hub or modelscope). The agent will send content to external services (OpenAI, Volcengine/Ark, HuggingFace/Modelscope) when those keys are present. Sensitive personal data (chat logs, reference audio, transcripts) will be processed and could be transmitted to these external services if configured — the absence of declared required env vars is a proportionality and transparency issue.
Persistence & Privilege
The skill persists chat logs, profiles, voice profiles and generated artifacts under storage/{user_id}_{target_id}/ inside the skill directory (expected). However it also requires the user to add entries to USER.md in the workspace root and the main Agent will read that to route commands, meaning the skill asks to modify a workspace-global file. The skill is not 'always:true', but the ability to alter USER.md and store persistent personal data increases its effective privilege and persistence in the workspace.
What to consider before installing
This package implements a plausible 'digital twin' workflow, but there are several things to check before installing or using it:

- Credentials and external services: The code will call external APIs if keys are present (OpenAI via OPENAI_API_KEY, Volcengine/Ark via ARK_API_KEY), and it will download models from HuggingFace/Modelscope. The skill metadata does not declare these env vars; verify them, and only supply keys you trust and intend to use.
- Data privacy: You will be asked to upload chat logs, reference audio, and transcripts (highly sensitive personal data). These files are persisted under the skill's storage directory and may be sent to external services if API keys are configured. Don't provide private data unless you accept that it may be stored locally and potentially transmitted.
- Workspace modification: The runtime expects you to add characters to USER.md in the workspace root. That modifies a global file used by the agent; if you want to avoid this, consider keeping copies or isolating the skill in a sandboxed workspace.
- Third-party code and large models: The README asks you to git clone CosyVoice and download large models (HuggingFace/Modelscope). Run these steps only in an isolated virtual environment or sandbox machine, and inspect the cloned code before executing it.
- Run safely: Use a dedicated virtualenv and, if possible, an isolated VM/container; review requirements.txt before pip installing; avoid adding API keys unless necessary; and test with non-sensitive dummy data first.

If you need more assurance, ask the skill author for an explicit list of required env vars and a justification for the USER.md modifications.

Like a lobster shell, security has layers — review code before you run it.

Current version: v0.1.0
Download zip
latest · vk9779g28dg6b8e0ktdz4x739r1831fxk

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

SKILL.md

Re:Live - AI Clone Agent

1. Overview

Re:Live replicates a person as a digital clone: personality (chat logs → profile.md), voice (reference audio + CosyVoice3), and optionally appearance (video first frame). Output can be text / voice / video. Dialogue is persisted under the character directory and used in dual-track RAG. Execution: run python main.py <JSON_file_path> from this skill’s root directory. See README.md for details.

Environment (required): Always run python main.py ... inside a virtual environment created in this skill directory and install dependencies there, especially for voice / video synthesis. Typical setup (from workspace/skills/relive; venv creation and dependency installation per the README):

# Windows (PowerShell):
python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install -r requirements.txt
# Linux/macOS:
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

Quick start (when character already exists)

When the user says "talk to Martha" or uses /relive:Martha:

  1. Read personality: Read .openclaw/workspace/skills/relive/storage/default_Martha/profile.md as the basis for reply style.
  2. Single-turn dialogue (from skill root, e.g. .openclaw/workspace/skills/relive):
    • When history is needed: write get_context.json (with user_id, target_id, content = user’s message) → python main.py get_context.json, use the returned context to help generate.
    • Main Agent generates reply text in character.
    • Write synthesize.json (content = reply text, user_message = user message, output_mode = text/voice/video) → python main.py synthesize.json to persist and optionally output voice/video.
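The write-JSON-then-run pattern above can be sketched as a small helper (run_step and the message values here are illustrative, not part of the skill's own code; field names follow SKILL.md):

```python
import json
import subprocess

def run_step(payload: dict, json_name: str, skill_root: str = ".") -> None:
    """Write a request file, then invoke the skill's CLI entry point on it."""
    with open(f"{skill_root}/{json_name}", "w", encoding="utf-8") as f:
        json.dump(payload, f, ensure_ascii=False, indent=2)
    subprocess.run(["python", "main.py", json_name], cwd=skill_root, check=True)

# Single-turn payloads (field names from SKILL.md; values are placeholders):
get_context = {"type": "get_context", "user_id": "default",
               "target_id": "Martha", "content": "How was your day?"}
synthesize = {"type": "synthesize", "user_id": "default", "target_id": "Martha",
              "content": "<reply text generated in character>",
              "user_message": "How was your day?", "output_mode": "text"}
```

run_step(get_context, "get_context.json") would then mirror step 1; the main Agent fills in content for the synthesize payload before step 3.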

New character: See "2. Creating a new character" below; order is init → upload → export_md → personality analysis → write profile.md → add to USER.md.


2. Creating a new character

2.1 When to create

When the user expresses intent like "clone/replicate someone", "create a digital twin of a deceased relative", or "create an AI persona from chat logs and voice", start the create new character flow.

2.2 Materials and directories

  • Chat logs: The user must upload them (JSON supported, e.g. QQ/WhatsApp export). If only screenshots exist, extract the text from them yourself; this skill does not parse images. Confirm with the user which side of the conversation is the character to clone.
  • Reference audio: Put the files in storage/{user_id}_{target_id}/voice_profile/, and you must ask the user for the transcript of that audio; save it as corresponding.txt in the same directory.
  • Reference image (optional): Create reference_image_url.txt under the character directory containing a single line with a public URL; it is used for output_mode: "video".

Character root: storage/{user_id}_{target_id}/ (e.g. storage/default_Martha/).

2.3 Steps (in order)

Run from the skill root directory. If the user does not provide chat logs, you can ask for character traits and go straight to step 4.

Step 1: Initialize directories

{ "type": "init", "user_id": "default", "target_id": "Martha" }

Run: write the JSON above to init.json, then python main.py init.json (the same write-then-run pattern applies to every step below).

Step 2: Upload chat logs

{
  "type": "upload",
  "user_id": "default",
  "target_id": "Martha",
  "file_path": "/absolute/or/relative/path/filename.json",
  "file_type": "json",
  "self_name": "Jonas",
  "target_name": "Martha"
}

self_name / target_name must exactly match sender names in the chat log. Run: python main.py upload.json.
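Because an exact-name mismatch is the most common upload failure, a pre-flight check like the following can help (a sketch only; the "sender" key is an assumption, since real export formats vary by chat app):

```python
import json

def check_sender_names(chat_log_path: str, self_name: str, target_name: str) -> bool:
    """Return True if both configured names appear as senders in the export.

    Assumes the export is a JSON list of message objects with a "sender" key;
    adjust the key for your chat app's actual export format.
    """
    with open(chat_log_path, encoding="utf-8") as f:
        messages = json.load(f)
    senders = {m.get("sender") for m in messages}
    return self_name in senders and target_name in senders
```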

Step 3: Export to Markdown

{ "type": "export_md", "user_id": "default", "target_id": "Martha" }

Run: python main.py export_md.json to produce storage/default_Martha/chat.md.

Step 4: Personality analysis and profile.md

  • Read storage/{user_id}_{target_id}/chat.md (split if too large).
  • Use the LLM to analyze personality, catchphrases, how they address people, and style; write the result to profile.md in that directory (this step is done by the main Agent; there is no separate API).
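Splitting an oversized chat.md before analysis can be done with a simple character-budget chunker (a sketch; the 8000-character default is an arbitrary assumption, not a skill parameter):

```python
def split_markdown(text: str, max_chars: int = 8000) -> list[str]:
    """Split text on line boundaries so each chunk stays within max_chars."""
    chunks: list[str] = []
    current: list[str] = []
    size = 0
    for line in text.splitlines(keepends=True):
        # Flush the current chunk before it would exceed the budget.
        if size + len(line) > max_chars and current:
            chunks.append("".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append("".join(current))
    return chunks
```

Each chunk can then be analyzed separately and the findings merged into a single profile.md.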

2.4 After creation: update USER.md

Must add the character to the workspace root USER.md.
Create (or extend) a section like "Re:Live characters", and:

  • Add one bullet per character with a short English description, for example:
    - **Martha**: pharmacy / biology student, gentle and friendly, loves mystery movies
    - **Dabao**: colloquial, warm and reliable friend who enjoys cooking and traditional activities
  • Add a short instruction line that explains how to use this skill, e.g.:
    Re:Live digital-clone skill. When the user types /relive:<character_name>, read SKILL.md under .openclaw/workspace/skills/relive/ for acting rules, and read profile.md under .openclaw/workspace/skills/relive/storage/default_{target_name} as the persona definition for that character.

The main Agent reads this section from USER.md to know which Re:Live characters exist, how to describe them briefly, and how to route /relive commands to this skill and the corresponding profile.md.


3. After character exists: entering the character quickly

3.1 Commands and state

  • /relive:<target_id> (e.g. /relive:Martha): Enter relive mode; main Agent stores target_id in session state; subsequent messages to this skill use that id; replies are generated via relive and persisted under storage/{user_id}_{target_id}/.
  • /relive:end: Exit relive and clear current character state.

As long as a current relive character exists, the flow is: read profile → get_context when needed → LLM generate → synthesize → persist.

3.2 Always read profile.md when entering character

Before each conversation with that character, must read storage/{user_id}_{target_id}/profile.md and inject it into the main Agent’s system/context so reply style is consistent.

3.3 Single-turn flow (three steps)

For each user message, from skill root:

  1. get_context when needed: If history is needed, write get_context.json (content = user’s message), run python main.py get_context.json, use returned context to help generate.
  2. Generate reply: Main Agent generates reply text in character.
  3. synthesize to persist: Write synthesize.json (content = reply text, user_message = user message, output_mode = text/voice/video), run python main.py synthesize.json. Even for text-only replies, run synthesize if you want the turn persisted to the runtime log and the RAG index (output_mode can be text or omitted).

3.4 Output modes

  • text: Text only.
  • voice: Text + voice clone (requires voice_profile + corresponding.txt). Environment: install deps and models per README; recommended to use a virtual environment (see README § Installation and Voice Models).
  • video: Video generation API (Seedance, etc.); same synthesize entry with output_mode: "video". After video_task_id is returned, run the auto-generated video_wait.json to poll and download.
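For video output, the same synthesize entry is used; a request might look like this (field names from the parameter summary in section 4; values are placeholders):

```json
{
  "type": "synthesize",
  "user_id": "default",
  "target_id": "Martha",
  "content": "<reply text>",
  "user_message": "<user message>",
  "output_mode": "video",
  "reference_image_url": "<public image URL, optional>",
  "video_wait": true
}
```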

4. API parameters summary

| type | Description | Required parameters |
| --- | --- | --- |
| init | Initialize storage directories | user_id, target_id |
| upload | Upload chat logs | user_id, target_id, file_path, file_type, self_name, target_name |
| export_md | Export chat to Markdown | user_id, target_id |
| get_context | Get conversation context (incl. RAG) | user_id, target_id, content |
| synthesize | Generate reply and persist (text/voice/video by output_mode) | user_id, target_id, content, user_message |
| video_generation_wait | Poll video task and download to character cache | user_id, target_id, task_id |
  • upload: self_name / target_name must exactly match sender names in the chat log.
  • synthesize: output_mode optional text (default), voice, video; optional reference_image_url (if not passed, read from reference_image_url.txt under character directory); optional video_wait: true to poll in-call until video is done. After video success, video_wait.json is auto-generated; run python main.py video_wait.json to poll and download.
  • video_generation_wait: type in JSON must be video_generation_wait; conventional filename is video_wait.json. Optional poll_interval_seconds, poll_timeout_seconds.
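The poll-and-download loop that video_wait.json drives can be pictured as follows (a sketch only; check, its return convention, and the default values are assumptions, not the skill's actual API):

```python
import time

def wait_for_task(check, poll_interval_seconds: float = 5.0,
                  poll_timeout_seconds: float = 600.0):
    """Call check() until it returns a non-None result or the timeout elapses."""
    deadline = time.monotonic() + poll_timeout_seconds
    while time.monotonic() < deadline:
        result = check()
        if result is not None:
            return result  # e.g. a downloaded-file path
        time.sleep(poll_interval_seconds)
    raise TimeoutError("video task did not finish before poll_timeout_seconds")
```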

5. Notes

  • Privacy: User data is isolated per character and used only for the current clone task.
  • Ethics: Do not use for deception, forgery, or other misuse.

6. More reference (see README.md)

Note (OpenClaw exec timeout): When this skill is invoked via OpenClaw’s exec, long CosyVoice3 voice synthesis may be killed by the default execution timeout. If you observe the process exiting with code 1 shortly after logging synthesis text ... and no audio file is written, check npm/node_modules/openclaw/dist/auth-profiles-*.js and increase DEFAULT_EXEC_TIMEOUT_MS (for example, from 5e3 to 180e3) so that long-running voice synthesis can finish.

Files

40 total
