Install

```
openclaw skills install relive
```

AI digital twin cloning skill. Re:live — chat again with someone you love. Input chat logs, images, audio, and other materials to replicate a person's personality, voice, and appearance. Used to create digital clones of deceased loved ones or important people.

Re:Live replicates a person as a digital clone: personality (chat logs → profile.md), voice (reference audio + CosyVoice3), and optionally appearance (video first frame). Output can be text / voice / video. Dialogue is persisted under the character directory and used in dual-track RAG. Execution: run `python main.py <JSON_file_path>` from this skill's root directory. See README.md for details.
Environment (required): Always run `python main.py ...` inside a virtual environment created in this skill directory, and install dependencies there, especially for voice / video synthesis. Typical setup (from workspace/skills/relive):

```
# Windows (PowerShell):
.\.venv\Scripts\Activate.ps1
# Linux/macOS:
source .venv/bin/activate
```
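A fuller one-time setup might look like the sketch below. Note that the `requirements.txt` filename is an assumption (not stated in this document); install whatever dependencies README.md actually lists.

```shell
# One-time setup sketch, run from workspace/skills/relive.
# requirements.txt is an assumption -- install whatever dependencies
# README.md actually lists for voice / video synthesis.
python3 -m venv .venv
# Windows (PowerShell):  .\.venv\Scripts\Activate.ps1
. .venv/bin/activate    # Linux/macOS
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
```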
When the user says "talk to Martha" or uses /relive:Martha:

1. Read .openclaw/workspace/skills/relive/storage/default_Martha/profile.md as the basis for reply style.
2. From the skill root (.openclaw/workspace/skills/relive):
   - Write get_context.json (with user_id, target_id, content = user's message) → python main.py get_context.json; use the returned context to help generate.
   - Write synthesize.json (content = reply text, user_message = user message, output_mode = text/voice/video) → python main.py synthesize.json to persist and optionally output voice/video.
3. New character: see "2. Creating a new character" below; order is init → upload → export_md → personality analysis → write profile.md → add to USER.md.
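Concretely, a single text turn might use JSON files like these (the message and reply values are illustrative; the field names come from the command table below):

```json
{
  "type": "get_context",
  "user_id": "default",
  "target_id": "Martha",
  "content": "Do you still remember our trip last summer?"
}
```

```json
{
  "type": "synthesize",
  "user_id": "default",
  "target_id": "Martha",
  "content": "Of course I remember -- the rain, the caves, all of it.",
  "user_message": "Do you still remember our trip last summer?",
  "output_mode": "text"
}
```

Run each with `python main.py get_context.json` and `python main.py synthesize.json` from the skill root.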
When the user expresses intent like "clone/replicate someone", "create a digital twin of a deceased relative", or "create an AI persona from chat logs and voice", start the new-character creation flow.
Preparation:

- Voice: place the reference audio under storage/{user_id}_{target_id}/voice_profile/; you must ask the user for the transcript of that audio and save it as corresponding.txt in the same directory.
- Appearance (for output_mode: "video"): create reference_image_url.txt under the character directory containing one line, a public URL.
- Character root: storage/{user_id}_{target_id}/ (e.g. storage/default_Martha/).

Run from the skill root directory. If the user does not provide chat logs, you can ask for character traits and go straight to step 4.
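Putting the pieces together, a character directory might look like this (layout inferred from this document; the audio filename is illustrative):

```
storage/default_Martha/
├── profile.md                # persona definition (written in step 4)
├── chat.md                   # Markdown export of chat logs (step 3)
├── voice_profile/
│   ├── reference.wav         # reference audio (illustrative name)
│   └── corresponding.txt     # transcript of the reference audio
├── reference_image_url.txt   # one line: public image URL (for video)
└── cache/                    # downloaded video results
```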
Step 1: Initialize directories

```json
{ "type": "init", "user_id": "default", "target_id": "Martha" }
```

Write the JSON to init.json, then run: python main.py init.json (same pattern below).
Step 2: Upload chat logs

```json
{
  "type": "upload",
  "user_id": "default",
  "target_id": "Martha",
  "file_path": "/absolute/or/relative/path/filename.json",
  "file_type": "json",
  "self_name": "Jonas",
  "target_name": "Martha"
}
```

self_name / target_name must exactly match the sender names in the chat log. Run: python main.py upload.json.
Step 3: Export to Markdown

```json
{ "type": "export_md", "user_id": "default", "target_id": "Martha" }
```

Run: python main.py export_md.json to produce storage/default_Martha/chat.md.
Step 4: Personality analysis and profile.md

Read storage/{user_id}_{target_id}/chat.md (split it if too large), then write profile.md in that directory (this step is done by the main Agent; there is no separate API). You must also add the character to the workspace root USER.md.

Create (or extend) a section like "Re:Live characters", and:

- **Martha**: pharmacy / biology student, gentle and friendly, loves mystery movies
- **Dabao**: colloquial, warm and reliable friend who enjoys cooking and traditional activities

Also describe the skill there, e.g.: Re:Live digital-clone skill. When the user types /relive:<character_name>, read SKILL.md under .openclaw/workspace/skills/relive/ for acting rules, and read profile.md under .openclaw/workspace/skills/relive/storage/default_{target_name}/ as the persona definition for that character.

The main Agent reads this section from USER.md to know which Re:Live characters exist, how to describe them briefly, and how to route /relive commands to this skill and the corresponding profile.md.
/relive:<target_id> (e.g. /relive:Martha): Enter relive mode; the main Agent stores target_id in session state; subsequent messages to this skill use that id; replies are generated via relive and persisted under storage/{user_id}_{target_id}/.

/relive:end: Exit relive mode and clear the current character state.

As long as a current relive character exists, the flow is: read profile → get_context when needed → LLM generate → synthesize → persist.
Before each conversation with that character, you must read storage/{user_id}_{target_id}/profile.md and inject it into the main Agent's system/context so the reply style stays consistent.
For each user message, from skill root:
1. Write get_context.json (content = user's message), run python main.py get_context.json, and use the returned context to help generate.
2. Write synthesize.json (content = reply text, user_message = user message, output_mode = text/voice/video), run python main.py synthesize.json. Even for text-only replies, run synthesize if you want the turn persisted in runtime and RAG (output_mode can be text or omitted).
3. For output_mode: "video": after video_task_id is returned, run the auto-generated video_wait.json to poll and download.

Command reference:

| type | Description | Required parameters |
|---|---|---|
| init | Initialize storage directories | user_id, target_id |
| upload | Upload chat logs | user_id, target_id, file_path, file_type, self_name, target_name |
| export_md | Export chat to Markdown | user_id, target_id |
| get_context | Get conversation context (incl. RAG) | user_id, target_id, content |
| synthesize | Generate reply and persist (text/voice/video by output_mode) | user_id, target_id, content, user_message |
| video_generation_wait | Poll video task and download to character cache/ | user_id, target_id, task_id |
Notes:

- self_name / target_name must exactly match the sender names in the chat log.
- output_mode is optional: text (default), voice, or video.
- Optional reference_image_url (if not passed, it is read from reference_image_url.txt under the character directory).
- Optional video_wait: true to poll in-call until the video is done. After the video call succeeds, video_wait.json is auto-generated; run python main.py video_wait.json to poll and download.
- For polling, type in the JSON must be video_generation_wait; the conventional filename is video_wait.json. Optional: poll_interval_seconds, poll_timeout_seconds.

Note (OpenClaw exec timeout): When this skill is invoked via OpenClaw's exec, long CosyVoice3 voice synthesis may be killed by the default execution timeout. If you observe the process exiting with code 1 shortly after logging `synthesis text ...` and no audio file is written, check npm/node_modules/openclaw/dist/auth-profiles-*.js and increase DEFAULT_EXEC_TIMEOUT_MS (for example, from 5e3 to 180e3) so that long-running voice synthesis can finish.
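As a sketch, a video turn could chain two files like these (all values are illustrative; video_wait.json is normally auto-generated after the synthesize call returns a video_task_id, and the poll intervals shown are assumptions):

```json
{
  "type": "synthesize",
  "user_id": "default",
  "target_id": "Martha",
  "content": "I told you the knot would hold.",
  "user_message": "Did the knot hold?",
  "output_mode": "video",
  "reference_image_url": "https://example.com/martha.jpg"
}
```

```json
{
  "type": "video_generation_wait",
  "user_id": "default",
  "target_id": "Martha",
  "task_id": "<video_task_id returned by the synthesize call>",
  "poll_interval_seconds": 10,
  "poll_timeout_seconds": 600
}
```

Run python main.py synthesize.json, then python main.py video_wait.json; the finished video lands in the character's cache/ directory.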