Clawatar
Review
Audited by ClawScan on May 10, 2026.
Overview
Clawatar’s avatar purpose is coherent, but review is warranted: the skill runs unreviewed remote npm code and documents an undeclared, mismatched local API-key configuration for voice/TTS.
Before installing, review the referenced GitHub repository and its npm dependencies. Use a dedicated ElevenLabs API key rather than reusing keys from other OpenClaw config entries, keep the WebSocket server bound to localhost only, and avoid sharing sensitive speech or text until you have reviewed the runtime code and provider settings.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The code that actually runs the avatar server is not included in this review and could change upstream.
The skill’s runnable functionality is fetched from a remote GitHub repository and installed with npm, while the reviewed artifact set contains no code files. This is disclosed and user-directed, but it leaves a supply-chain/provenance gap.
```shell
git clone https://github.com/Dongping-Chen/Clawatar.git ~/.openclaw/workspace/clawatar
cd ~/.openclaw/workspace/clawatar && npm install

# Start (Vite + WebSocket server)
npm run start
```
Review the GitHub repository and dependency lockfiles before running it, and prefer a pinned commit or packaged release.
Any process that can reach the local WebSocket endpoint may be able to make the avatar animate or speak while the server is running.
The skill exposes a local WebSocket control interface for avatar actions and speech. This is central to the avatar purpose, but no authentication or exposure guidance is documented.
Opens at http://localhost:3000, with WebSocket control at `ws://localhost:8765`; control messages are sent to that endpoint as JSON.
Keep the service bound to localhost, do not expose the port to a network, and stop the server when not in use.
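The control channel above accepts JSON, but the reviewed artifacts do not define the message schema. A minimal client-side sketch, assuming a hypothetical `{ action, text }` shape (the real schema lives in the unreviewed upstream code):

```javascript
// Build a control message for ws://localhost:8765. The { action, text }
// shape is an assumption for illustration; check the upstream repository
// for the actual schema before relying on it.
function buildSpeakMessage(text) {
  return JSON.stringify({ action: "speak", text });
}

const msg = buildSpeakMessage("hello");
console.log(msg); // → {"action":"speak","text":"hello"}

// Sending it requires the Clawatar server to be running locally, e.g.
// using the global WebSocket client available in recent Node releases:
//   const ws = new WebSocket("ws://127.0.0.1:8765");
//   ws.addEventListener("open", () => ws.send(msg));
```

Because the endpoint is unauthenticated, any local process can construct a message like this, which is why the finding recommends stopping the server when not in use.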
Starting the viewer may read an existing local API key from an unclear namespace, potentially incurring provider usage charges or exposing account credentials to unreviewed remote code.
The skill documents reading a provider API key from local OpenClaw configuration, including a `skills.entries.sag` namespace that does not match this skill’s slug, while the metadata declares no credential or config path.
TTS requires an ElevenLabs API key, read from the environment (`ELEVENLABS_API_KEY`) or from `~/.openclaw/openclaw.json` under `skills.entries.sag.apiKey`.
Use a dedicated ElevenLabs key for this skill, document and declare the credential path, and avoid reading keys from unrelated skill namespaces.
Spoken input or generated speech text may be processed by the avatar application and potentially by external AI/TTS services.
The skill describes microphone input flowing into AI response and TTS behavior. This is expected for voice chat, but the artifacts do not fully specify provider/data boundaries for the AI response path.
**Voice chat**: Mic input → AI response → TTS lip sync
Avoid speaking sensitive information unless you have reviewed the runtime code and provider settings.
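Until the runtime code is reviewed, a client-side guard before any text reaches the AI/TTS path is a cheap precaution. The patterns below are illustrative, not exhaustive, and are not part of the skill itself:

```javascript
// Hypothetical pre-TTS filter: refuse to forward text that looks like it
// contains secrets. The real data path is in the unreviewed upstream code,
// so this is a sketch of the precaution, not a description of the skill.
const SENSITIVE_PATTERNS = [
  /api[_-]?key/i,
  /password/i,
  /\b\d{13,16}\b/, // card-like digit runs
];

function safeForTts(text) {
  return !SENSITIVE_PATTERNS.some((re) => re.test(text));
}

console.log(safeForTts("hello avatar"));           // → true
console.log(safeForTts("my password is hunter2")); // → false
```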
