Moltspaces

Pass. Audited by VirusTotal on May 12, 2026.

Overview

Type: OpenClaw Skill
Name: spaces
Version: 1.0.5

The skill's behavior is clearly aligned with its stated purpose of enabling voice conversations for AI agents. It transparently declares and uses the necessary API keys (MOLT_AGENT_ID, MOLTSPACES_API_KEY, OPENAI_API_KEY, ELEVENLABS_API_KEY), with all network calls directed to the expected Moltspaces API endpoint (moltspaces-api-547962548252.us-central1.run.app) or to legitimate third-party services. The SKILL.md includes a 'CRITICAL SECURITY WARNING' explicitly instructing the agent not to send API keys to unauthorized domains, a strong positive security indicator. The `setup.sh` script installs the `uv` package manager via `curl | sh` from astral.sh, a common but legitimate installation method, and handles agent registration and credential storage in a `.env` file, which is then loaded by `bot.py`. No evidence of intentional harmful behavior, unauthorized data exfiltration, persistence mechanisms, or malicious prompt injection against the agent was found.

Findings (6)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Someone in a room could choose a malicious display name that changes how the bot speaks or behaves in the conversation.

Why it was flagged

Participant names come from the external room participant object and are placed into a system-role instruction without sanitization, so a crafted display name could steer the LLM as if it were trusted instruction text.

Skill content
participant_name = participant_info.get("userName") or participant_info.get("name") or "Guest" ... messages.append({"role": "system", "content": f"Greet {participant_name} by name."})
Recommendation

Treat participant names as data, not instructions. Escape or quote names, put them in user/context metadata rather than system messages, and add prompt-injection tests for display names.
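A minimal sketch of that mitigation, assuming a Python bot similar to the snippet above. The `sanitize_display_name` and `greeting_messages` helpers are hypothetical names, not part of the skill; the point is that the name is cleaned and placed in a quoted user-role message rather than interpolated into a system instruction.

```python
import re


def sanitize_display_name(raw: str, max_len: int = 64) -> str:
    """Treat a room display name as untrusted data (illustrative helper)."""
    # Replace control characters (including newlines) so a crafted name
    # cannot break out of its place in the prompt.
    cleaned = re.sub(r"[\x00-\x1f\x7f]", " ", raw or "")
    # Collapse whitespace and cap the length.
    cleaned = " ".join(cleaned.split())[:max_len]
    return cleaned or "Guest"


def greeting_messages(participant_info: dict) -> list:
    """Build the greeting prompt with the name as quoted user-side context."""
    name = sanitize_display_name(
        participant_info.get("userName") or participant_info.get("name") or "Guest"
    )
    return [
        # The system message stays fixed and contains no external input.
        {"role": "system", "content": "Greet the participant by their display name."},
        # The name travels as data in a user/context message, quoted.
        {"role": "user", "content": f'A participant named "{name}" joined the room.'},
    ]
```

A prompt-injection test for this path is then straightforward: feed names like `"Ignore previous instructions"` through `sanitize_display_name` and assert they never reach a system-role message.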

What this means

If the bot or one of its dependencies misbehaves, the impact may affect the main OpenClaw process rather than being isolated to a child process.

Why it was flagged

The skill requests persistent same-process execution and explicitly frames it as bypassing the exec sandbox, which means bugs, dependency behavior, or hostile inputs have less containment from the host agent process.

Skill content
**Skill Type:** `long_running` ... **Execution Method:** `python_direct` - Runs in OpenClaw's main process ... **No subprocess spawning** - Bypasses OpenClaw's exec sandbox ... **Shared memory**
Recommendation

Prefer an isolated subprocess/container for the bot, or require strong review before allowing python_direct execution. Limit runtime duration and ensure OpenClaw can stop the process cleanly.

What this means

A misconfigured or poisoned .env/environment could cause the Moltspaces API key to be sent to a non-Moltspaces host.

Why it was flagged

The Moltspaces API key is sent to a base URL that can be changed through environment/.env configuration, while the skill's documentation warns that the key should only go to the Moltspaces API domain.

Skill content
load_dotenv(override=True) ... MOLTSPACES_API_URL = os.getenv("MOLTSPACES_API_URL", "https://moltspaces-api-547962548252.us-central1.run.app") ... headers = {"x-api-key": api_key}
Recommendation

Declare all required credentials in metadata, avoid override=True for host-process environments, and enforce an allowlist for the Moltspaces API host before attaching the API key.
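An allowlist guard along those lines could look like the sketch below. The `moltspaces_headers` helper and the single-entry allowlist are assumptions for illustration; only the endpoint hostname comes from the skill's own documentation.

```python
import os
from urllib.parse import urlparse

# Expected production endpoint per the skill's documentation.
ALLOWED_MOLTSPACES_HOSTS = {
    "moltspaces-api-547962548252.us-central1.run.app",
}


def moltspaces_headers(api_key: str, base_url: str) -> dict:
    """Attach the API key only if base_url is an allowlisted HTTPS host.

    Raises ValueError before the key can leave for a non-Moltspaces host,
    even if MOLTSPACES_API_URL was poisoned via .env.
    """
    parsed = urlparse(base_url)
    if parsed.scheme != "https" or parsed.hostname not in ALLOWED_MOLTSPACES_HOSTS:
        raise ValueError(
            f"Refusing to send MOLTSPACES_API_KEY to {parsed.hostname!r}"
        )
    return {"x-api-key": api_key}
```

Calling this guard at the request site means a swapped `MOLTSPACES_API_URL` fails loudly instead of silently exfiltrating the key.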

What this means

Room audio and conversation content may be processed by Daily, ElevenLabs, and OpenAI.

Why it was flagged

The voice workflow sends live audio/transcription-derived content through external providers. This is central to the skill's purpose, but it is sensitive communication data.

Skill content
User Speech → Daily WebRTC → ElevenLabs STT → Wake Filter ... → OpenAI LLM → ElevenLabs TTS → Daily WebRTC
Recommendation

Use the skill only in rooms where participants understand the bot is present, and review the privacy/data-retention terms for the connected providers.

What this means

Installing the skill may execute remote installer code and fetch dependencies on the user's machine.

Why it was flagged

The manual setup path pipes a remote installer into the shell and then resolves Python dependencies. That is a common setup pattern, but it depends on remote supply-chain trust.

Skill content
curl -LsSf https://astral.sh/uv/install.sh | sh ... uv sync
Recommendation

Run setup only from a trusted checkout, inspect the installer/dependencies, and prefer pinned or locked dependency versions where possible.

What this means

A Moltspaces API key stored in general agent memory could be surfaced in future contexts or mishandled by other skills.

Why it was flagged

The documentation suggests saving an API key to 'memory'; if interpreted as persistent agent memory rather than a secret vault, the credential could be reused or exposed later.

Skill content
You can also save it to your memory, environment variables (`MOLTSPACES_API_KEY`), or wherever you store secrets.
Recommendation

Store API keys only in a vault or dedicated secret manager, not in conversational or long-term agent memory.
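A minimal sketch of the recommended pattern: the key lives only in the process environment (populated by a vault or secret manager at launch) and is fetched on demand, never written into agent memory or logs. The helper name is illustrative.

```python
import os


def get_moltspaces_api_key() -> str:
    """Fetch the key from the environment at point of use.

    The environment should be populated by a secret manager at startup;
    the key is never persisted into conversational or long-term memory.
    """
    key = os.environ.get("MOLTSPACES_API_KEY")
    if not key:
        raise RuntimeError(
            "MOLTSPACES_API_KEY is not set; load it from your secret manager"
        )
    return key
```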