Multi-Modal Content Creator

Pass. Audited by ClawScan on May 11, 2026.

Overview

The skill’s behavior matches its stated purpose, but it requires an OpenAI API key and a WhatsApp-style auth token, sends customer content to external AI services, and can batch auto-reply to multiple messages when the user runs it.

This skill appears coherent and not malicious from the provided artifacts. Install it only if you are comfortable providing an OpenAI API key and WhatsApp token, sending customer content to OpenAI, and running a workflow that can automatically process and reply to multiple messages. For real customer use, add confirmation, limits, and credential hygiene.

Findings (4)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Batch auto-replies without per-message confirmation

What this means

Running the batch command may consume OpenAI credits and send generated replies for multiple customer messages without a per-message confirmation step.

Why it was flagged

When the user runs the batch workflow, the code processes every message returned by the WhatsApp client; `process_whatsapp_request` then generates an image and calls `wa_client.send_message(...)`.

Skill content
messages = wa_client.list_messages()

for msg in messages:
    # In production, track read/unread status
    process_whatsapp_request(msg)
Recommendation

Before connecting this to a real WhatsApp sender, consider adding a dry-run mode, recipient allowlist, rate limits, read/unread tracking, and per-message approval for outgoing media.
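The safeguards above can be sketched as a small guard around the batch loop. This is a hypothetical helper, not part of the skill: `run_batch`, the allowlist contents, and the message dict shape (a `"sender"` key) are all assumptions.

```python
# Hypothetical guard around the skill's batch loop. ALLOWED_SENDERS,
# MAX_REPLIES_PER_RUN, and the message fields are assumptions.
ALLOWED_SENDERS = {"+15551234567"}   # recipient allowlist (example number)
MAX_REPLIES_PER_RUN = 10             # crude rate limit per invocation
DRY_RUN = True                       # log instead of sending

def run_batch(wa_client, process_whatsapp_request):
    """Process messages with allowlist, rate limit, and dry-run checks."""
    handled = 0
    for msg in wa_client.list_messages():
        if msg.get("sender") not in ALLOWED_SENDERS:
            continue  # skip senders outside the allowlist
        if handled >= MAX_REPLIES_PER_RUN:
            break     # stop once the per-run budget is spent
        if DRY_RUN:
            print(f"[dry-run] would reply to {msg.get('sender')}")
        else:
            process_whatsapp_request(msg)
        handled += 1
    return handled
```

With `DRY_RUN` enabled, a run only logs which senders would receive replies, which makes it safe to verify the allowlist before sending any real media.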

Finding 2: WhatsApp token stored in a local config file

What this means

Anyone with access to that config file may be able to reuse the stored WhatsApp token, depending on the real service behind the token.

Why it was flagged

The WhatsApp auth token is persisted in a local config file so later commands can act through the user’s messaging account.

Skill content
self.config_path = config_path or os.path.expanduser("~/.wacli/config.json")
...
self.config["auth_token"] = auth_token
self.save_config()
Recommendation

Use a minimally scoped token if available, keep the config file private, and delete `~/.wacli/config.json` when the skill is no longer needed.
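One way to keep the config file private is to create it with owner-only permissions. This helper is a sketch, not the skill's code; the path and the `auth_token` key mirror the snippet above, but `save_config_private` itself is an assumption.

```python
import json
import os
import stat

def save_config_private(config, path=os.path.expanduser("~/.wacli/config.json")):
    """Write the config JSON with 0600 (owner read/write only) permissions."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    # Open with mode 0600 so the token is never world-readable, even briefly.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        json.dump(config, f)
    # Re-assert permissions in case the file pre-existed with a wider mode.
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
```

On POSIX systems this prevents other local users from reading the stored token; it does not protect against other processes running as the same user.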

Finding 3: Customer content sent to OpenAI

What this means

Customer voice notes, text prompts, and generated prompt context may leave the local environment and be processed by OpenAI.

Why it was flagged

Audio chunks are sent to OpenAI’s Whisper API for transcription; image prompts are similarly sent to OpenAI’s image API in the companion generator.

Skill content
transcript = client.audio.transcriptions.create(
    model="whisper-1",
    file=buffer,
    response_format="text",
)
Recommendation

Use this only with content you are allowed to send to OpenAI, and review OpenAI retention/privacy settings for your account.
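An explicit opt-in gate makes the data flow deliberate rather than automatic. The sketch below wraps the transcription call from the snippet above; the environment variable name and the `transcribe_if_allowed` wrapper are assumptions, not part of the skill.

```python
import os

def transcribe_if_allowed(client, buffer):
    """Send audio to OpenAI's Whisper API only if the operator has opted in."""
    if os.environ.get("ALLOW_OPENAI_UPLOAD") != "1":
        raise RuntimeError(
            "Refusing to send customer audio to OpenAI; "
            "set ALLOW_OPENAI_UPLOAD=1 to opt in."
        )
    # Same call as the skill's snippet: upload the audio for transcription.
    return client.audio.transcriptions.create(
        model="whisper-1",
        file=buffer,
        response_format="text",
    )
```

The gate fails closed: unless the operator sets the flag, no customer audio leaves the machine.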

Finding 4: Unpinned dependency versions

What this means

Dependency behavior could change over time, even though these packages are consistent with the skill’s purpose.

Why it was flagged

The Python dependencies are specified with lower bounds rather than exact pinned versions, so future installs may resolve to newer package versions.

Skill content
openai>=1.0.0
pydub>=0.25.1
requests>=2.31.0
python-dotenv>=1.0.0
Recommendation

For production use, pin exact versions or use a reviewed lockfile.
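A pinned requirements file might look like the following. The exact versions here are illustrative (they match the lower bounds above, which are real releases); verify current versions and known advisories before pinning, or generate a lockfile with a tool such as pip-compile.

```
# requirements.txt with exact pins (illustrative; verify before use)
openai==1.0.0
pydub==0.25.1
requests==2.31.0
python-dotenv==1.0.0
```

Exact pins make installs reproducible, at the cost of requiring deliberate, reviewed upgrades.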