Multi-Modal Content Creator
Pass. Audited by ClawScan on May 11, 2026.
Overview
The skill’s behavior matches its stated purpose, but it uses OpenAI and WhatsApp-style credentials, sends customer content to external AI services, and can batch auto-reply when the user runs it.
Based on the provided artifacts, this skill appears coherent and not malicious. Install it only if you are comfortable providing an OpenAI API key and a WhatsApp token, sending customer content to OpenAI, and running a workflow that can automatically process and reply to multiple messages. For real customer use, add confirmation steps, rate limits, and credential hygiene.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Running the batch command may consume OpenAI credits and send generated replies for multiple customer messages without a per-message confirmation step.
When the user runs the batch workflow, the code processes every message returned by the WhatsApp client; `process_whatsapp_request` then generates an image and calls `wa_client.send_message(...)`.
```python
messages = wa_client.list_messages()
for msg in messages:
    # In production, track read/unread status
    process_whatsapp_request(msg)
```

Before connecting this to a real WhatsApp sender, consider adding a dry-run mode, a recipient allowlist, rate limits, read/unread tracking, and per-message approval for outgoing media.
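Those mitigations can be sketched as a guarded batch loop. `DRY_RUN`, `MAX_REPLIES`, `ALLOWED_SENDERS`, and the `sender` message field are hypothetical names, and `wa_client` / `process_whatsapp_request` are assumed to behave as described in the finding:

```python
# Sketch of a guarded batch loop, under the assumptions above.
DRY_RUN = True                      # hypothetical flag: log instead of sending
MAX_REPLIES = 5                     # hypothetical per-run rate limit
ALLOWED_SENDERS = {"+15550001111"}  # hypothetical recipient allowlist

def safe_batch(wa_client, process_whatsapp_request):
    sent = 0
    for msg in wa_client.list_messages():
        if sent >= MAX_REPLIES:
            break  # stop once the per-run budget is spent
        if msg.get("sender") not in ALLOWED_SENDERS:
            continue  # skip senders outside the allowlist
        if DRY_RUN:
            print(f"[dry-run] would reply to {msg['sender']}")
            continue  # nothing is generated or sent in dry-run mode
        process_whatsapp_request(msg)
        sent += 1
    return sent
```

A per-message interactive prompt (e.g. `input()`) could replace the `DRY_RUN` branch once the allowlist and limits are trusted.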
Anyone with access to that config file may be able to reuse the stored WhatsApp token, depending on the real service behind the token.
The WhatsApp auth token is persisted in a local config file so later commands can act through the user’s messaging account.
```python
self.config_path = config_path or os.path.expanduser("~/.wacli/config.json")
...
self.config["auth_token"] = auth_token
self.save_config()
```

Use a minimally scoped token if available, keep the config file private, and delete `~/.wacli/config.json` when the skill is no longer needed.
Customer voice notes, text prompts, and generated prompt context may leave the local environment and be processed by OpenAI.
Audio chunks are sent to OpenAI’s Whisper API for transcription; image prompts are similarly sent to OpenAI’s image API in the companion generator.
```python
transcript = client.audio.transcriptions.create(
    model="whisper-1",
    file=buffer,
    response_format="text",
)
```

Use this only with content you are allowed to send to OpenAI, and review OpenAI retention/privacy settings for your account.
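If text prompts must cross that boundary, a minimal pre-send redaction pass is one partial mitigation. The regex and `[REDACTED]` placeholder below are illustrative assumptions, not part of the skill:

```python
import re

# Illustrative pattern for phone-number-like strings; real PII redaction
# needs a far more careful approach than this sketch.
PHONE_RE = re.compile(r"\+?\d[\d\s\-]{7,}\d")

def redact(text):
    """Mask phone-number-like substrings before text leaves the machine."""
    return PHONE_RE.sub("[REDACTED]", text)
```

Audio is much harder to scrub than text, so for voice notes the realistic controls are customer consent and the OpenAI account's retention settings rather than local filtering.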
Dependency behavior could change over time, even though these packages are consistent with the skill’s purpose.
The Python dependencies are specified with lower bounds rather than exact pinned versions, so future installs may resolve to newer package versions.
```text
openai>=1.0.0
pydub>=0.25.1
requests>=2.31.0
python-dotenv>=1.0.0
```
For production use, pin exact versions or use a reviewed lockfile.
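As one sketch, the same list pinned to the exact lower-bound versions already named above; pins for a real deployment should come from a tested environment, e.g. the output of `pip freeze`:

```text
openai==1.0.0
pydub==0.25.1
requests==2.31.0
python-dotenv==1.0.0
```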
