Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

DreamAPI Skill

v1.0.0

25 AI-powered tools for video generation, talking avatars, image editing, voice cloning, and more — powered by DreamAPI. Describe what you want and the agent...

0 stars · 56 downloads (0 current · 0 all-time)
by dreamfaceapp (@dream-api)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for dream-api/dreamapi-skill.

Prompt preview: Install & Setup
Install the skill "DreamAPI Skill" (dream-api/dreamapi-skill) from ClawHub.
Skill page: https://clawhub.ai/dream-api/dreamapi-skill
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install dreamapi-skill

ClawHub CLI


npx clawhub@latest install dreamapi-skill
Security Scan
Capability signals
Crypto · Can make purchases · Requires OAuth token · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill provides CLI-style Python scripts that call DreamAPI endpoints for avatars, image/video generation, and voice. Requesting a DreamAPI API key and Python runtime is appropriate for this purpose. However, the registry metadata at the top of the report lists no required environment variables or binaries, while the included SKILL.md declares primaryEnv: DREAMAPI_API_KEY and requires python3 — a clear metadata mismatch that could affect automated guards or install tooling.
Instruction Scope
SKILL.md restricts the agent to use the bundled Python scripts and outlines exact polling and user-facing reply rules (e.g., do not mention env vars or internal details). Those instructions stay inside the stated purpose (call DreamAPI and upload local files when needed). There is a UX/policy contradiction: the auth docs explain env var and credential-file options, but the user-facing reply rules forbid mentioning environment variables or other terminal/auth details — that inconsistency may cause confusing behavior or hidden prompts from the agent.
Install Mechanism
No install spec in the registry (instruction-only), but the README/SKILL.md instructs running pip install -r scripts/requirements.txt. That's expected for a Python-based client and carries normal pip risks (third-party deps). There are no downloads from untrusted URLs in the manifest; endpoints referenced point to api.newportai.com (consistent with DreamAPI).
Credentials
The only credential required by the scripts is the DreamAPI API key (DREAMAPI_API_KEY) and the scripts store credentials in ~/.dreamapi/credentials.json. That single credential is proportionate to the described functionality. No unrelated cloud credentials or unrelated secret variables are requested.
Persistence & Privilege
The skill does not request always: true and does not modify other skills. It writes its own credentials file under ~/.dreamapi which is expected for a CLI client. No unusual system-wide privileges are requested.
What to consider before installing
This skill appears to be a full Python client for DreamAPI and is mostly coherent, but note these important points before installing:

  - Metadata mismatch: the package metadata claims no required env vars or binaries, but SKILL.md and the scripts require Python 3 and an API key (DREAMAPI_API_KEY). Verify the registry entry and be cautious if automated installers assume no secrets are needed.
  - Credential handling: the CLI will save your API key to ~/.dreamapi/credentials.json (mode 0600). If you prefer not to save credentials to your home directory, use an isolated environment or inspect/modify auth.py first.
  - Data uploads: many operations auto-upload local files via presigned URLs (images, audio, video). Any files you pass (photos, voice samples, videos) will be transferred to DreamAPI — avoid uploading sensitive personal data unless you trust the provider and understand their retention/privacy policy.
  - Dependency risk: the README asks you to pip install the scripts' requirements. Review scripts/requirements.txt and avoid installing into a shared system Python; use a virtualenv or container.
  - UX/agent rule contradiction: SKILL.md instructs the agent not to mention env vars or technical internals, but the authentication instructions reference the env var option. Expect potential confusion when the agent asks for credentials; verify any prompts directly.
  - Recommended actions before use: review the included shared/client.py and any network-related code to confirm all requests target api.newportai.com and nothing else; run the code in an isolated environment; and verify the license and source provenance (the owner ID is an opaque identifier and no homepage was provided). If you need stronger assurance, request the author/publisher identity or prefer an official integration from a known vendor.
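The endpoint check recommended above can be automated before installing. The sketch below scans the skill's Python files for hard-coded URLs and reports any host outside an allow list; the allow list and file layout are assumptions taken from the scan report, not part of the skill itself.

```python
import re
from pathlib import Path

# Host(s) the scan report says the skill should talk to (assumption).
ALLOWED_HOSTS = {"api.newportai.com"}
URL_RE = re.compile(r"https?://([A-Za-z0-9.-]+)")

def audit_urls(root: Path) -> list[tuple[Path, str]]:
    """Return (file, host) pairs for every URL whose host is not allow-listed."""
    suspicious = []
    for py in root.rglob("*.py"):
        for host in URL_RE.findall(py.read_text(errors="ignore")):
            if host not in ALLOWED_HOSTS:
                suspicious.append((py, host))
    return suspicious
```

Run it against the unpacked skill directory before the first `pip install`; an empty result means every literal URL points at the expected API host (dynamically built URLs still need manual review).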

Like a lobster shell, security has layers — review code before you run it.

latest: vk972zd9ajsn61wz4fkfg8g1rzd8517s3
56 downloads · 0 stars · 1 version
Updated 1w ago
v1.0.0 · MIT-0

DreamAPI Skill

25 AI tools powered by DreamAPI — from Newport AI.

Execution Rule

Always use the Python scripts in scripts/. Do NOT use curl or direct HTTP calls.

User-Facing Reply Rules

Every user-facing reply MUST follow ALL rules below.

  1. Keep replies short — give the result or next step directly.
  2. Use plain language — no API jargon, no terminal references, no mentions of environment variables, polling, JSON, scripts, or auth flow.
  3. Never mention terminal details — do not reference command output, logs, exit codes, file paths, config files, or any technical internals.
  4. Always send the login link directly — when login is needed, provide the DreamAPI Dashboard link: https://api.newportai.com/
  5. Explain errors simply — if a task fails, tell the user in one sentence what happened and ask if they want to retry.
  6. Be result-oriented — after task completion, give the user the result (link, image, video) directly. Do not describe intermediate steps.
  7. Give time estimates — after submitting a task, tell the user the estimated wait time from the table below.

Estimated Generation Time

Task Type                                   Estimated Time
Avatar (LipSync / DreamAvatar / Dreamact)   ~2–5 min
Image Generation (Flux)                     ~30s–1 min
Image Editing (Colorize / Enhance / etc.)   ~30s–1 min
Video Generation (Wan2.1)                   ~3–5 min
Video Editing (Swap Face / Matting)         ~2–5 min
Video Translate                             ~3–5 min
Voice Clone                                 ~30s–1 min
TTS (Common / Pro / Clone)                  ~10–30s
Remove Background                           ~10–30s

Required login message template

When authentication is needed, send the user this message (match user's language):

To get started, you need a DreamAPI API key.

1. Go to: https://api.newportai.com/
2. Sign in with Google or GitHub
3. Copy your API key from the Dashboard

Once you have your key, just tell me and I'll set it up for you.

Chinese template (中文模板):

开始之前,你需要一个 DreamAPI 的 API Key。

1. 打开 https://api.newportai.com/
2. 用 Google 或 GitHub 登录
3. 在 Dashboard 页面复制你的 API Key

拿到 Key 后告诉我,我帮你设置好。

Prerequisites

pip install -r {baseDir}/scripts/requirements.txt

Agent Workflow Rules

These rules apply to ALL generation modules.

  1. Always start with run — it submits the task and polls automatically until done.
  2. Do NOT ask the user to check the task status themselves. The agent polls until completion.
  3. Only use query when run has already timed out and you have a taskId to resume.
  4. If query also times out, increase --timeout and try again with the same taskId.
  5. Do not resubmit unless the task has actually failed.
Decision tree:
  → New request?           use `run`
  → run timed out?         use `query --task-id <id>`
  → query timed out?       use `query --task-id <id> --timeout 1200`
  → task status=fail?      resubmit with `run`

Task Status Codes:

Code   Status       Description
0-2    Processing   Task is queued or running
3      Success      Task completed
4      Failed       Task failed
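The decision tree and status codes above amount to a polling loop. A minimal sketch, where `query_status` is a hypothetical callable mapping a taskId to a status code (the real scripts wrap this in their run/query subcommands):

```python
import time

SUCCESS, FAILED = 3, 4  # codes 0-2 mean the task is still queued or running

def poll_task(query_status, task_id: str, timeout: float = 600, interval: float = 5) -> str:
    """Poll until the task leaves the processing states or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = query_status(task_id)
        if status == SUCCESS:
            return "success"
        if status == FAILED:
            return "failed"   # per the decision tree: resubmit with `run`
        time.sleep(interval)
    return "timeout"          # per the decision tree: resume with `query --task-id` and a larger --timeout
```

Note that a timeout is not a failure: the taskId stays valid, which is why the rules say to resume with query rather than resubmit.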

Modules

Module           Script                      Reference           Description
Auth             scripts/auth.py             auth.md             API key management — login, status, logout
Avatar           scripts/avatar.py           avatar.md           LipSync, LipSync 2.0, DreamAvatar 3.0 Fast, Dreamact
Image Gen        scripts/image_gen.py        image_gen.md        Flux Text-to-Image, Flux Image-to-Image
Image Edit       scripts/image_edit.py       image_edit.md       Colorize, Enhance, Outpainting, Inpainting, Swap Face, Remove BG
Video Gen        scripts/video_gen.py        video_gen.md        Text-to-Video, Image-to-Video, Head-Tail-to-Video (Wan2.1)
Video Edit       scripts/video_edit.py       video_edit.md       Swap Face Video, Video Matting, Composite
Video Translate  scripts/video_translate.py  video_translate.md  Video Translate 2.0 (en/zh/es)
Voice            scripts/voice.py            voice.md            Voice Clone, TTS Clone, TTS Common, TTS Pro, Voice List
User             scripts/user.py             user.md             Credit balance

Read individual reference docs for usage, options, and examples. Local files (image/audio/video) are auto-uploaded when passed as arguments.
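The auto-upload behavior typically means local paths are swapped for uploaded URLs before the API call. A sketch under that assumption; `get_presigned_url` and `put_file` are hypothetical hooks standing in for the skill's storage client, which may differ:

```python
import os

def maybe_upload(path_or_url: str, get_presigned_url, put_file) -> str:
    """Pass remote URLs through unchanged; upload local files and return their URL."""
    if path_or_url.startswith(("http://", "https://")):
        return path_or_url  # already remote, nothing to upload
    if not os.path.isfile(path_or_url):
        raise FileNotFoundError(path_or_url)
    upload_url, public_url = get_presigned_url(os.path.basename(path_or_url))
    with open(path_or_url, "rb") as f:
        put_file(upload_url, f.read())  # the file's bytes leave your machine here
    return public_url
```

The security note from the scan applies exactly at the `put_file` step: any local file passed as an argument is transferred to the provider.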

Tool Selection Guide

What does the user need?
│
├─ A talking face synced to audio?
│  ├─ Has a video + audio → avatar.py lipsync / lipsync2
│  └─ Has a photo + audio → avatar.py dreamavatar
│
├─ A character performing actions from a driving video?
│  → avatar.py dreamact
│
├─ Generate an image from text?
│  → image_gen.py text2image
│
├─ Transform an existing image?
│  → image_gen.py image2image
│
├─ Edit an image?
│  ├─ Colorize B&W photo → image_edit.py colorize
│  ├─ Enhance quality → image_edit.py enhance
│  ├─ Extend borders → image_edit.py outpainting
│  ├─ Fill/replace region → image_edit.py inpainting
│  ├─ Replace face → image_edit.py swap-face
│  └─ Remove background → image_edit.py remove-bg
│
├─ Generate a video from text?
│  → video_gen.py text2video
│
├─ Animate an image into video?
│  → video_gen.py image2video
│
├─ Create transition between two frames?
│  → video_gen.py head-tail
│
├─ Edit a video?
│  ├─ Replace face → video_edit.py swap-face
│  ├─ Remove background → video_edit.py matting
│  └─ Replace background → video_edit.py matting + composite
│
├─ Translate video speech?
│  → video_translate.py
│
├─ Text-to-speech?
│  ├─ With cloned voice → voice.py clone + tts-clone
│  ├─ Standard quality → voice.py tts-common
│  └─ Premium quality → voice.py tts-pro
│
├─ Browse available voices?
│  → voice.py list
│
├─ Check credit balance?
│  → user.py credit
│
└─ Outside capabilities?
   → Tell user this isn't supported yet

Quick Reference

User says...                                        Script & Command
"Make a talking face video with this audio"         avatar.py lipsync run
"Generate an avatar from this photo and audio"      avatar.py dreamavatar run
"Make this character do the dance in this video"    avatar.py dreamact run
"Generate an image of..."                           image_gen.py text2image run
"Modify this image to..."                           image_gen.py image2image run
"Colorize this old photo"                           image_edit.py colorize run
"Enhance this blurry image"                         image_edit.py enhance run
"Extend this image"                                 image_edit.py outpainting run
"Fill in this area of the image"                    image_edit.py inpainting run
"Swap the face in this photo"                       image_edit.py swap-face run
"Remove the background"                             image_edit.py remove-bg run
"Generate a video about..."                         video_gen.py text2video run
"Animate this image into a video"                   video_gen.py image2video run
"Create a transition between these two images"      video_gen.py head-tail run
"Swap the face in this video"                       video_edit.py swap-face run
"Remove the video background"                       video_edit.py matting run
"Replace the video background with..."              video_edit.py matting run + composite run
"Translate this video to Chinese"                   video_translate.py run
"Clone this voice"                                  voice.py clone run
"Read this text with the cloned voice"              voice.py tts-clone run
"Convert this text to speech"                       voice.py tts-common run or tts-pro run
"What voices are available?"                        voice.py list
"How many credits do I have?"                       user.py credit
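The quick-reference mapping can be modeled as a plain lookup that falls back to "not supported" for out-of-scope requests, matching the last branch of the tool-selection guide. The intent keys below are illustrative abbreviations, not the skill's real routing logic:

```python
# Abbreviated subset of the quick-reference table; keys are illustrative intents.
QUICK_REFERENCE = {
    "talking face from video + audio": ("avatar.py", "lipsync"),
    "avatar from photo + audio":       ("avatar.py", "dreamavatar"),
    "text to image":                   ("image_gen.py", "text2image"),
    "remove image background":         ("image_edit.py", "remove-bg"),
    "text to video":                   ("video_gen.py", "text2video"),
    "credit balance":                  ("user.py", "credit"),
}

def pick_tool(intent: str):
    """Return (script, subcommand), or None when the request is outside capabilities."""
    return QUICK_REFERENCE.get(intent)
```

Returning None for unknown intents mirrors the rule to tell the user a request isn't supported rather than improvising a capability.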

Agent Behavior Protocol

During Execution

  1. Local files auto-upload — scripts detect local paths and upload via DreamAPI Storage automatically
  2. Parallelize independent tasks — independent generation tasks can run concurrently via submit
  3. Keep consistency — when generating multiple related outputs, use consistent parameters
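Rule 2 above, running independent generation tasks concurrently via submit, can be sketched with a thread pool. `submit` here is a hypothetical callable that sends one request and returns its taskId:

```python
from concurrent.futures import ThreadPoolExecutor

def submit_all(submit, requests, max_workers: int = 4) -> list:
    """Submit independent tasks concurrently; results keep the input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(submit, requests))
```

Because `pool.map` preserves input order, the returned taskIds line up with the requests, which keeps later polling and result reporting straightforward.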

After Execution

Show the result URL first, then key metadata. Keep it clean.

Result template:

[type emoji] [task type] complete

Result: <OUTPUT_URL>
• [key metadata]

Not happy with the result? Let me know and I'll adjust.

Error Handling

See references/error_handling.md for error codes and recovery.

Capability Boundaries

Category          Tools                                                              Count
Avatar            LipSync, LipSync 2.0, DreamAvatar 3.0 Fast, Dreamact               4
Image Generation  Flux Text-to-Image, Flux Image-to-Image                            2
Image Editing     Colorize, Enhance, Outpainting, Inpainting, Swap Face, Remove BG   6
Video Generation  Text-to-Video, Image-to-Video, Head-Tail-to-Video                  3
Video Editing     Swap Face Video, Video Matting, Composite                          3
Video Translate   Video Translate 2.0                                                1
Voice             Voice Clone, TTS Clone, TTS Common, TTS Pro, Voice List            5
Total                                                                                24

Never promise capabilities that don't exist as modules.
