Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

a2e.ai Full Platform

v2.1.0

a2e.ai full API: Image Gen (Text2Image, NanoBanana, GPT Image, Flux 2), Video Gen (Image2Video with LoRA/FLF2V support, Video2Video, Kling 3.0, Wan 2.6, Sora...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for danielgrobelny/a2e-image.

Prompt preview: Install & Setup
Install the skill "a2e.ai Full Platform" (danielgrobelny/a2e-image) from ClawHub.
Skill page: https://clawhub.ai/danielgrobelny/a2e-image
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: A2E_KEY
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install a2e-image

ClawHub CLI


npx clawhub@latest install a2e-image
Security Scan
VirusTotal: Benign
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name, description, SKILL.md, reference docs, and script all align with an a2e.ai image/video/voice API helper; requesting A2E_KEY is appropriate. Minor mismatch: the provided CLI script uses common binaries (curl, jq, date), but the skill's metadata declares no required binaries. This inconsistency is not necessarily malicious, but it could cause runtime failures.
Instruction Scope
SKILL.md explicitly instructs the agent to run `source ~/.openclaw/workspace/.env` to load A2E_KEY. Sourcing the entire .env file can expose any other environment variables or secrets present there (not just the declared A2E_KEY). The script performs many network calls (POST/GET to video.a2e.ai endpoints) which is expected for this API, but the explicit sourcing of a full env file is a scope expansion and risk for secret exposure.
Install Mechanism
There is no install spec (instruction-only), so nothing is downloaded or written during install — good. The included scripts will be executed if used. The script relies on jq and curl; the skill metadata does not declare these required binaries, creating a functional mismatch but not a direct supply-chain red flag.
Credentials
The only declared credential is A2E_KEY, which is proportional to the stated functionality. However, the runtime instruction to source ~/.openclaw/workspace/.env may load additional environment variables without declaring them, potentially exposing unrelated secrets. No other unrelated credentials are listed in requires.env.
Persistence & Privilege
The skill does not request always:true and uses default invocation permissions. It does not attempt to modify other skills or system-wide config. Autonomous invocation is allowed by default (disable-model-invocation=false) — this is normal for skills, and here it is not combined with additional dangerous privileges.
What to consider before installing
This skill is coherent with its stated purpose (it wraps the a2e.ai API) and only needs an A2E API key, but take these precautions before installing or using it:

  • Review ~/.openclaw/workspace/.env: SKILL.md tells the agent to source that file. If it contains any other secrets (AWS keys, tokens, etc.), sourcing it will load them into the agent environment and could unintentionally expose them. Prefer to store only A2E_KEY in the file, or change the instructions to load A2E_KEY explicitly.
  • Verify local tool availability: the included script uses curl and jq. Ensure those are present and from trusted sources; the skill metadata does not list them as required binaries.
  • Understand privacy and legal implications: features like face swap, voice clone, and avatar generation upload images and audio to a remote service. Make sure you have rights and consent for any media you process.
  • Limit the A2E_KEY scope and rotate it if possible: give the skill an API key with minimal permissions and a short lifetime if the platform supports it, and rotate the key after initial testing.
  • Monitor network activity and API usage (coins/credits): because the agent can call the API, unexpected autonomous calls could consume credits if the key is compromised or the skill is misused.

If you want the skill but worry about the .env sourcing, ask the publisher (or modify the script locally) so the agent only reads the single A2E_KEY value instead of sourcing the whole file.
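The narrower alternative can be sketched in shell: read only the A2E_KEY line out of the .env file instead of sourcing everything in it. The path comes from the skill's own instructions; the simple KEY=value format and optional double quotes are assumptions.

```shell
# Export only A2E_KEY from the workspace .env instead of sourcing the whole file.
# Assumes simple KEY=value lines; strips optional surrounding double quotes.
env_file="$HOME/.openclaw/workspace/.env"
if [ -f "$env_file" ]; then
  A2E_KEY=$(grep -m1 '^A2E_KEY=' "$env_file" | cut -d= -f2- | tr -d '"')
  export A2E_KEY
fi
```

Any other secrets in the file (AWS keys, tokens, etc.) stay out of the agent's environment this way.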

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎨 Clawdis
Env: A2E_KEY
Primary env: A2E_KEY
Latest: vk97frq3te65pwkctt2ka2wyg9183yf1z
84 downloads · 0 stars · 5 versions
Updated 3w ago
v2.1.0
MIT-0

a2e.ai — Full Platform Skill

Complete API access to a2e.ai. A2E's own models are free for max users. Premium models (Wan 2.6, Kling 3.0, Seedance 1.5 Pro, etc., marked with 🔥) cost coins.

Auth

source ~/.openclaw/workspace/.env  # loads A2E_KEY

If the key has expired, Daniel generates a new one at https://video.a2e.ai → Account Settings → API Keys

Base URL: https://video.a2e.ai
Auth header: Authorization: Bearer $A2E_KEY
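With the key exported, every call is an HTTPS request carrying that Bearer header. A minimal pair of helpers, assuming only the base URL and auth header above; the helper names are mine, and concrete endpoint paths are placeholders, not documented API routes.

```shell
# Thin curl wrappers for the a2e.ai API; A2E_KEY must already be exported.
A2E_BASE="https://video.a2e.ai"

a2e_get() {  # usage: a2e_get /some/endpoint/path
  curl -sS -H "Authorization: Bearer $A2E_KEY" "$A2E_BASE$1"
}

a2e_post() {  # usage: a2e_post /some/endpoint/start '{"json":"body"}'
  curl -sS -X POST \
    -H "Authorization: Bearer $A2E_KEY" \
    -H "Content-Type: application/json" \
    -d "$2" "$A2E_BASE$1"
}
```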

Quick CLI — a2e.sh

{baseDir}/scripts/a2e.sh balance                            # Check coins
{baseDir}/scripts/a2e.sh generate "prompt" [WxH] [style]   # Text2Image (general|manga)
{baseDir}/scripts/a2e.sh nano "prompt" [image_url]          # NanoBanana/Gemini
{baseDir}/scripts/a2e.sh faceswap <face_url> <target_url>   # Face Swap
{baseDir}/scripts/a2e.sh headswap <head_url> <target_url>   # Head Swap
{baseDir}/scripts/a2e.sh img2vid <image_url> "prompt"       # Image to Video
{baseDir}/scripts/a2e.sh vid2vid <image_url> <video_url> "prompt"  # Video to Video
{baseDir}/scripts/a2e.sh tts "text" [voice_id]              # Text to Speech
{baseDir}/scripts/a2e.sh voices                             # List TTS voices
{baseDir}/scripts/a2e.sh voiceclone "name" <audio_url>      # Clone a voice
{baseDir}/scripts/a2e.sh avatar <anchor_id> <audio_url>     # AI Avatar video
{baseDir}/scripts/a2e.sh avatars                            # List available avatars
{baseDir}/scripts/a2e.sh talkphoto <image_url> <audio_url>  # Talking Photo
{baseDir}/scripts/a2e.sh talkvideo <video_url> <audio_url>  # Talking Video
{baseDir}/scripts/a2e.sh dub <video_url> <target_lang>      # AI Dubbing
{baseDir}/scripts/a2e.sh caption <video_url>                # Remove captions
{baseDir}/scripts/a2e.sh tryon <person> <mask> <cloth> <cloth_mask>  # Virtual Try-On
{baseDir}/scripts/a2e.sh createavatar "name" <url> [type]   # Create custom avatar (video|image)
{baseDir}/scripts/a2e.sh trainlipsync <avatar_id>           # Train Studio lip-sync on avatar
{baseDir}/scripts/a2e.sh cloneavatarvoice <avatar_id>       # Clone voice from avatar video
{baseDir}/scripts/a2e.sh myavatars                          # List custom avatars
{baseDir}/scripts/a2e.sh removeavatar <avatar_id>           # Delete custom avatar
{baseDir}/scripts/a2e.sh addface <face_image_url>           # Save face for reuse
{baseDir}/scripts/a2e.sh myfaces                            # List saved faces
{baseDir}/scripts/a2e.sh facepreview <face> <target>        # Quick face swap preview
{baseDir}/scripts/a2e.sh backgrounds                        # List avatar backgrounds
{baseDir}/scripts/a2e.sh addbg <image_url>                  # Add custom background
{baseDir}/scripts/a2e.sh upload <url>                       # Save URL to a2e storage
{baseDir}/scripts/a2e.sh presign                            # Get presigned upload URL
{baseDir}/scripts/a2e.sh languages                          # List available languages
{baseDir}/scripts/a2e.sh status <id> <engine>               # Check task status
{baseDir}/scripts/a2e.sh poll <id> <engine>                 # Poll until done
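A typical session chains one of the commands above with a status check. The sketch below assumes the script path after {baseDir} expansion and adds a DRY_RUN guard (my addition, not part of the skill) so you can preview calls without spending coins:

```shell
# Preview-or-run wrapper for a2e.sh; DRY_RUN=1 (the default here) only echoes.
A2E_SH="${A2E_SH:-./scripts/a2e.sh}"   # assumed expansion of {baseDir}

run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "would run: $*"
  else
    "$A2E_SH" "$@"
  fi
}

run generate "a lighthouse at dusk" 1024x1024 general
# → would run: generate a lighthouse at dusk 1024x1024 general
run balance
# → would run: balance
```

Set DRY_RUN=0 once the previewed commands look right.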

Available Engines (engine names for status/poll)

Engine           CLI name    Endpoint prefix
Text2Image       t2i         userText2image
NanoBanana       nano        userNanoBanana
Face Swap        faceswap    userFaceSwapTask
Head Swap        headswap    (TBD)
Image to Video   img2vid     userImage2Video
Video to Video   vid2vid     motionTransfer
Avatar Video     avatar      video
Talking Photo    talkphoto   talkingPhoto
Talking Video    talkvideo   talkingVideo
AI Dubbing       dub         userDubbing
Caption Removal  caption     userCaptionRemoval
Virtual Try-On   tryon       virtualTryOn
Voice Clone      voiceclone  userVoice

Common Async Pattern

All tasks follow the same flow:

  1. POST .../start → get _id + current_status: "initialized"
  2. Poll via GET .../{_id} or .../allRecords until completed
  3. Status: initialized → processing → completed | failed
  4. Result URLs (image_urls, result_url, video_url) expire after ~3 days
  5. On failure: check failed_message + failed_code
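The flow above can be sketched as a small poll loop. To keep the sketch self-contained, the fetcher is passed in as a command name; in real use it would wrap a curl GET on .../{_id} piped through jq to extract current_status (the exact endpoint paths and JSON shape are assumptions, the field name comes from the flow above):

```shell
# Poll a task until current_status reaches a terminal state.
# $1 is a command that prints the current status when invoked.
poll_until_done() {
  fetch_status="$1"
  while :; do
    status=$("$fetch_status")
    case "$status" in
      completed) echo completed; return 0 ;;
      failed)    echo failed;    return 1 ;;  # then check failed_message/failed_code
      *)         sleep "${A2E_POLL_INTERVAL:-5}" ;;  # initialized/processing: wait
    esac
  done
}
```

The interval is overridable via A2E_POLL_INTERVAL; remember that result URLs expire after roughly 3 days, so download outputs as soon as the task completes.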

Known Quirks

  • NanoBanana GET by _id returns 404 → use allRecords to find tasks
  • Text2Image input_images does NOT work → use NanoBanana for reference images
  • German text in Text2Image often has typos → NanoBanana (Gemini) handles it correctly
  • Result URLs are signed and expire after ~3 days → download/save promptly

Additional Models (Web UI, API endpoints unconfirmed)

These are available on the platform but may not have documented API endpoints yet:

  • Kling 3.0 — Cinema-grade video (text/image/motion modes, sound gen)
  • Wan 2.6 — Cinematic video with synced audio
  • Sora 2 Pro — OpenAI video model
  • Seedance 1.5 Pro — Multi-shot video
  • Veo 3.1 — Google video generation
  • GPT Image 1.5 — OpenAI image model
  • Flux 2 Pro — Black Forest Labs, speed-optimized
  • Grok Imagine — xAI image/video
  • Nano Banana 2 — Next-gen NanoBanana
  • Photobook — Multiple portraits from one photo (relevant for AI Photoshooting DAN-102)

Full API Reference

See {baseDir}/references/api-complete.md for all endpoints with request/response details.
