Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

AI role play character image generation

v1.0.1

Character-consistent AI image generation for agents. Same person, any outfit, any scene, every time. Use when: (1) Your agent needs to generate character ima...

2 stars · 275 downloads · 0 current · 0 all-time
by wujia@whbzju

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for whbzju/aurashot-character-skill.

Prompt Preview: Install & Setup
Install the skill "AI role play character image generation" (whbzju/aurashot-character-skill) from ClawHub.
Skill page: https://clawhub.ai/whbzju/aurashot-character-skill
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: AURASHOT_API_KEY, AURASHOT_STUDIO_KEY
Required binaries: python3
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install aurashot-character-skill

ClawHub CLI


npx clawhub@latest install aurashot-character-skill
Security Scan
VirusTotal
Suspicious
OpenClaw
Benign
high confidence
Purpose & Capability
Name and description (identity-preserving image generation) match the included CLI client and the declared requirements (python3 and AuraShot API keys). The included script and API reference call only AuraShot endpoints. One small inconsistency: SKILL.md claims the AuraShot backend is "stateless and stores nothing," but the API responses and public output URLs imply server-side storage of generated assets — this is an accuracy/marketing mismatch, not a credential mismatch.
Instruction Scope
SKILL.md instructs the agent to run the bundled Python CLI and to upload face images (local path or public URL). The script will search upward from CWD and in the home directory for a .aurashot.env file and will upload any local file path you pass to it. That behavior is expected for the task but means a mis-specified path or overly-broad agent action could result in arbitrary local files being uploaded. The script also includes an SSL fallback that creates an unverified SSL context when downloading images, which weakens transport security in some error cases.
Install Mechanism
No installation downloads or external installers are declared — the skill is instruction-plus-bundled-Python-script. There are no opaque remote archives or URL-shortened installers; the code uses only stdlib HTTP and file I/O and will run with python3 on the host.
Credentials
Only AURASHOT_API_KEY and AURASHOT_STUDIO_KEY are required (primaryEnv declared as AURASHOT_API_KEY), which is proportionate for a remote image-generation service. The script optionally uses AURASHOT_BASE_URL and will read a local .aurashot.env file for those keys. No other unrelated secrets are requested.
Persistence & Privilege
always:false (no forced inclusion). The skill runs as a normal user-invocable/autonomously-invocable skill. It writes downloaded outputs to user-specified directories and reads/writes a local .aurashot.env when the user follows the setup instructions — it does not attempt to modify other skills or global agent config.
Assessment
This skill appears coherent with its purpose, but before installing consider: (1) Privacy — you will be uploading face photos to https://www.aurashot.art; confirm the service's retention and privacy policy and avoid uploading photos of people without consent. (2) Limit scope — the CLI will upload any local path you pass it, so avoid passing paths you don't intend to share; use a disposable/test API key or account for initial testing. (3) Secrets handling — the skill suggests storing API keys in .aurashot.env (searched from CWD upward); do not commit that file to version control and prefer per-project config locations you control. (4) Transport security — the client includes a fallback to an unverified SSL context when downloading images; network errors may cause less-secure downloads. If this is a concern, review/modify the script to remove the fallback. (5) Content risks — SKILL.md explicitly allows all content types; ensure your usage complies with laws, platform policies, and consent requirements. If you want higher assurance, review the scripts/aurashot.py source yourself or run it in an isolated environment before allowing autonomous agent invocation.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎭 Clawdis
Bins: python3
Env: AURASHOT_API_KEY, AURASHOT_STUDIO_KEY
Primary env: AURASHOT_API_KEY
Latest: vk97bf3j1s56b9g38cpzw7023xx8452h9
275 downloads · 2 stars · 11 versions
Updated 23h ago
v1.0.1
MIT-0

AuraShot Character Image Skill

AuraShot gives your AI agent character-consistent image generation — the same person, any outfit, any scene, any pose, every time. Upload one face photo, and the engine preserves that person's facial features, skin tone, hairstyle, and overall identity across every generated image. No matter how many times you change the outfit or scene, the character always looks like the same person.

This is the core problem AuraShot solves: traditional image generation creates a different-looking person every time. AuraShot locks the identity from a single reference photo and maintains it across unlimited generations.

What Your Agent Can Do

  • ID Photo — Upload a face photo, get a standardized 4-in-1 identity baseline (front, left 45°, right 45°, smiling). This anchors the character's identity for all future generations.
  • Character Generate — Describe any scene in natural language. The engine combines the face reference with your description to produce a new image while keeping the character's face identical: outfit changes, scene swaps, cosplay, fashion shoots, anything. The person in every output is recognizably the same person from the original photo.
  • Image Edit — Take any existing image and modify it with text instructions: change pose, swap background, adjust expression, restyle clothing — all while preserving the character's identity.
  • Reference-Driven — Optionally pass clothing reference images or scene reference images. The engine will match the outfit or environment while preserving the character's face.
  • Real & Virtual — Works with real-person photos and anime/virtual/2D characters. Virtual characters use a custom style prompt for the ID photo.
  • All Content Types — No content restrictions on the generation engine. Bikini, swimwear, fantasy armor, and other creative content are generated directly without workarounds.

All three capabilities (id-photo, generate, edit) are exposed as simple CLI subcommands. The agent calls python3 scripts/aurashot.py <subcommand> with natural language parameters — no raw API calls needed.

You are a character image design assistant. Users interact with you in natural language, and you help them create, manage, and roleplay AI characters. AuraShot is your image generation backend — a stateless API that stores nothing. All character state, assets, and history are maintained locally by you on the user's machine.

Authentication

The CLI script looks for an API key in this order:

  1. Environment variable AURASHOT_API_KEY or AURASHOT_STUDIO_KEY
  2. Local config file .aurashot.env (searched from current directory upward, then ~/)
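This lookup order can be sketched as a small helper. `resolve_api_key` is a hypothetical name for illustration, not a function in the bundled script:

```python
import os
from pathlib import Path
from typing import Optional

def resolve_api_key(start: Optional[Path] = None) -> Optional[str]:
    """Resolve the AuraShot API key using the documented lookup order."""
    # 1. Environment variables take precedence.
    for var in ("AURASHOT_API_KEY", "AURASHOT_STUDIO_KEY"):
        if os.environ.get(var):
            return os.environ[var]
    # 2. Search for .aurashot.env from the start directory upward, then in ~/.
    start = (start or Path.cwd()).resolve()
    for directory in [start, *start.parents, Path.home()]:
        env_file = directory / ".aurashot.env"
        if not env_file.is_file():
            continue
        for line in env_file.read_text().splitlines():
            key, _, value = line.partition("=")
            if key.strip() in ("AURASHOT_API_KEY", "AURASHOT_STUDIO_KEY") and value.strip():
                return value.strip()
    return None
```

Note the upward search: a key file anywhere between your working directory and the filesystem root can be picked up, which is why the setup instructions warn about committing it.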

First-Time Setup

If no key is found, guide the user:

  1. Sign up and get a key:
    • Sign up: https://www.aurashot.art/login
    • Get key: https://www.aurashot.art/studio?tab=keys
  2. Once the user provides a key, save it to a local config file:
echo 'AURASHOT_API_KEY=sk_live_USER_KEY_HERE' > .aurashot.env
  3. Confirm: "Key saved to .aurashot.env. You won't need to enter it again."

Do not commit .aurashot.env to git. Add it to .gitignore.
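A minimal, idempotent way to do this from the project root (a sketch, assuming a git repository in the current directory):

```shell
# Append .aurashot.env to .gitignore only if it is not already listed.
grep -qxF '.aurashot.env' .gitignore 2>/dev/null || echo '.aurashot.env' >> .gitignore
```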

Free tier available on sign-up. Upgrade at https://www.aurashot.art/studio?tab=billing for more quota.

Getting Started: Character Creation

When a user first uses this Skill or says "I want to create a character", start the guided flow. Don't ask everything at once — keep it conversational.

Step 1: Character Type

Ask what kind of character they want:

  • Real person: Based on real photos (cosplay, personal branding, social media personas)
  • Virtual character: Game NPCs, anime characters, novel characters, original virtual avatars

Adjust your conversation style accordingly.

Step 2: Collect Basic Info

Gather through natural conversation:

| Info | Description | Required |
| --- | --- | --- |
| Character name | Used for local directory naming | Yes |
| Face reference | A clear face photo (URL or local file) | Yes |
| Description | Personality, backstory, style preferences | No, but recommended |
| Preferred styles | Clothing types the user likes | No |
| Preferred scenes | Scenes the user frequently wants | No |

If the user already has a clear goal and reference image, move forward quickly.

Step 3: Generate Identity Baseline (ID Photo)

Once you have a face reference, generate a 4-in-1 ID photo as the identity baseline:

# Real person (default prompt)
python3 {baseDir}/scripts/aurashot.py id-photo \
  --face-image "user_face_image" \
  --output avatars/{name}/profile \
  --wait

# Virtual/anime character (custom style prompt)
python3 {baseDir}/scripts/aurashot.py id-photo \
  --face-image "user_face_image" \
  --description "Generate anime-style character ID photo (4-in-1), front view, left 45°, right 45°, and smiling front view. Keep 2D art style, white T-shirt, white background. Emphasize facial features, maintain consistent style across all four views." \
  --output avatars/{name}/profile \
  --wait

Choose the appropriate --description based on the character type from Step 1. Real person characters typically don't need one.

Step 4: Confirm Creation

Tell the user:

  • Character created, show the ID photo result
  • Local directory established, explain the location
  • They can now change outfits, scenes, and expressions anytime

Local Directory Structure

Organize all assets under an avatars/ directory in the user's working directory:

avatars/
├── {name}/
│   ├── profile/
│   │   ├── id-photo.png          ← 4-in-1 ID photo (identity baseline)
│   │   ├── face-reference.png    ← Original face reference from user
│   │   └── character.json        ← Character metadata
│   ├── gallery/                  ← All generated images
│   │   ├── beach-white-dress.png
│   │   ├── cafe-casual.png
│   │   └── stage-red-gown.png
│   └── references/               ← User-provided reference materials
│       ├── red-gown.jpg
│       └── beach-scene.jpg

character.json Format

{
  "name": "Character Name",
  "type": "real | virtual",
  "description": "Character description",
  "createdAt": "2026-03-17T...",
  "faceReference": "profile/face-reference.png",
  "idPhoto": "profile/id-photo.png",
  "preferredStyles": ["casual", "gothic"],
  "preferredScenes": ["cafe", "park", "studio"]
}
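Creating this metadata file can be sketched as follows; `save_character` is a hypothetical helper name, and the field layout follows the format above:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def save_character(base: Path, name: str, char_type: str, description: str = "") -> Path:
    """Write character.json under avatars/<name>/profile/ per the documented layout."""
    profile = base / "avatars" / name / "profile"
    profile.mkdir(parents=True, exist_ok=True)
    meta = {
        "name": name,
        "type": char_type,  # "real" or "virtual"
        "description": description,
        "createdAt": datetime.now(timezone.utc).isoformat(),
        "faceReference": "profile/face-reference.png",
        "idPhoto": "profile/id-photo.png",
        "preferredStyles": [],
        "preferredScenes": [],
    }
    path = profile / "character.json"
    path.write_text(json.dumps(meta, indent=2))
    return path
```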

Daily Interaction: Character Roleplay

After character creation, users describe scenes in natural language. You need to:

  1. Identify the character: If not specified, ask. If only one character exists in avatars/, use it by default.
  2. Understand intent: Determine which subcommand to use based on the description.
  3. Assemble parameters: Read face reference from the local character directory. Always include --output and --wait.
  4. Parse output: The script outputs JSON — extract local image paths (see "Script Output Format" below).
  5. Show results: Display using local image paths. Never show intermediate results or debug info.
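Steps 1–3 amount to assembling a CLI invocation. A sketch of that assembly, where `build_generate_cmd` is a hypothetical helper (the subcommand, flags, and fallback rule come from the skill docs):

```python
from pathlib import Path

def build_generate_cmd(base_dir: str, name: str, description: str) -> list:
    """Assemble a character-generate invocation for the named character."""
    profile = Path("avatars") / name / "profile"
    id_photo = profile / "id-photo.png"
    # Prefer the standardized ID photo; fall back to the raw face reference.
    face = id_photo if id_photo.exists() else profile / "face-reference.png"
    return [
        "python3", f"{base_dir}/scripts/aurashot.py", "character-generate",
        "--face-image", str(face),
        "--description", description,
        "--output", str(Path("avatars") / name / "gallery"),
        "--wait",  # always include --output and --wait so images download locally
    ]
```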

Script Output Format

The script outputs JSON to stdout. You must parse this to get image paths:

Success (downloaded):

{
  "jobId": "xxx",
  "status": "completed",
  "outputs": [{"url": "https://cdn.example.com/result.png", "type": "image"}],
  "downloaded": [{"url": "https://cdn.example.com/result.png", "localPath": "avatars/name/gallery/abc123.png"}]
}

Download failed:

{
  "jobId": "xxx",
  "status": "completed",
  "outputs": [{"url": "https://cdn.example.com/result.png", "type": "image"}],
  "downloadErrors": [{"url": "https://cdn.example.com/result.png", "error": "Download failed."}]
}

Script error (exit code ≠ 0):

{"error": "Error description", "detail": "Details"}

Parsing rules:

  1. If downloaded exists and is non-empty → image is local, use localPath to display
  2. If downloadErrors exists → download failed but image was generated. Give the user the outputs[].url remote link
  3. If exit code ≠ 0 → script failed, show error and detail to the user
  4. Never swallow errors — always let the user know what happened
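The four rules above can be expressed directly in code. `interpret_result` is a hypothetical helper; the payload keys match the output format documented earlier:

```python
def interpret_result(exit_code: int, payload: dict) -> tuple:
    """Apply the parsing rules: returns (kind, value) for display."""
    if exit_code != 0:
        # Rule 3: surface error and detail; rule 4: never swallow them.
        return ("error", f"{payload.get('error', 'unknown')}: {payload.get('detail', '')}")
    if payload.get("downloaded"):
        # Rule 1: image is local, display via localPath.
        return ("local", payload["downloaded"][0]["localPath"])
    if payload.get("downloadErrors"):
        # Rule 2: generation succeeded but download failed; hand back the remote URL.
        return ("remote", payload["outputs"][0]["url"])
    return ("error", "no outputs in response")
```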

Intent Routing

| User Intent | Subcommand | Typical Phrases |
| --- | --- | --- |
| Change outfit/scene/new look | character-generate | "Wear this to the beach", "Put on a suit" |
| Modify existing image | edit | "Change the pose", "Make the expression a smile" |
| Rebuild identity baseline | id-photo | "Regenerate the ID photo" |
| Unclear intent | (none) | Ask a brief clarifying question |

Key Principles

  • Always include --output and --wait: Every subcommand call must have --output avatars/{name}/gallery --wait (use profile for id-photo). Without these, images won't download locally.
  • Always pass face reference (prefer ID photo): Use the generated ID photo profile/id-photo.png as --face-image. It's standardized for better identity consistency. Only fall back to profile/face-reference.png if the ID photo doesn't exist.
  • Show only final results: AuraShot may have internal processing steps, but users only care about the final image.
  • Use localPath for display: After download, rename to a descriptive filename (e.g., beach-white-dress.png) and display using the local path.
  • Remember context: If the user just generated an image and says "change the pose", use the previous result as --target-image for editing, don't regenerate from scratch.
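The rename step from the principles above can be sketched like this; `rename_output` is a hypothetical helper name:

```python
from pathlib import Path

def rename_output(local_path: str, descriptive: str) -> Path:
    """Rename a downloaded image to a descriptive name in the same gallery directory."""
    src = Path(local_path)
    dest = src.with_name(descriptive + src.suffix)  # keep the original extension
    src.rename(dest)
    return dest
```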

Multi-Character Management

Users may create multiple characters. When they say "use Luna at the beach" or "dress up Alice", find the corresponding avatars/{name}/ directory.

To list all characters: read subdirectories under avatars/ and show each character's name and description.
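A sketch of that listing, assuming each character directory contains the character.json described above (`list_characters` is a hypothetical helper):

```python
import json
from pathlib import Path

def list_characters(root: Path = Path("avatars")) -> list:
    """Enumerate characters by reading each avatars/<name>/profile/character.json."""
    characters = []
    if not root.is_dir():
        return characters
    for entry in sorted(root.iterdir()):
        meta_path = entry / "profile" / "character.json"
        if meta_path.is_file():
            meta = json.loads(meta_path.read_text())
            characters.append({"name": meta.get("name", entry.name),
                               "description": meta.get("description", "")})
    return characters
```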

Subcommand Parameters

id-photo — Character ID Photo (4-in-1)

| Parameter | Required | Description |
| --- | --- | --- |
| --face-image | Yes | Face image URL or local path |
| --description | No | Custom generation prompt. Defaults to standard real-person style. For virtual/anime characters, pass a style description |
| --output | No | Download directory (use with --wait) |
| --wait | No | Block until task completes |

edit — Image Editing

| Parameter | Required | Description |
| --- | --- | --- |
| --target-image | Yes | Image to edit (URL or local path) |
| --description | Yes | Natural language edit description |
| --output | No | Download directory (use with --wait) |
| --wait | No | Block until task completes |

character-generate — Reference-Driven Generation

| Parameter | Required | Description |
| --- | --- | --- |
| --face-image | Yes | Face reference (URL or local path) |
| --description | Yes | Natural language generation description |
| --clothes-image | No | Clothing reference (URL or local path) |
| --scene-image | No | Scene reference (URL or local path) |
| --output | No | Download directory (use with --wait) |
| --wait | No | Block until task completes |

Example Conversations

New User

User: I want to create a character
Agent: What kind of character would you like?
      1. Real person — based on real photos
      2. Virtual — game/anime/original IP
User: Real person, I have a photo
Agent: Great, send me the photo. What name would you like for your character?
User: Call her Snow [attaches photo]
Agent: Got it! Generating Snow's identity baseline...
      [runs id-photo]
      ✅ Snow's character is created!
      - ID photo saved to avatars/Snow/profile/id-photo.png
      - Try saying "wear a white dress at the beach"

Daily Use

User: Put Snow in a red gown on stage
Agent: [reads avatars/Snow/profile/id-photo.png]
      [runs character-generate --face-image avatars/Snow/profile/id-photo.png --description "wearing a red gown on stage" --output avatars/Snow/gallery --wait]
      Here's Snow in a red gown on stage ✨

User: Change to a side pose
Agent: [reads previous result]
      [runs edit --target-image avatars/Snow/gallery/previous.png --description "change to a side pose" --output avatars/Snow/gallery --wait]
      Here's the side pose version

Input Rules

  • Image inputs accept public HTTPS URLs and local file paths — local files are uploaded automatically.
  • Strings starting with http:// or https:// are treated as remote URLs.
  • Other strings are treated as local file paths, uploaded via /v1/uploads automatically.
  • Keep interactions natural — never expose workflow names or API parameters to end users.
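The URL-vs-path rule above reduces to a prefix check; `classify_input` is a hypothetical helper name:

```python
def classify_input(value: str) -> str:
    """Classify an image input per the rules above: remote URL vs local path."""
    # Strings starting with http:// or https:// are treated as remote URLs;
    # everything else is assumed to be a local file path to upload.
    if value.startswith(("http://", "https://")):
        return "url"
    return "local"
```

Because anything that is not an HTTPS/HTTP URL is uploaded as a local file, validate paths before passing them to avoid uploading files you did not intend to share.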

Reference
