Open Art Ai

v1.0.0

Turn images or prompts into AI-generated videos with this skill. Works with JPG, PNG, WEBP, and MP4 files up to 200MB. Digital artists and creators use it fo...

by peandrover · adam@peand-rover

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for peand-rover/open-art-ai.

Prompt Preview: Install & Setup
Install the skill "Open Art Ai" (peand-rover/open-art-ai) from ClawHub.
Skill page: https://clawhub.ai/peand-rover/open-art-ai
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install open-art-ai

ClawHub CLI


npx clawhub@latest install open-art-ai
Security Scan

  • VirusTotal: Benign (view report)
  • OpenClaw: Benign (medium confidence)
Purpose & Capability
The skill claims to generate AI art videos and requires a single service credential (NEMO_TOKEN); the documented API endpoints and the upload, SSE, and export workflows align with that purpose. There are no unrelated credentials or unexpected binaries requested.
Instruction Scope
The SKILL.md instructs the agent to obtain or use NEMO_TOKEN, create sessions, upload user files (paths or URLs), stream SSE responses, poll render status, and return download URLs — all expected for a cloud render service. It also instructs the agent to inspect install paths (~/.clawhub, ~/.cursor/skills/) and references a config path (~/.config/nemovideo/) for attribution/header derivation; checking those locations is not strictly necessary for core functionality and is a minor privacy/information-leakage concern (it could reveal which client the user runs).
Install Mechanism
This is an instruction-only skill with no install spec and no code files, so it doesn't drop or execute bundled binaries on the host. That minimizes installation risk.
Credentials
Only one environment credential is declared (NEMO_TOKEN) and the SKILL.md explains creating an anonymous token if none exists — this is proportionate. The metadata also lists a config path (~/.config/nemovideo/) which implies the skill may read or store session data there; users should confirm where session tokens are persisted and for how long.
Persistence & Privilege
always is false and the skill does not request elevated platform privileges. However, it does expect to create/retain a session_id and an anonymous token (valid 7 days) and may use a per-user config path; review whether the agent will persist tokens to disk versus memory and whether you want that behavior.
Scan Findings in Context
[no_regex_findings] expected: The repo scan found no code-level regex hits because this is an instruction-only skill (SKILL.md). Lack of findings is expected but does not imply safety; the runtime behavior is described in the SKILL.md and involves network I/O and token handling.
Assessment
This skill appears to do what it says: it will upload images/prompts to an external service (mega-api-prod.nemovideo.ai) and return rendered video URLs. Before installing, consider:

  1. Are you comfortable uploading the images you will use to that external endpoint?
  2. Where will the anonymous token and session_id be stored (memory vs ~/.config/nemovideo/)? If you prefer, set NEMO_TOKEN yourself to avoid the skill obtaining and storing tokens automatically.
  3. The skill may check ~/.clawhub or ~/.cursor paths to derive a platform header; if you don't want that checked, ask the author to remove that behavior.

Because the skill is from an unknown source with no homepage, verify the service domain and privacy policy if you plan to send sensitive media. If any of these are unacceptable, do not install, or limit access accordingly.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

  • 🎨 Clawdis
  • Env: NEMO_TOKEN (primary)
  • Latest: vk97d5gner69gk3jcsc2chj34kx850rxs
  • 63 downloads · 0 stars · 1 version
  • Updated 1w ago
  • v1.0.0 · MIT-0

Getting Started

Share your images or prompts and I'll get started on AI art generation. Or just tell me what you're thinking.

Try saying:

  • "generate my images or prompts"
  • "export 1080p MP4"
  • "generate an anime-style portrait from my"

First-Time Connection

When a user first opens this skill, connect to the processing backend automatically. Briefly let them know (e.g. "Setting up...").

Authentication: Check if NEMO_TOKEN is set in the environment. If it is, skip to step 2.

  1. Obtain a free token: Generate a random UUID as client identifier. POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id set to that UUID. The response data.token is your NEMO_TOKEN — 100 free credits, valid 7 days.
  2. Create a session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Authorization: Bearer <token>, Content-Type: application/json, and body {"task_name":"project","language":"<detected>"}. Store the returned session_id for all subsequent requests.
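The two setup steps above can be sketched as follows. This is an illustrative sketch based only on the endpoints documented here; the function names (`fetch_anonymous_token`, `build_session_request`) are mine, not part of the skill, and error handling is omitted.

```python
import json
import uuid
import urllib.request

API = "https://mega-api-prod.nemovideo.ai"

def fetch_anonymous_token():
    """Step 1: request a free token (100 credits, valid 7 days) using a random client UUID."""
    req = urllib.request.Request(
        f"{API}/api/auth/anonymous-token",
        method="POST",
        headers={"X-Client-Id": str(uuid.uuid4())},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]["token"]

def build_session_request(token, language="en"):
    """Step 2: build the create-session request; the response carries session_id."""
    body = json.dumps({"task_name": "project", "language": language}).encode()
    return urllib.request.Request(
        f"{API}/api/tasks/me/with-session/nemo_agent",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
```

Building the request separately from sending it makes the payload and headers easy to inspect before anything leaves the machine, which matters given the token-handling concerns raised in the security review above.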

Keep setup communication brief. Don't display raw API responses or token values to the user.

Open Art AI — Generate AI Art Videos

Drop your images or prompts in the chat and tell me what you need. I'll handle the AI art generation on cloud GPUs — you don't need anything installed locally.

Here's a typical use: you send a landscape photo or a text description, ask for "generate an anime-style portrait from my photo", and about 20-40 seconds later you've got an MP4 file ready to download. The whole thing runs at 1080p by default.

One thing worth knowing — simpler compositions with clear subjects produce more consistent results.

Matching Input to Actions

User prompts referencing open art ai, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

  • "export" / "导出" / "download" / "send me the video" → §3.5 Export
  • "credits" / "积分" / "balance" / "余额" → §3.3 Credits
  • "status" / "状态" / "show tracks" → §3.4 State
  • "upload" / "上传" / user sends a file → §3.2 Upload
  • Everything else (generate, edit, add BGM…) → §3.1 SSE
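A minimal keyword-only approximation of the routing table above might look like this. The real skill also uses intent classification; this sketch only covers the literal trigger words, and the action labels are mine.

```python
def route_message(text):
    """Map a user message to a skill action using the keyword table above.

    Returns one of: "export", "credits", "state", "upload", "sse".
    """
    t = text.lower()
    rules = [
        ("export", ("export", "导出", "download", "send me the video")),
        ("credits", ("credits", "积分", "balance", "余额")),
        ("state", ("status", "状态", "show tracks")),
        ("upload", ("upload", "上传")),
    ]
    for action, keywords in rules:
        if any(k in t for k in keywords):
            return action
    return "sse"  # everything else: generate, edit, add BGM, ...
```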

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Headers are derived from this file's YAML frontmatter. X-Skill-Source is open-art-ai, X-Skill-Version comes from the version field, and X-Skill-Platform is detected from the install path (~/.clawhub/ = clawhub, ~/.cursor/skills/ = cursor, otherwise unknown).
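The platform detection described above can be sketched as a simple path check. This is my reading of the documented rule, not the skill's actual code; `attribution_headers` is a hypothetical helper name.

```python
from pathlib import Path

def detect_platform(install_path):
    """Derive X-Skill-Platform from where the skill was installed."""
    parts = Path(install_path).parts
    if ".clawhub" in parts:
        return "clawhub"
    if ".cursor" in parts:
        return "cursor"
    return "unknown"

def attribution_headers(version, install_path):
    """Assemble the three attribution headers every API call requires."""
    return {
        "X-Skill-Source": "open-art-ai",
        "X-Skill-Version": version,
        "X-Skill-Platform": detect_platform(install_path),
    }
```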

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers above. If any header is missing, exports return 402.

API base: https://mega-api-prod.nemovideo.ai

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
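The export-and-poll flow above can be sketched like this. The status-fetch callable is injected so the loop stays independent of any HTTP client; `export_body` and `poll_export` are illustrative names, and the response shape (`status`, `output.url`) follows the description above.

```python
import time

def export_body(session_id, draft, ts):
    """Assemble the export request body from the fields documented above."""
    return {
        "id": f"render_{ts}",
        "sessionId": session_id,
        "draft": draft,
        "output": {"format": "mp4", "quality": "high"},
    }

def poll_export(render_id, get_status, interval=30, max_tries=20):
    """Poll until status == 'completed'; get_status would wrap an HTTP GET
    to /api/render/proxy/lambda/<id>. Returns the download URL."""
    for _ in range(max_tries):
        job = get_status(render_id)
        if job.get("status") == "completed":
            return job["output"]["url"]
        time.sleep(interval)
    raise TimeoutError(f"render {render_id} did not complete")
```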

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Reading the SSE Stream

Text events go straight to the user (after GUI translation). Tool calls stay internal. Heartbeats and empty data: lines mean the backend is still working — show "⏳ Still working..." every 2 minutes.

About 30% of edit operations close the stream without any text. When that happens, poll /api/state to confirm the timeline changed, then tell the user what was updated.
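A line classifier for the stream handling described above might look like this. It treats comment lines and empty `data:` lines as keep-alives, per the description; the three-way labels are my own convention.

```python
def classify_sse_line(line):
    """Classify one raw SSE line: keep-alive heartbeats vs. data events."""
    line = line.rstrip("\n")
    if line == "" or line.startswith(":"):
        return ("heartbeat", None)          # blank line or SSE comment
    if line.startswith("data:"):
        payload = line[len("data:"):].strip()
        if payload == "":
            return ("heartbeat", None)      # empty data: line, backend still working
        return ("data", payload)
    return ("other", line)                  # e.g. event: / id: fields
```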

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

Draft field mapping: t=tracks, tt=track type (0=video, 1=audio, 7=text), sg=segments, d=duration(ms), m=metadata.

Timeline (3 tracks):

  1. Video: city timelapse (0-10s)
  2. BGM: Lo-fi (0-10s, 35%)
  3. Title: "Urban Dreams" (0-3s)
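Using the draft field mapping above, a summary like that timeline could be produced roughly as follows. Note the nesting (segments `sg` inside tracks `t`) and segment fields are my assumptions from the mapping; the actual draft JSON may differ.

```python
TRACK_TYPE = {0: "video", 1: "audio", 7: "text"}

def summarize_draft(draft):
    """Render the compact draft JSON as human-readable track lines.

    Assumes draft = {"t": [{"tt": <type>, "sg": [{"d": <ms>}, ...]}, ...]}.
    """
    lines = []
    for i, track in enumerate(draft.get("t", []), 1):
        kind = TRACK_TYPE.get(track.get("tt"), "unknown")
        for seg in track.get("sg", []):
            secs = seg.get("d", 0) / 1000  # d is duration in ms
            lines.append(f"{i}. {kind}: {secs:g}s")
    return lines
```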

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once

Common Workflows

Quick edit: Upload → "generate an anime-style portrait from my photo" → Download MP4. Takes 20-40 seconds for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "generate an anime-style portrait from my photo" — concrete instructions get better results.

Max file size is 200MB. Stick to JPG, PNG, WEBP, MP4 for the smoothest experience.

Export as MP4 for widest compatibility across social platforms.
