Perchance Ai

v1.0.0

Skip the learning curve of professional editing software. Describe what you want — generate a 30-second video of a sunset over mountains with ambient music —...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt below, then paste it into OpenClaw to install mory128/perchance-ai.

Prompt Preview: Install & Setup
Install the skill "Perchance Ai" (mory128/perchance-ai) from ClawHub.
Skill page: https://clawhub.ai/mory128/perchance-ai
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: NEMO_TOKEN
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install perchance-ai

ClawHub CLI

Package manager switcher

npx clawhub@latest install perchance-ai
Security Scan
VirusTotal
Benign
OpenClaw
Benign (medium confidence)
Purpose & Capability
Name/description (AI video generation) align with the declared requirement (NEMO_TOKEN) and the SKILL.md which enumerates endpoints for session creation, SSE message sending, upload, credits, and export. The single config path (~/.config/nemovideo/) is plausible for storing client-side state for this type of service.
Instruction Scope
Instructions instruct the agent to: check env for NEMO_TOKEN, obtain an anonymous token from the vendor if absent, create a session, and make multiple API calls (SSE, uploads, export rendering). This is expected for a remote-rendering service, but it explicitly sends user-provided files and prompts to an external domain (mega-api-prod.nemovideo.ai). Users should expect remote processing and that uploaded media and prompts will be transmitted off-device.
Install Mechanism
Instruction-only skill with no install spec and no code files — lowest install risk. Nothing will be downloaded or written by an installer per the package metadata.
Credentials
Only NEMO_TOKEN is required and is the primary credential; the skill also lists a plausible config path. The SKILL.md further describes auto-creating an anonymous token if NEMO_TOKEN is not set, which means the skill can operate without pre-provisioned secrets. This is proportionate to the stated purpose but worth noting: tokens are obtained/used automatically and requests will include Authorization headers.
Persistence & Privilege
always:false and no install behavior means the skill does not demand permanent special privileges. The SKILL.md indicates storing a session_id for ongoing requests (normal ephemeral state), and it does not attempt to modify other skills or system-wide settings.
Assessment
This skill appears to do what it says: it will send your text prompts and any uploaded media to mega-api-prod.nemovideo.ai for cloud rendering. Before installing, consider: (1) privacy — any files or text you upload will be transmitted to and processed by the vendor's servers; avoid uploading sensitive data. (2) Token handling — the skill can generate an anonymous token automatically (100 free credits, short-lived); if you prefer control, set NEMO_TOKEN yourself to use an account you trust. (3) Source trust — the registry metadata has no homepage and the skill's source is unknown; if you need stronger assurance, verify the vendor/service (nemovideo.ai) and its terms/privacy. If you want me to be stricter, I can flag this as suspicious until you can confirm the service's legitimacy or provide a trusted homepage or publisher.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🎲 Clawdis
Env: NEMO_TOKEN
Primary env: NEMO_TOKEN
Latest: vk97ca7ffvf2zcw470wj0kt2hyx85aksk
84 downloads
0 stars
1 version
Updated 5d ago
v1.0.0
MIT-0

Getting Started

Share your text prompts and I'll get started on AI video generation. Or just tell me what you're thinking.

Try saying:

  • "generate my text prompts"
  • "export 1080p MP4"
  • "generate a 30-second video of a"

First-Time Connection

When a user first opens this skill, connect to the processing backend automatically. Briefly let them know (e.g. "Setting up...").

Authentication: Check if NEMO_TOKEN is set in the environment. If it is, skip to step 2.

  1. Obtain a free token: Generate a random UUID as client identifier. POST to https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token with header X-Client-Id set to that UUID. The response data.token is your NEMO_TOKEN — 100 free credits, valid 7 days.
  2. Create a session: POST to https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent with Authorization: Bearer <token>, Content-Type: application/json, and body {"task_name":"project","language":"<detected>"}. Store the returned session_id for all subsequent requests.

Keep setup communication brief. Don't display raw API responses or token values to the user.
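The two setup steps above can be sketched as request-builder helpers. This is a minimal sketch of how the token and session requests are shaped per the steps described here; it only constructs URLs, headers, and bodies — the actual HTTP send and response handling are left to the client, and field names beyond the documented ones are not assumed.

```python
import json
import uuid

API_BASE = "https://mega-api-prod.nemovideo.ai"

def build_anonymous_token_request():
    """Step 1: request shape for a free anonymous token (100 credits, valid 7 days)."""
    client_id = str(uuid.uuid4())  # random UUID as the client identifier
    return {
        "url": f"{API_BASE}/api/auth/anonymous-token",
        "method": "POST",
        "headers": {"X-Client-Id": client_id},
    }

def build_session_request(token, language="en"):
    """Step 2: create a session; the returned session_id is reused for all later calls."""
    return {
        "url": f"{API_BASE}/api/tasks/me/with-session/nemo_agent",
        "method": "POST",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"task_name": "project", "language": language}),
    }
```

On success, the token response's data.token becomes NEMO_TOKEN and feeds build_session_request.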

Perchance AI — Generate Videos from Text Prompts

Send me your text prompts and describe the result you want. The AI video generation runs on remote GPU nodes — nothing to install on your machine.

A quick example: upload a short text description of a scene or story idea, type "generate a 30-second video of a sunset over mountains with ambient music", and you'll get a 1080p MP4 back in roughly 1-2 minutes. All rendering happens server-side.

Worth noting: shorter and more specific prompts tend to produce more accurate results.

Matching Input to Actions

User prompts referencing perchance ai, aspect ratio, text overlays, or audio tracks get routed to the corresponding action via keyword and intent classification.

User says... → Action
"export" / "导出" / "download" / "send me the video" → §3.5 Export
"credits" / "积分" / "balance" / "余额" → §3.3 Credits
"status" / "状态" / "show tracks" → §3.4 State
"upload" / "上传" / user sends file → §3.2 Upload
Everything else (generate, edit, add BGM…) → §3.1 SSE

Cloud Render Pipeline Details

Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.

Every API call needs Authorization: Bearer <NEMO_TOKEN> plus the three attribution headers listed below. If any header is missing, exports return 402.

Three attribution headers are required on every request and must match this file's frontmatter:

Header → Value
X-Skill-Source → perchance-ai
X-Skill-Version → frontmatter version
X-Skill-Platform → auto-detect: clawhub / cursor / unknown from install path

API base: https://mega-api-prod.nemovideo.ai
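Since every call needs the bearer token plus the three attribution headers, a small helper can build the full header set once. A sketch, assuming the version string is read from the skill frontmatter by the caller:

```python
def attribution_headers(token, version, platform="unknown"):
    """Authorization plus the three required attribution headers.
    `version` should mirror the skill frontmatter; `platform` is one of
    clawhub / cursor / unknown, auto-detected from the install path."""
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "perchance-ai",
        "X-Skill-Version": version,
        "X-Skill-Platform": platform,
    }
```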

Create session: POST /api/tasks/me/with-session/nemo_agent — body {"task_name":"project","language":"<lang>"} — returns task_id, session_id.

Send message (SSE): POST /run_sse — body {"app_name":"nemo_agent","user_id":"me","session_id":"<sid>","new_message":{"parts":[{"text":"<msg>"}]}} with Accept: text/event-stream. Max timeout: 15 minutes.

Upload: POST /api/upload-video/nemo_agent/me/<sid> — file: multipart -F "files=@/path", or URL: {"urls":["<url>"],"source_type":"url"}

Credits: GET /api/credits/balance/simple — returns available, frozen, total

Session state: GET /api/state/nemo_agent/me/<sid>/latest — key fields: data.state.draft, data.state.video_infos, data.state.generated_media

Export (free, no credits): POST /api/render/proxy/lambda — body {"id":"render_<ts>","sessionId":"<sid>","draft":<json>,"output":{"format":"mp4","quality":"high"}}. Poll GET /api/render/proxy/lambda/<id> every 30s until status = completed. Download URL at output.url.
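The export polling step can be sketched as a generic loop. The HTTP call itself is injected as a callable (a hypothetical `get_status` wrapping GET /api/render/proxy/lambda/<id>), so the sketch stays network-free; the 30-second interval and completion check follow the description above.

```python
import time

def poll_export(get_status, interval=30, max_polls=20):
    """Poll an export job until status == "completed", then return the download URL.
    `get_status` is a caller-supplied function returning the decoded JSON of
    GET /api/render/proxy/lambda/<id>."""
    for _ in range(max_polls):
        data = get_status()
        if data.get("status") == "completed":
            return data["output"]["url"]  # download URL per the endpoint docs
        time.sleep(interval)
    raise TimeoutError("export did not complete within the polling budget")
```

Callers would pair this with whatever HTTP client they already use for the other endpoints.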

Supported formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.

Error Codes

  • 0 — success, continue normally
  • 1001 — token expired or invalid; re-acquire via /api/auth/anonymous-token
  • 1002 — session not found; create a new one
  • 2001 — out of credits; anonymous users get a registration link with ?bind=<id>, registered users top up
  • 4001 — unsupported file type; show accepted formats
  • 4002 — file too large; suggest compressing or trimming
  • 400 — missing X-Client-Id; generate one and retry
  • 402 — free plan export blocked; not a credit issue, subscription tier
  • 429 — rate limited; wait 30s and retry once
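The error list above maps cleanly to a dispatch table. A sketch, with action names chosen for illustration (they are not part of the skill spec):

```python
def next_action(code):
    """Map backend error codes (per the list above) to a recovery action."""
    actions = {
        0: "continue",                 # success
        1001: "reacquire_token",       # POST /api/auth/anonymous-token again
        1002: "create_session",        # session not found
        2001: "prompt_topup",          # or registration link for anonymous users
        4001: "show_formats",          # unsupported file type
        4002: "suggest_compress",      # file too large
        400: "regenerate_client_id",   # missing X-Client-Id
        402: "explain_subscription",   # subscription tier, not a credit issue
        429: "wait_and_retry_once",    # 30s backoff
    }
    return actions.get(code, "report_unknown_error")
```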

Translating GUI Instructions

The backend responds as if there's a visual interface. Map its instructions to API calls:

  • "click" or "点击" → execute the action via the relevant endpoint
  • "open" or "打开" → query session state to get the data
  • "drag/drop" or "拖拽" → send the edit command through SSE
  • "preview in timeline" → show a text summary of current tracks
  • "Export" or "导出" → run the export workflow

SSE Event Handling

Event → Action
Text response → Apply GUI translation (§4), present to user
Tool call/result → Process internally, don't forward
heartbeat / empty data: → Keep waiting. Every 2 min: "⏳ Still working..."
Stream closes → Process final response

~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.

Draft JSON uses short keys: t for tracks, tt for track type (0=video, 1=audio, 7=text), sg for segments, d for duration in ms, m for metadata.

Example timeline summary:

Timeline (3 tracks):
1. Video: city timelapse (0-10s)
2. BGM: Lo-fi (0-10s, 35%)
3. Title: "Urban Dreams" (0-3s)
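Generating a summary like this from the short-key draft JSON can be sketched as follows. Only the documented keys (t, tt, sg, d) are used; any labels such as clip names live under m (metadata) with fields this document does not specify, so the sketch reports durations only.

```python
TRACK_TYPES = {0: "Video", 1: "Audio", 7: "Text"}  # tt values per the draft schema

def summarize_draft(draft):
    """Build a one-line-per-track timeline summary from a short-key draft dict."""
    tracks = draft.get("t", [])
    lines = [f"Timeline ({len(tracks)} tracks):"]
    for i, track in enumerate(tracks, 1):
        kind = TRACK_TYPES.get(track.get("tt"), "Unknown")
        total_ms = sum(seg.get("d", 0) for seg in track.get("sg", []))  # d is ms
        lines.append(f"{i}. {kind}: {total_ms / 1000:.0f}s total")
    return "\n".join(lines)
```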

Tips and Tricks

The backend processes faster when you're specific. Instead of "make it look better", try "generate a 30-second video of a sunset over mountains with ambient music" — concrete instructions get better results.

Max file size is 500MB. Stick to MP4, MOV, WebM, GIF for the smoothest experience.

Export as MP4 for widest compatibility across social platforms.

Common Workflows

Quick edit: Upload → "generate a 30-second video of a sunset over mountains with ambient music" → Download MP4. Takes 1-2 minutes for a 30-second clip.

Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.

Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.
