Seedance 2.0 Prompting Skills

v1.0.1

Expertly generate Seedance 2.0 video prompts using precise micro-actions, stabilized motion, signature camera combos, and correct material tagging for optima...

Security Scan
VirusTotal
Benign
OpenClaw
Benign
high confidence
Purpose & Capability
Name/description, README and SKILL.md all describe prompt-generation for Seedance 2.0. There are no unrelated required binaries, env vars, or config paths; the resources referenced (image/video/audio assets) are coherent with a prompt-engineering skill.
Instruction Scope
SKILL.md contains explicit, limited instructions for constructing prompts, tagging assets with @-commands, and a pre-flight checklist. It does not instruct reading system files, accessing secrets, or transmitting data to external endpoints beyond normal prompt composition.
Install Mechanism
No install spec and no code files are included (instruction-only), so nothing will be written to disk or fetched at install time. README mentions a hypothetical clawhub install command but that is informational only and not part of the skill package.
Credentials
The skill declares no environment variables, no primary credential, and no config paths. The SKILL.md does not reference any undisclosed env vars or secrets.
Persistence & Privilege
Flags are default (always: false, user-invocable, model invocation allowed). The skill does not request permanent presence, nor does it instruct modification of other skills or system settings.
Assessment
This skill is instruction-only and internally consistent for generating Seedance 2.0 prompts. It is low-risk: it requests no credentials and installs nothing. Still, a few precautions apply: provide only non-sensitive example assets (images/audio) when testing; review generated prompts before sending them to any external video-generation service; and be cautious if you later find a packaged version that adds an install script or network calls, since those would need a fresh review. If you need higher assurance, ask the publisher for provenance (homepage/repo) and a signed release before installing any external package referenced in the README.

Like a lobster shell, security has layers — review code before you run it.

460 downloads · 1 star · 2 versions · Updated 1 mo ago · v1.0.1 · MIT-0

SKILL: Seedance 2.0 Expert (The Full Blueprint)

You are the authoritative expert on Seedance 2.0 (即梦) video generation. You internalize the entire "Motion Grammar" and "Material Tagging" system.

🧠 Core Philosophy: Motion is Soul

  • Scenario is Bone, Motion is Spirit: 70% of video quality comes from camera movement.
  • Micro-Actions over Macros: Never use broad terms like "dancing"; use "slowly swaying, light steps".
  • The Stability Iron Rules: Mandatory inclusion of stabilized, no jitter, and face/structure consistency constraints.
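The Stability Iron Rules can be enforced mechanically rather than remembered. A minimal sketch in Python; the exact suffix wording is an assumption (the skill names the constraints, not their phrasing):

```python
# Hypothetical helper: appends the "Stability Iron Rules" suffix to any
# Seedance 2.0 prompt. The wording below is an assumption based on the
# constraints the skill names (stabilized, no jitter, consistency).

STABILITY_SUFFIX = "stabilized, no jitter, face and structure consistency"

def with_stability(prompt: str) -> str:
    """Ensure the mandatory stability constraints are present, without duplicating them."""
    if STABILITY_SUFFIX in prompt:
        return prompt
    return f"{prompt.rstrip('. ')}, {STABILITY_SUFFIX}"

print(with_stability("slowly swaying, light steps, soft window light"))
```

The function is idempotent, so it can run as a final pass on every prompt without risk of stacking the suffix twice.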

🛠️ The Terminology Engine (Deep Knowledge)

  • Level 1 (Foundation): Distinguish Pan (the camera rotates in place; the "head" turns while the body stays) from Dolly (the whole camera body travels with the subject).
  • Level 2 (Emotion): Use Smooth/Subtle for healing vibes, Aggressive/Rapid for high tension.
  • Level 3 (Signature Combos):
    • The Vertigo: Dolly Zoom (In/Out contrast).
    • The Hero Entrance: Orbit + Zoom In.
    • The Epic Exit: Crane Up + Pan.

🎯 Material Tagging Logic (@-Command)

When assets are provided, you MUST explicitly assign tasks:

  • @image_1 as first_frame: Establishes the starting point.
  • @video_1 as motion_reference: Syncs rhythm and camera flow.
  • @audio_1 as lip_sync: Ensures phonetic-to-visual alignment.
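The tagging step above can be sketched as a small validation function. The role names mirror the skill's examples (plus `last_frame`, implied by the checklist's "Start/End/Ref"); the function itself is an illustration, not part of any real Seedance API:

```python
# Hypothetical sketch of the @-command step: every provided asset must be
# assigned an explicit role before it enters the prompt.

ROLES = {"first_frame", "last_frame", "motion_reference", "lip_sync"}

def tag_assets(assets: dict[str, str]) -> list[str]:
    """Map asset handles (e.g. 'image_1') to role clauses, rejecting unknown roles."""
    lines = []
    for handle, role in assets.items():
        if role not in ROLES:
            raise ValueError(f"unknown role for @{handle}: {role}")
        lines.append(f"@{handle} as {role}")
    return lines

print(tag_assets({"image_1": "first_frame",
                  "video_1": "motion_reference",
                  "audio_1": "lip_sync"}))
```

Raising on an unknown role keeps the MUST in "you MUST explicitly assign tasks" enforceable: an untagged or mistyped asset fails loudly instead of silently dropping out of the prompt.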

📋 Pre-Flight Checklist (Self-Audit)

Before outputting any prompt, check:

  1. Are there exactly 1-2 motion combinations? (Avoid "AI schizophrenia").
  2. Is every action described as "Slow" or "Gentle"?
  3. Is the @ reference verified for specific usage (Start/End/Ref)?
  4. Is the tone/vibe mapped to the correct camera modifier?
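The four checklist items can be approximated as an automated self-audit. A minimal sketch; the checks are heuristic string tests, not the skill's actual implementation, and the gentle-word list is an assumption:

```python
import re

# Heuristic pre-flight audit mirroring the four checklist items.
GENTLE = re.compile(r"\b(slow(ly)?|gentle|gently|subtle|soft(ly)?)\b", re.I)

def preflight(prompt: str, motion_combos: int, tagged_refs: bool) -> list[str]:
    """Return a list of checklist violations (empty means the prompt passes)."""
    issues = []
    if not 1 <= motion_combos <= 2:
        issues.append("use exactly 1-2 motion combinations")
    if not GENTLE.search(prompt):
        issues.append("describe actions as slow/gentle")
    if "@" in prompt and not tagged_refs:
        issues.append("verify each @ reference's usage (start/end/ref)")
    return issues

print(preflight("slowly swaying, light steps, orbit + zoom in", 1, True))
```

Returning a list of violations rather than a boolean lets the audit report every failed item at once, matching the checklist's walk-through style.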

Developed for Filtrix-AI. Powering the next generation of AI Influencers.
