Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

HappyHorse Video Creation Assistant

v1.1.0

Generates videos with the Alibaba Cloud Bailian HappyHorse model; supports image-to-video (first/last-frame control) and text-to-video.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for cindypapa/happyhorse-video-creator.

Prompt preview (Install & Setup):
Install the skill "HappyHorse Video Creation Assistant" (cindypapa/happyhorse-video-creator) from ClawHub.
Skill page: https://clawhub.ai/cindypapa/happyhorse-video-creator
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: python3
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install happyhorse-video-creator

ClawHub CLI


npx clawhub@latest install happyhorse-video-creator
Security Scan

Capability signals: requires sensitive credentials. These labels describe what authority the skill may exercise; they are separate from suspicious or malicious moderation verdicts.

VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill implements video generation against the DashScope / HappyHorse API and only requires python3, which matches the stated purpose. However, the SKILL.md and code include a hard-coded API key and test IPs/URLs that are not justified by the declared requirements (no credentials required).
Instruction Scope
Runtime instructions show exactly the API calls the skill will make and the expected workflow (create task, poll status, download result). This stays within the stated purpose. But the instructions and examples embed a Bearer token and a test image server (43.167.197.36), and state that the API key is '默认已配置(测试可用)' ("configured by default, usable for testing"), which implicitly encourages using an embedded/unknown credential and contacting external test hosts.
Install Mechanism
No install spec is provided (instruction-only plus a single Python module). Only python3 is required — minimal disk/write footprint and low install risk.
Credentials
The skill does not declare any required environment variables, yet both SKILL.md and the Python module include a DEFAULT_API_KEY hard-coded in the repository. Embedding an API key is disproportionate: it exposes a credential in source, may be a third-party/author key you should not use, and could incur costs or privacy issues. The examples and tests also reference a raw IP image host (43.167.197.36), which may be an unmanaged endpoint.
Persistence & Privilege
The skill does not request persistent/global privileges (always:false), does not modify system-wide settings, and writes outputs to a workspace under /root/.openclaw — consistent with a user-level skill.
What to consider before installing
This skill appears to do what it says (call Alibaba DashScope to generate videos), but it includes a hard-coded API key in both SKILL.md and the Python module and uses an example test image server IP. Before installing or running:

- Do not rely on the embedded API key. Replace it with your own DashScope/Alibaba API key, or require the skill to read the key from a user-provided environment variable or prompt.
- Treat the embedded key as compromised: it may be revoked, belong to someone else, or incur charges if abused. Avoid using it for production or sensitive data.
- Inspect and avoid the example/test host (43.167.197.36); it may be an unmanaged server. Prefer trusted image URLs, and host your own media or use verified CDNs.
- Be aware the skill will make outbound HTTP(S) requests (create tasks, poll status, download media). That can leak prompts, media URLs, and generated content to the API provider. If that is a privacy concern, do not use the skill.
- If you trust the author and want to proceed: edit the code to require an environment variable (e.g., DASHSCOPE_API_KEY) instead of using DEFAULT_API_KEY, rotate any real credentials after testing, and review network egress policies and quota/cost implications for the DashScope service.

If you want higher assurance, ask the author to remove embedded credentials, document the source of the test hosts, and switch examples to clearly labelled placeholders rather than live tokens/hosts.
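One way to follow the first recommendation above is to read the key from the environment rather than any embedded default. This is a sketch only: the variable name `DASHSCOPE_API_KEY` and the `load_api_key` helper are illustrative assumptions, not part of the skill as shipped.

```python
import os

def load_api_key(env_var: str = "DASHSCOPE_API_KEY") -> str:
    """Return the DashScope key from the environment, refusing embedded defaults."""
    key = os.environ.get(env_var)
    if not key:
        # Fail loudly instead of silently falling back to a checked-in key.
        raise RuntimeError(f"Set {env_var} before using this skill")
    return key

# Usage (requires DASHSCOPE_API_KEY to be exported first):
# headers = {"Authorization": f"Bearer {load_api_key()}"}
```

Failing fast when the variable is unset avoids the silent use of a repository credential that the scan above warns about.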

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Bins: python3
Latest: vk974hwaa8tx4e875fjharc2a3585qfr1
40 downloads · 0 stars · 2 versions
Updated 14h ago
v1.1.0
MIT-0

happyhorse-video-creator - HappyHorse Video Creation Assistant v1.1

📋 Skill Description

Uses the Alibaba Cloud Bailian (DashScope) HappyHorse video generation model to help users create professional videos. Supports two modes: image-to-video (with first/last-frame control) and text-to-video.

Platform: Alibaba Cloud Bailian (DashScope)
API endpoint: https://dashscope.aliyuncs.com/api/v1/services/aigc/video-generation/video-synthesis

🎯 Trigger Conditions

Triggered when the user mentions any of the following keywords:

  • "HappyHorse 生成视频" ("generate a video with HappyHorse")
  • "用 HappyHorse 做视频" ("make a video using HappyHorse")
  • "阿里百炼视频" ("Alibaba Bailian video")
  • "happyhorse 视频" ("happyhorse video")

🔄 Workflow

Stage 0: First-Time Setup

Hello! I'm the HappyHorse Video Creation Assistant 🎬

You need to configure an Alibaba Bailian API Key:

1️⃣ Alibaba Bailian API Key
   - Get one at: https://bailian.console.aliyun.com/
   - A default key is preconfigured (usable for testing)

Stage 1: Requirements Gathering

Please tell me:

**1. Video topic**: What do you want to express?

**2. Video style**: Techy? Warm? Professional? Cinematic?

**3. Image material**:

🖼️ **First-frame image** (required for image-to-video):
   - Controls the video's opening shot

🖼️ **Last-frame image** (optional):
   - Controls the video's closing shot
   - Combining first and last frames gives precise control over the transition

📝 **Text description**:
   - Specific requirements

Stage 2: Prompt Confirmation

  1. Generate the video prompt
  2. Send the prompt to the user for confirmation
  3. Generate only after the user confirms

Stage 3: Per-Scene Generation Mode Confirmation

For each scene, confirm the generation mode and duration separately:

🎬 Scene 1/3: Opening shot

Please choose a generation mode:
A) Text-to-video - use the text prompt directly
B) Image-to-video - provide a first-frame image
C) First + last frame - provide both first- and last-frame images

Please choose a duration:
1️⃣ 10 seconds (default)
2️⃣ 15 seconds

Stage 4: Video Generation

  1. Call the Alibaba Bailian API
  2. Wait for completion (about 1-5 minutes)
  3. Send the video to the user for confirmation
  4. Done if satisfied; otherwise revise the prompt and regenerate

🛠️ API Calls

Alibaba Bailian HappyHorse API

API Key: sk-d05aba5a2dae4453b97ed07fdb983e5a

Image-to-Video (First-Frame Mode) ✅ Verified

import requests

url = "https://dashscope.aliyuncs.com/api/v1/services/aigc/video-generation/video-synthesis"

headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer sk-d05aba5a2dae4453b97ed07fdb983e5a",
    "X-DashScope-Async": "enable"  # ⚠️ required
}

payload = {
    "model": "happyhorse-1.0-i2v",
    "input": {
        "prompt": "The camera slowly pushes in as sunlight falls on a coffee cup",
        "media": [
            {"type": "first_frame", "url": "http://example.com/coffee.jpg"}
        ]
    },
    "parameters": {
        "resolution": "720P",    # 480P/720P/1080P
        "ratio": "16:9",          # 16:9/9:16/1:1
        "duration": 10            # 10 s (default) or 15 s
    }
}

response = requests.post(url, headers=headers, json=payload, timeout=30)
response.raise_for_status()  # surface HTTP errors before reading the body
task_id = response.json()["output"]["task_id"]

Image-to-Video (First + Last Frame Mode) ✅ Supported

payload = {
    "model": "happyhorse-1.0-i2v",
    "input": {
        "prompt": "The camera slowly transitions from day to night",
        "media": [
            {"type": "first_frame", "url": "http://example.com/day.jpg"},
            {"type": "last_frame", "url": "http://example.com/night.jpg"}
        ]
    },
    "parameters": {
        "resolution": "720P",
        "ratio": "16:9",
        "duration": 10
    }
}

Text-to-Video ✅ Verified

payload = {
    "model": "happyhorse-1.0-t2v",
    "input": {
        "prompt": "A cute kitten playing on the grass under bright sunshine"
    },
    "parameters": {
        "resolution": "720P",
        "ratio": "16:9",
        "duration": 10
    }
}

Query Task Status

status_url = f"https://dashscope.aliyuncs.com/api/v1/tasks/{task_id}"
headers = {"Authorization": "Bearer sk-d05aba5a2dae4453b97ed07fdb983e5a"}
response = requests.get(status_url, headers=headers, timeout=30)
result = response.json()

# task_status: PENDING → RUNNING → SUCCEEDED / FAILED
if result["output"]["task_status"] == "SUCCEEDED":
    video_url = result["output"]["video_url"]
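The single status check above can be wrapped in a poll-until-done loop. A minimal sketch, assuming the PENDING → RUNNING → SUCCEEDED/FAILED states shown; the `poll_task` helper, its parameters, and the injectable `get` argument are illustrative, not part of the skill:

```python
import time
import requests

def poll_task(task_id, api_key, interval=10.0, timeout=600.0, get=requests.get):
    """Poll the DashScope task endpoint until it leaves PENDING/RUNNING."""
    status_url = f"https://dashscope.aliyuncs.com/api/v1/tasks/{task_id}"
    headers = {"Authorization": f"Bearer {api_key}"}
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        output = get(status_url, headers=headers, timeout=30).json()["output"]
        if output["task_status"] not in ("PENDING", "RUNNING"):
            # SUCCEEDED carries video_url; FAILED carries error details.
            return output
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout}s")
```

The `get` parameter exists so the network call can be stubbed in tests; choose `interval` and `timeout` to suit the 1-5 minute generation times the skill documents.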

Key Parameters

| Parameter | Default | Description |
| --- | --- | --- |
| model | happyhorse-1.0-i2v | Model: i2v (image-to-video) or t2v (text-to-video) |
| input.prompt | required | Video description prompt |
| input.media | optional | Media array (required for image-to-video) |
| media[].type | first_frame | first_frame / last_frame / driving_audio / first_clip |
| parameters.resolution | 720P | 480P / 720P / 1080P |
| parameters.ratio | 16:9 | 16:9 / 9:16 / 1:1 |
| parameters.duration | 10 | 10 s or 15 s (user-selectable) |

⚠️ Key Caveats

  1. Async mode is required: X-DashScope-Async: enable
  2. Image-to-video uses the input.media array; type must be first_frame / last_frame / driving_audio / first_clip
  3. type: "image" raises an error: use first_frame instead
  4. Images must be HTTP/HTTPS URLs; local paths are not supported
  5. Generation time: about 1-5 minutes (roughly 1-3 minutes for a 10 s video, 3-5 minutes for 15 s)
  6. Text-to-video uses happyhorse-1.0-t2v and does not need input.media
  7. Duration choice: before generating each scene, ask the user to choose 10 s or 15 s; the default is 10 s
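The media-type, URL, and model caveats above can be checked before submitting a request. A hypothetical `check_payload` helper (not part of the skill; the messages are my own wording) that flags the most common mistakes:

```python
ALLOWED_MEDIA_TYPES = {"first_frame", "last_frame", "driving_audio", "first_clip"}

def check_payload(payload):
    """Return a list of problems that would violate the caveats above."""
    problems = []
    for item in payload.get("input", {}).get("media", []):
        if item.get("type") not in ALLOWED_MEDIA_TYPES:
            # Caveat 3: type "image" is rejected; first_frame is required.
            problems.append(f"invalid media type: {item.get('type')!r}")
        url = item.get("url", "")
        if not url.startswith(("http://", "https://")):
            # Caveat 4: only HTTP/HTTPS URLs, no local paths.
            problems.append(f"media url must be an HTTP(S) URL, got {url!r}")
    if payload.get("model", "").endswith("-t2v") and payload.get("input", {}).get("media"):
        # Caveat 6: text-to-video takes no media array.
        problems.append("t2v models should not include input.media")
    if payload.get("parameters", {}).get("duration") not in (None, 10, 15):
        problems.append("duration must be 10 or 15 seconds")
    return problems
```

Running such a check locally turns a failed asynchronous task (discovered minutes later) into an immediate error message.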

📁 File Management

Project directory

/root/.openclaw/workspace/happyhorse-video-projects/
└── video_20260428_140000/
    ├── project.json
    ├── references/
    ├── videos/
    └── final_video.mp4
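The layout above can be created with a few lines of pathlib. A sketch only: `create_project` and the timestamp format are assumptions inferred from the tree shown, and the base path is parameterized (the skill itself writes under /root/.openclaw/workspace/happyhorse-video-projects/):

```python
from datetime import datetime
from pathlib import Path

def create_project(base="happyhorse-video-projects"):
    """Create a timestamped project directory matching the layout above."""
    root = Path(base) / f"video_{datetime.now():%Y%m%d_%H%M%S}"
    for sub in ("references", "videos"):
        (root / sub).mkdir(parents=True, exist_ok=True)
    # Empty project metadata; the skill presumably fills this in later.
    (root / "project.json").write_text("{}", encoding="utf-8")
    return root
```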

🚀 Python Module Usage

from happyhorse_video_creator import HappyHorseCreator

creator = HappyHorseCreator()

# Image-to-video
success, video_path = creator.generate_video(
    prompt="The camera slowly pushes in as sunlight falls on a coffee cup",
    image_url="http://example.com/coffee.jpg",
    duration=10  # 10 s or 15 s
)

# First + last frame video
success, video_path = creator.generate_video(
    prompt="Transition from day to night",
    image_url="http://example.com/day.jpg",
    end_frame_url="http://example.com/night.jpg",
    duration=10
)

# Text-to-video
success, video_path = creator.generate_video(
    prompt="A kitten playing on the grass",
    duration=10
)

✅ Test Log

Image-to-video test (2026-04-28 14:46)

Text-to-video test (2026-04-28 14:48)

  • Model: happyhorse-1.0-t2v
  • Input: "一只小猫在草地上玩耍" ("a kitten playing on the grass")
  • Result: ✅ success (3.4 MB, 720P, 5 s, 16:9)
  • Elapsed: about 83 s

🔄 Changelog

v1.1 (2026-04-28)

  • ✅ Default duration changed from 15 s to 10 s
  • ✅ Users can choose the duration: 10 s or 15 s
  • ✅ Ask for the duration before generating each scene

v1.0 (2026-04-28)

  • ✅ Initial release
  • ✅ Image-to-video (first/last-frame modes)
  • ✅ Text-to-video

Version: v1.1
Created: 2026-04-28
Updated: 2026-04-28 (v1.1: default duration changed to 10 s; users can choose 10 s/15 s)
Author: 卡妹 🌸
Platform: Alibaba Cloud Bailian (DashScope)
