Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Text Spoken Script

v1.0.4

This skill is used to guide the AI in generating short video spoken scripts with high contrast, strong resonance, a sense of story, and personal IP attributes.

by dlazy (@dlazyai)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for dlazyai/text-spoken-script.

Prompt preview: Install & Setup
Install the skill "Text Spoken Script" (dlazyai/text-spoken-script) from ClawHub.
Skill page: https://clawhub.ai/dlazyai/text-spoken-script
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: npm, npx
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install text-spoken-script

ClawHub CLI


npx clawhub@latest install text-spoken-script
Security Scan
Capability signals
Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Benign
OpenClaw: Suspicious (high confidence)
Purpose & Capability
The skill's stated purpose is generating short spoken scripts (text). However, the SKILL.md and metadata tightly integrate a separate dLazy CLI for image/audio generation (api.dlazy.com, oss.dlazy.com) and instruct the agent to use terminal commands to call generation models. Requiring an external multimedia CLI and uploading local files is disproportionate for a text/script-only skill unless the user explicitly needs image/audio rendering. The registry-level requirements list only npm/npx, while SKILL.md assumes a 'dlazy' binary (installed via npm), creating a capability mismatch.
Instruction Scope
The SKILL.md contains strong, prescriptive runtime instructions that expand the agent's role beyond generating text: it explicitly tells the agent that it can execute terminal commands, and instructs it to run dlazy model commands synchronously, to upload local media to dLazy's storage, and to follow a strict stepwise interactive flow. It also forbids saving prompts to files and chaining commands. These instructions cause the agent to transmit user-provided prompts, and potentially local files, to external endpoints, which is beyond the minimal scope of text script generation and could lead to inadvertent data exposure.
Install Mechanism
There is no install spec in the registry record, but the SKILL.md metadata recommends installing @dlazy/cli@1.0.6 from npm (or using npx). Installing a pinned npm package is a standard mechanism (traceable to npm/GitHub) and less risky than arbitrary downloads, but the mismatch between 'no install spec' and the embedded install instruction is an inconsistency that should be resolved. Review the GitHub repo and package source before installing.
Credentials
The registry lists no required environment variables, yet the instructions require a dLazy API key (dlazy auth set or DLAZY_API_KEY) and reference a config file (~/.dlazy/config.json). The skill will upload prompts and local files to dLazy when invoked, so requiring an API key is expected — but the omission from declared requirements is an inconsistency and means users may not realize the skill needs credentials and will send content to an external service.
Persistence & Privilege
The skill does not request 'always: true' and does not appear to modify other skills or system settings. However it instructs the agent to execute local terminal commands that will invoke a networked CLI using the user's API key/config — this increases the blast radius if the agent is allowed to invoke skills autonomously. Because autonomous invocation is the platform default, pay attention to the combination of autonomous execution + external API access.
What to consider before installing
  1. This skill will send your prompts and any local media you provide to dLazy's servers (api.dlazy.com / oss.dlazy.com). If those prompts or files contain sensitive data, they will be uploaded.
  2. The SKILL.md expects a dLazy API key (or a config file at ~/.dlazy/config.json), but the registry does not declare this; expect to provide DLAZY_API_KEY or run 'dlazy auth set'.
  3. The skill prescribes running the dlazy CLI (npm package @dlazy/cli@1.0.6); review the GitHub repo and npm package source, and consider using npx instead of a global install.
  4. The instructions require the agent to run terminal commands synchronously; if you allow autonomous agent actions, the agent could execute those commands without further explicit consent.

If you only want text/script generation, with no media upload or remote inference, consider declining or sandboxing this skill. If you proceed, inspect the CLI source, confirm its license, use least-privilege API keys (rotate or revoke as needed), and avoid supplying sensitive local files or secrets in prompts.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Bins: npm, npx
Latest: vk976hbw3x5txe0wq3hm745wxqd85mr2y
Downloads: 534
Stars: 0
Versions: 5
Updated: 1d ago
Version: v1.0.4
License: MIT-0

Authentication

All requests require a dLazy API key, configured through the CLI:

dlazy auth set YOUR_API_KEY

The CLI saves the key in your user config directory (~/.dlazy/config.json on macOS/Linux, %USERPROFILE%\.dlazy\config.json on Windows), with file permissions restricted to your OS user account. You can also supply the key per-invocation via the DLAZY_API_KEY environment variable.
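
Where the key is only needed once (for example in a CI job or on a shared machine), the per-invocation environment variable avoids writing anything to ~/.dlazy/config.json. A minimal sketch, using `sh -c 'printenv DLAZY_API_KEY'` as a stand-in for a real `dlazy` call so nothing contacts the network:

```shell
#!/bin/sh
# Sketch: scope the key to a single invocation instead of persisting it.
# `sh -c 'printenv DLAZY_API_KEY'` stands in for an actual `dlazy` command.

# The key exists only inside this one command's environment:
DLAZY_API_KEY="example-key" sh -c 'printenv DLAZY_API_KEY'

# Afterwards it is not set in the surrounding shell:
printenv DLAZY_API_KEY >/dev/null 2>&1 || echo "DLAZY_API_KEY not set"
```

The stored-config route (`dlazy auth set`) is more convenient for repeated local use; the environment-variable route keeps the key out of any file.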

Getting Your API Key

  1. Sign in or create an account at dlazy.com
  2. Go to dlazy.com/dashboard/organization/api-key
  3. Copy the key shown in the API Key section

Each key is scoped to your dLazy organization and can be rotated or revoked at any time from the same dashboard.

About & Provenance

You can install on demand without persisting a global binary by running:

npx @dlazy/cli@1.0.6 <command>

Or, if you prefer a global install, the skill's metadata.clawdbot.install field declares the exact pinned version (npm install -g @dlazy/cli@1.0.6). Review the GitHub source before installing.

How It Works

This skill is a thin client over the dLazy hosted API. When you invoke it:

  • Prompts and parameters you provide are sent to the dLazy API endpoint (api.dlazy.com) for inference.
  • Any local file paths you pass to image / video / audio fields are uploaded to dLazy's media storage (oss.dlazy.com) so the model can read them — the same flow as any cloud-based generation API.
  • Generated output URLs returned by the API are hosted on oss.dlazy.com.

This is the standard SaaS pattern; the skill itself does not access network or filesystem resources beyond what the dLazy CLI already handles.

Short Video Spoken Script Generation (Text Spoken Script)


This skill is used to guide the AI in generating short video spoken scripts with high contrast, strong resonance, a sense of story, and personal IP attributes. All generated scripts must strictly follow the 7-step structure below:

Core Creative Logic & 7-Step Structure

  1. Tag Contrast Hook

    • Goal: Open with a highly contrasting character tag or setting to instantly grab the audience's attention and pinpoint the core audience and their pain points.
    • Example: Sister Fang, who is still learning to make short videos at 70, wants to tell all mothers who have hit the "pause button" for their children: What you have paused is just your job, not your life.
  2. Create Suspense / Resonance

    • Goal: Introduce a dilemma, anxiety, or pain point commonly faced by the target audience, triggering a strong sense of empathy through specific situations.
    • Example: A couple of days ago, my daughter's best friend came over, and as we chatted, tears started welling up in her eyes. She said she had quit her job to stay home and support her two kids through school, and before she knew it, three years had passed without her setting foot in an office. Seeing her husband shoulder the family's expenses alone, she feels both heartbroken and anxious, yet she really can't let go of the kids.
  3. Unfold the Story (Visual Imagery)

    • Goal: Tell a specific event using detailed, visual language to portray the character's emotions (such as powerlessness, anxiety, unwillingness), making the audience feel as if they are there.
    • Example: She rubbed her hands and told me: "Aunt Fang, I feel like I'm about to be eliminated by society. Besides cooking and cleaning, I don't know anything anymore." In that look, there was anxiety, unwillingness, and a deep sense of powerlessness. I understand this feeling all too well.
  4. Deliver Core Viewpoint / Counter-Intuition

    • Goal: Provide a core viewpoint that breaks conventional thinking, hitting the essence of the pain point and offering an enlightening conclusion.
    • Example: I told her, "Child, remember one sentence: Society never eliminates those who don't work, but those who don't learn."
  5. Deepen Story & Viewpoint (Combine Experience)

    • Goal: Further demonstrate the viewpoint by combining the speaker's own real experiences (e.g., learning across ages, overcoming difficulties). Propose actionable micro-actions so the audience feels "I can do this too."
    • Example:
      • Right now, managing your family and children well is your most important "project" at this stage. But within this project, you must leave a "learning port" for yourself. It's not about immediately getting a certificate, but not letting your curiosity die out or your learning ability rust.
      • When I was 50, I decided to work in Beijing. In the guesthouse, whenever I had free time, I copied English words and learned to use the latest management system at the time. Many people laughed at me: "What's the use of learning this at your age?" I didn't care. I just felt that learning a little makes me a little newer. Later, these "useless" things became my confidence in managing my first hotel.
      • Now at 70, I'm still learning video editing and how to read backend data. Is it hard? Really hard. But the act of learning itself is telling the world: I'm still in the game, and I can still keep up.
      • When you pick up and drop off your kids every day, can you listen to an industry podcast? While doing housework, can you learn something interesting online? Even if you only invest half an hour a day, this half hour is charging you for your future "reboot." Your value lies not in whether you are on duty today, but in whether you still have the ability to be on duty tomorrow.
  6. Summarize and Elevate, Link Persona

    • Goal: Elevate the topic, returning to personal growth or a grander theme of life, while strengthening the speaker's personal IP image (e.g., a constantly growing guide).
    • Example: A woman's roles are multiple, and sometimes trade-offs have to be made. This period of being a full-time homemaker is not a "break" in your career; it might precisely be a "gas station" for you to settle, observe, and accumulate power. Use learning to maintain your connection with the world, and your anxiety will turn into a clear path.
  7. Punchline Ending

    • Goal: Conclude with a refined, powerful, philosophical, and highly spreadable punchline to leave a deep impression.
    • Example: The identity of a mother gives us a responsibility of love, not an excuse to stop growing. As long as you are still learning, the road will keep extending forward. The era cannot eliminate those who are always prepared.

Applicable Scenarios and Limitations

  • Suitable for short video spoken scripts, character story sharing, and IP viewpoint scripts.
  • The language should be as colloquial as possible, suitable for reading aloud, with rhythm and breathing space.
  • Avoid empty preaching; it must be supported by specific "people, events, and things."

Final Output Requirements

When the user invokes this skill and provides a basic persona, pain points, or topics, directly output script content conforming to the 7-step structure above. Each step can serve as a paragraph; during generation, keep or remove the step numbers according to the user's request (if unspecified, output a complete, well-paragraphed script directly).

Next Step Suggestions

Call the text-storyboard-script skill to generate a storyboard script.

🛠️ CRITICAL EXECUTION INSTRUCTIONS

You are an intelligent Agent capable of executing terminal commands!

[STRICTLY PROHIBITED BEHAVIORS]

  • PROHIBITED: Saving prompts to any file (e.g., txt, md).
  • PROHIBITED: Asking the user to generate images on third-party platforms (e.g., Midjourney).
  • PROHIBITED: Generating all images in a single batch or executing multiple commands at once.

[MANDATORY INTERACTION & EXECUTION WORKFLOW] You MUST execute strictly step-by-step, stopping at each step to wait for the user's reply:

  1. Step 1: Proactively Gather Requirements. When a user makes a request, DO NOT design or generate anything. Ask questions first (e.g., product features, target audience, number of images). You MUST wait for the user's reply.
  2. Step 2: Output Draft & Request Confirmation. Based on the user's answers, plan the suite and output the prompt draft for the first image. Ask the user: "Do you confirm this prompt? Can we start generating the first image?" You MUST wait for the user to answer "confirm".
  3. Step 3: Execute Terminal Command (Single). After confirmation, you MUST execute the command using the terminal (e.g., dlazy seedream-4.5 --prompt "..."). Execute only ONE generation command at a time. IMPORTANT: You MUST use synchronous commands. NEVER append & to the command, and NEVER use &&. You are running in Windows PowerShell!
  4. Step 4: Delivery & Loop. Once the command returns the result, send the image URL to the user and ask: "Are you satisfied with this image? Can we proceed to generate the next one?". Continue to the next step only after receiving confirmation.
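
The single-synchronous-command loop in steps 3 and 4 can be sketched as follows. `run_generation` is a hypothetical stand-in for the real call the skill prescribes (`dlazy seedream-4.5 --prompt "..."`); here it just echoes a fabricated result URL on the output host named in the How It Works section, so the control flow is visible without invoking any external service:

```shell
#!/bin/sh
# Sketch of the step 3/4 loop shape (assumptions: `run_generation` is a
# stand-in for `dlazy seedream-4.5 --prompt "..."`; the URL is fabricated).
run_generation() {
  # A real invocation would block until the CLI returns the result URL.
  echo "https://oss.dlazy.com/example-output.png"
}

# Exactly one synchronous command per confirmed prompt:
# no trailing `&`, no `&&` chaining.
url=$(run_generation "example confirmed prompt")
echo "Image ready for review: $url"
```

After printing the URL, the agent is expected to stop and wait for the user's confirmation before issuing the next single command.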
