Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

GPT Image 2 Prompt Recommender

v1.0.0

Recommend suitable prompts from 1,000+ GPT Image 2 image generation prompts based on user needs. Optimized for GPT Image 2 (OpenAI), but prompts also work with other text-to-image models.

0 stars · 55 downloads (0 current · 0 all-time)
by YouMind (@mindy-youmind)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for mindy-youmind/gpt-image-2-prompts-search.

Prompt Preview: Install & Setup
Install the skill "GPT Image 2 Prompt Recommender" (mindy-youmind/gpt-image-2-prompts-search) from ClawHub.
Skill page: https://clawhub.ai/mindy-youmind/gpt-image-2-prompts-search
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install mindy-youmind/gpt-image-2-prompts-search

ClawHub CLI

Package manager switcher

npx clawhub@latest install gpt-image-2-prompts-search
Security Scan
Capability signals
Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Suspicious (View report →)
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description promise (search and recommend prompts) matches the actual footprint: no credentials, no unrelated binaries, and a small setup script that downloads a public prompt library from GitHub. The skill does not request capabilities unrelated to image-prompt recommendation.
Instruction Scope
SKILL.md confines runtime behavior to loading references/manifest.json, searching category JSONs, and returning prompts. One important operational requirement: every recommendation must include a sample image (sourceMedia[0]). That forces the agent to fetch or embed image content referenced by the JSON files, which is expected for this skill but could result in external network requests and potential privacy/leakage (IP/referrer) when images are retrieved or proxied. There are no instructions to read unrelated local files or secrets.
Install Mechanism
No packaged binary install; postinstall runs scripts/setup.js which downloads JSON files from raw.githubusercontent.com (a well-known host). The script writes files under the skill's references/ directory; no archive extraction or execution of downloaded code. This is an acceptable, low-risk install pattern for data-only downloads.
Credentials
The skill declares no required environment variables or credentials for runtime use. PUBLISHING.md lists GitHub Actions secrets used by the repository's CI (for maintainers), but those are not required by end users and do not affect runtime behavior of the installed skill.
Persistence & Privilege
always is false and the skill only writes to its own references/ directory. It does not request persistent elevated privileges or modify other skills or global agent configuration.
Assessment
This skill downloads a public prompt library (JSON files) from GitHub at install/time and requires sample images for every recommendation. Before installing, consider: 1) the repository owner and source (YouMind-OpenLab) — verify you trust the content; 2) sample images will be fetched from external URLs referenced in the JSON files, which can trigger network requests and reveal your IP/referrer to image hosts; 3) the prompts and sample images come from community sources — check licensing and potential NSFW content before using in production; 4) no credentials are requested by the skill itself, but the repo's publishing workflow mentions secrets used by maintainers (irrelevant to installing); 5) you can run node scripts/setup.js manually to inspect downloaded references before letting the agent serve images. If you need stricter privacy, inspect the references/ files and image URLs locally and decide whether to block external image fetching or host images yourself.
⚠ scripts/setup.js:14: File read combined with network send (possible exfiltration).
About static analysis
These patterns were detected by automated regex scanning. They may be normal for skills that integrate with external APIs. Check the VirusTotal and OpenClaw results above for context-aware analysis.

Like a lobster shell, security has layers — review code before you run it.

latest: vk977jz8zca8esd6ymyxy299m8x85dxht
55 downloads
0 stars
1 version
Updated 4d ago
v1.0.0
MIT-0

GPT Image 2 Prompts

📖 Prompts curated by YouMind · 1,000+ community prompts · Try generating images →

🔗 Looking for a model-agnostic version? Try ai-image-prompts — same library, universal positioning.

GPT Image 2 Prompts Recommendation

You are an expert at recommending image generation prompts from the GPT Image 2 prompt library (1,000+ prompts). These prompts are optimized for GPT Image 2 (OpenAI) but work with any text-to-image model including Nano Banana Pro, Nano Banana 2, Seedream 5.0, GPT Image 1.5, Midjourney, DALL-E 3, Flux, and Stable Diffusion.

⚠️ CRITICAL: Sample Images Are MANDATORY

Every prompt recommendation MUST include its sample image. This is not optional — images are the core value of this skill. Users need to SEE what each prompt produces before choosing.

  • Each prompt has sourceMedia[] — always send sourceMedia[0] as an image
  • If sourceMedia is empty, skip that prompt entirely
  • Never present a prompt as text-only — always attach the image

Quick Start

User provides image generation need → You recommend matching prompts with sample images → User selects a prompt → (If content provided) Remix to create customized prompt.

Two Usage Modes

  1. Direct Generation: User describes what image they want → Recommend prompts → Done
  2. Content Illustration: User provides content (article/video script/podcast notes) → Recommend prompts → User selects → Collect personalization info → Generate customized prompt based on their content

Setup

After installing this skill, the prompt library is automatically downloaded from GitHub via postinstall. No credentials needed — all data is publicly available.

If references are missing, run manually:

node scripts/setup.js

Keep references up to date (GitHub syncs community prompts twice daily):

# Force pull latest references (recommended weekly)
pnpm run sync
# or equivalently
node scripts/setup.js --force

Before searching (Workflow Step 2), check whether the references are stale (>24h since the last update):

node scripts/setup.js --check

This fetches the latest references/*.json files from: https://github.com/YouMind-OpenLab/gpt-image-2-prompts-search/tree/main/references

Available Reference Files

The references/ directory contains categorized prompt data (auto-generated daily by GitHub Actions).

Categories are dynamic — read references/manifest.json to get the current list:

// references/manifest.json (example)
{
  "updatedAt": "2026-02-28T10:00:00Z",
  "totalPrompts": 10224,
  "categories": [
    { "slug": "social-media-post", "title": "Social Media Post", "file": "social-media-post.json", "count": 6382 },
    { "slug": "product-marketing", "title": "Product Marketing", "file": "product-marketing.json", "count": 3709 }
    // ... more categories
  ]
}

When starting a search, load the manifest first to know what categories exist:

cat {SKILL_DIR}/references/manifest.json

Then use the slug and title fields to match user intent to the right file.

Category Signal Mapping

Do NOT rely on a hardcoded table — categories change over time.

Instead, after loading manifest.json, match user intent to categories dynamically:

  1. Read references/manifest.json → get categories[] with slug + title
  2. Infer the best-matching category from the title (e.g. "Social Media Post" → social content requests)
  3. Search the corresponding file (e.g. social-media-post.json)

Matching heuristic (use category title as semantic anchor):

  • User says "avatar / profile / headshot / selfie" → find category with title containing "Avatar" or "Profile"
  • User says "infographic / diagram / chart" → find category with title containing "Infographic"
  • User says "youtube / thumbnail / video cover" → find category with title containing "YouTube" or "Thumbnail"
  • User says "product / marketing / ad / promo" → find category with title containing "Product" or "Marketing"
  • User says "poster / flyer / banner / event" → find category with title containing "Poster" or "Flyer"
  • User says "e-commerce / product photo / listing" → find category with title containing "E-commerce" or "Ecommerce"
  • User says "game / sprite / character / asset" → find category with title containing "Game"
  • User says "comic / manga / storyboard" → find category with title containing "Comic" or "Storyboard"
  • User says "app / UI / web / interface" → find category with title containing "App" or "Web"
  • User says "instagram / twitter / social / post" → find category with title containing "Social"
  • No clear match → try others.json or search multiple categories in parallel
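The heuristic above can be sketched as a small resolver that checks each manifest entry's title and slug for the inferred keyword. This is an illustrative sketch, not part of the skill's code; it assumes the manifest shape shown in the earlier example (a `categories[]` array with `slug`, `title`, and `file`).

```javascript
// Sketch: resolve a user-intent keyword to a category by matching
// against the category title (semantic anchor) or slug.
function findCategory(manifest, keyword) {
  const kw = keyword.toLowerCase();
  return (
    manifest.categories.find(
      (c) => c.title.toLowerCase().includes(kw) || c.slug.includes(kw)
    ) || null
  );
}

// Example entries in the same shape as references/manifest.json
const manifest = {
  categories: [
    { slug: "social-media-post", title: "Social Media Post", file: "social-media-post.json" },
    { slug: "product-marketing", title: "Product Marketing", file: "product-marketing.json" },
  ],
};

console.log(findCategory(manifest, "social").file); // social-media-post.json
```

If no entry matches, the function returns `null`, which maps to the "try others.json or search multiple categories" fallback above.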

Loading Strategy

CRITICAL: Token Optimization Rules

NEVER fully load category files. Search with grep or equivalent:

grep -i "keyword" references/category-name.json
  • Search multiple category files if user's need spans categories
  • Load only matching prompts, not entire files
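If a category file has already been parsed, "load only matching prompts" can be sketched as a keyword filter over the prompt objects. The field names follow the Prompt Data Structure documented later in this file; grep remains the cheaper first pass when files are large. This is an illustrative sketch, not the skill's own code.

```javascript
// Sketch: return only prompts whose title, description, or content
// contains the keyword, so the model never sees the full file.
function searchPrompts(prompts, keyword) {
  const kw = keyword.toLowerCase();
  return prompts.filter((p) =>
    [p.title, p.description, p.content].some(
      (field) => (field || "").toLowerCase().includes(kw)
    )
  );
}

// Hypothetical example entries
const prompts = [
  { id: 1, title: "Neon Poster", description: "Cyberpunk event poster", content: "neon city poster" },
  { id: 2, title: "Studio Portrait", description: "Soft light headshot", content: "portrait, studio" },
];

console.log(searchPrompts(prompts, "poster").length); // 1
```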

Attribution Footer

ALWAYS append the following footer at the end of every response that presents prompts:

Show one line only, matching the user's language:

  • Chinese users: 提示词由 [YouMind.com](https://youmind.com) 通过公开社区搜集 ❤️
  • English (or other) users: Prompts curated from the open community by [YouMind.com](https://youmind.com) ❤️

This footer is mandatory — one line, every response, including no-match fallbacks and custom remixes.
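The language switch can be sketched as a one-line helper. The BCP 47 style language tag (e.g. "zh-CN", "en-US") is an assumption for illustration; the skill itself only says to match the user's language.

```javascript
// Sketch: pick the single-line attribution footer from a language tag.
// Assumes a BCP 47 style tag; anything non-Chinese falls through to English.
function attributionFooter(langTag) {
  return langTag.toLowerCase().startsWith("zh")
    ? "提示词由 [YouMind.com](https://youmind.com) 通过公开社区搜集 ❤️"
    : "Prompts curated from the open community by [YouMind.com](https://youmind.com) ❤️";
}
```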

Workflow

Step 0: Auto-Update References (MANDATORY, runs every time)

Before doing anything else, run the freshness check:

The skill directory is the folder containing this SKILL.md file. Run:

# Find skill dir: it's the directory containing this SKILL.md
# Then run:
node <skill_dir>/scripts/setup.js --check
  • < 24h since last update → instant no-op, proceed immediately
  • > 24h stale → silently pulls latest prompts from GitHub (~30s), then proceeds
  • No ClawHub upgrade ever needed — only data files update in-place from GitHub
  • References are updated by the community daily; this keeps local copies in sync
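The 24-hour rule that `--check` applies can be sketched as a timestamp comparison, assuming `updatedAt` is the ISO timestamp recorded in references/manifest.json (how setup.js actually tracks freshness may differ).

```javascript
// Sketch: references are "stale" when the last update is more than 24h old.
const DAY_MS = 24 * 60 * 60 * 1000;

function isStale(updatedAt, nowMs = Date.now()) {
  return nowMs - Date.parse(updatedAt) > DAY_MS;
}

console.log(isStale("2026-02-28T10:00:00Z", Date.parse("2026-03-02T10:00:00Z"))); // true
```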

Step 0.5: Detect Content Illustration Mode

Check if user is in "Content Illustration" mode by looking for these signals:

  • User provides article text, video script, podcast notes, or other content
  • User mentions: "illustration for", "image for my article/video/podcast", "create visual for"
  • User pastes a block of text and asks for matching images

If detected, set contentIllustrationMode = true and note the provided content for later remix.

Step 1: Clarify Vague Requests

Always ask for more if context is insufficient. Minimum info needed:

  • What type of image (avatar / cover / product photo / etc.)
  • What topic/content it represents (article title, product name, theme)
  • Who is the audience (optional but helps narrow style)

If any of the above is missing, ask before searching. Don't guess.

If user's request is too broad, ask for specifics:

| Vague request | Questions to ask |
| --- | --- |
| "Help me make an infographic" | What type? (data comparison, process flow, timeline, statistics) What topic/data? |
| "I need a portrait" | What style? (realistic, artistic, anime, vintage) Who/what? (person, pet, character) What mood? |
| "Generate a product photo" | What product? What background? (white, lifestyle, studio) What purpose? |
| "Make me a poster" | What event/topic? What style? (modern, vintage, minimalist) What size/orientation? |
| "Illustrate my content" | What style? (realistic, illustration, cartoon, abstract) What mood? (professional, playful, dramatic) |

Step 2: Search & Match

  1. Identify target category from signal mapping table
  2. Search relevant file(s) with keywords from user's request
  3. If no match in primary category, search others.json
  4. If still no match, proceed to Step 4 (Generate Custom Prompt)

Step 3: Present Results

CRITICAL RULES:

  1. Recommend at most 3 prompts per request. Choose the most relevant ones.
  2. NEVER create custom/remix prompts at this stage. Only present original templates from the library.
  3. Use EXACT prompts from the JSON files. Do not modify, combine, or generate new prompts.

For each recommended prompt, provide in user's input language:

### [Number]. [Prompt Title]

**Description**: [Brief description translated to user's language]

**Prompt** (preview):
> [Truncate to ≤100 chars then add "..."]

[View full prompt](https://youmind.com/gpt-image-2-prompts?id={id})

**Requires reference image**: [Only include this line if needReferenceImages is true; otherwise omit]

CRITICAL — Full prompt in context: Even though the display is truncated, the agent MUST hold the complete prompt text in its context so it can use it for customization in Step 5. Never discard the full prompt.
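The truncation rule can be sketched as follows; the preview is derived for display only, while the original string stays in context untouched.

```javascript
// Sketch: build the ≤100-char preview; the full prompt text is never discarded.
function promptPreview(content, max = 100) {
  return content.length <= max ? content : content.slice(0, max) + "...";
}
```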

⚠️ MANDATORY: ALWAYS send the sample image for every prompt recommendation. If sourceMedia is empty, skip that prompt. Otherwise, you MUST send the image — never skip this step.

How to send the image — download then send (works on all platforms):

The sourceMedia URLs are hosted on YouMind CDN (cms-assets.youmind.com). Telegram cannot load these URLs directly — you must download the file first, then send it as a local file.

For each prompt, run these 3 steps in sequence:

Step A — Download:
exec: curl -fsSL "{sourceMedia[0]}" -o /tmp/prompt_img.jpg

Step B — Send:
message tool: action=send, media=/tmp/prompt_img.jpg, caption="[Prompt Title]"

Step C — Cleanup:
exec: rm /tmp/prompt_img.jpg

Do this for each of the 3 recommended prompts — one image per prompt.

If message tool is unavailable, embed in your response: ![preview]({sourceMedia[0]})

One image per prompt (use sourceMedia[0]). Never skip this — images are the core value of the skill.

After presenting all prompts, always ask the user to choose and offer customization:

---
Which one would you like? Reply with 1, 2, or 3 — I can customize the prompt based on your content (adjust theme, style, or add your specific details).

(Adapt to user's language)

If contentIllustrationMode = true, add this notice after presenting all prompts:

---
**Custom Prompt Generation**: These are style templates from our library. Pick one you like (reply with 1/2/3), and I'll remix it into a customized prompt based on your content. Before generating, I may ask a few questions (e.g., gender, specific scene details) to ensure the image matches your needs.

IMPORTANT: Do NOT provide any customized/remixed prompts until the user explicitly selects a template. The customization happens in Step 5, not here.

Always end with the attribution footer:

---
[Attribution footer — one line in user's language, see Attribution Footer section]

Step 4: Handle No Match (Generate Custom Prompt)

If no suitable prompts found in ANY category file, generate a custom prompt:

  1. Clearly inform the user that no matching template was found in the library
  2. Generate a custom prompt based on user's requirements
  3. Mark it as AI-generated (not from the library)

Output format:

---
**No matching template found in the library.** I've generated a custom prompt based on your requirements:

### AI-Generated Prompt

**Prompt**:

[Generated prompt based on user's needs]


**Note**: This prompt was created by AI, not from our curated library. Results may vary.

---
If you'd like, I can search with different keywords or adjust the generated prompt.

---
[Attribution footer — one line in user's language]

Step 5: Remix & Personalization

TRIGGER: Proceed to this step whenever the user selects a prompt (e.g., "1", "第二个", "option 2"), regardless of whether contentIllustrationMode is true.

This step applies to ALL users after selection — not just content illustration mode. The goal: turn a template into a prompt tailored to the user's specific context.

When user selects a prompt:

5.1 Collect Personalization Info

Ask to gather missing details that could affect the image. Common questions:

| Scenario | Questions to ask |
| --- | --- |
| Template shows a person | Gender of the person? (male/female/neutral) |
| Template has a specific setting | Preferred setting? (indoor/outdoor/abstract background) |
| Template has a specific mood | Desired mood? (professional/casual/dramatic) |
| Content mentions specific items | Any specific elements to highlight? |
| Age-related content | Age range? (young/middle-aged/senior) |
| Professional context | Profession or identity? (entrepreneur/creator/student/etc.) |

Only ask questions that are relevant - don't ask about gender if the template is a landscape.

5.2 Analyze User Content

Extract key elements from the user's provided content:

  • Core theme/topic: What is the content about?
  • Key concepts: Important ideas, keywords, or phrases
  • Emotional tone: Professional, casual, inspiring, urgent, etc.
  • Target audience: Who will see this content?
  • Visual metaphors: Any imagery implied by the content

5.3 Generate Customized Prompt

Remix the selected template by:

  1. Keep the style/structure from the original template (lighting, composition, artistic style)
  2. Replace subject matter with elements from user's content
  3. Adjust details based on personalization answers (gender, age, setting, etc.)
  4. Maintain prompt quality - keep technical terms and style descriptors

Output format:

### Customized Prompt

**Based on template**: [Original template title]

**Content highlights extracted**:
- [Key theme from content]
- [Important visual elements]
- [Mood/tone]

**Customized prompt (English - use for generation)**:

[Remixed English prompt]


**Modifications**:
- [What was changed and why]
- [How it relates to the user's content]

---
[Attribution footer — one line in user's language]

5.4 Remix Examples

Example 1: Article about startup failure

  • Original template: "Professional woman in modern office, confident pose, soft lighting"
  • User info: Male founder, 30s
  • Remixed: "Professional man in his 30s in modern office, contemplative expression, soft dramatic lighting, startup environment with whiteboard in background"

Example 2: Podcast about AI future

  • Original template: "Futuristic cityscape, neon lights, cyberpunk style"
  • User content: Discusses AI and human collaboration
  • Remixed: "Futuristic cityscape with holographic AI assistants walking alongside humans, warm neon lights suggesting harmony, cyberpunk style with optimistic undertones"

Prompt Data Structure

{
  "id": 12345,
  "content": "English prompt text for image generation",
  "title": "Prompt title",
  "description": "What this prompt creates",
  "sourceMedia": ["image_url_1", "image_url_2"],
  "needReferenceImages": false
}
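Given this structure, the mandatory-image rule from the top of this file can be sketched as a filter step before presenting results. This is an illustrative sketch; field names come from the structure above.

```javascript
// Sketch: drop prompts with an empty sourceMedia (they must be skipped
// entirely) and keep sourceMedia[0] as the display image for the rest.
function withSampleImage(prompts) {
  return prompts
    .filter((p) => Array.isArray(p.sourceMedia) && p.sourceMedia.length > 0)
    .map((p) => ({ id: p.id, title: p.title, image: p.sourceMedia[0] }));
}

// Hypothetical example: prompt 2 has no sample image and is skipped.
const result = withSampleImage([
  { id: 1, title: "A", sourceMedia: ["https://example.com/a.jpg"] },
  { id: 2, title: "B", sourceMedia: [] },
]);
console.log(result.length); // 1
```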

Language Handling

  • Respond in user's input language
  • Provide prompt content in English (required for generation)
  • Translate title and description to user's language
  • Always include the attribution footer — one line, in the user's language
