AI Image Generator & Editor — Nanobanana, GPT Image, ComfyUI
v1.0.24. Generate images from text with multi-provider routing — supports Nanobanana 2, Seedream 5.0, GPT Image, Midjourney V7 (photorealistic), Midjourney Niji 7 (anime), …
Creative Toolkit
Generate professional AI images through a unified interface that routes across multiple providers. Search curated prompts, enhance ideas into production-ready descriptions, and manage local ComfyUI workflows — all from a single MCP server.
Quick Start
Add the MCP server to your mcporter config (~/.config/mcporter/config.json):
```json
{
  "mcpServers": {
    "creative-toolkit": {
      "command": "npx",
      "args": ["-y", "meigen@1.2.8"]
    }
  }
}
```
Free tools (search, enhance, inspire) work immediately — no API key needed:
mcporter call creative-toolkit.search_gallery query="cyberpunk"
mcporter call creative-toolkit.enhance_prompt brief="a cat in space" style="realistic"
To unlock image generation, configure one of these providers:
| Provider | Config | What you need |
|---|---|---|
| MeiGen Cloud | MEIGEN_API_TOKEN | Token from meigen.ai (avatar → Settings → API Keys) |
| Local ComfyUI | comfyuiUrl | A running ComfyUI instance — no external API needed |
| Any OpenAI-compatible API | openaiApiKey + openaiBaseUrl + openaiModel | Your own key from Together AI, Fireworks AI, etc. |
Set credentials in ~/.clawdbot/.env, ~/.config/meigen/config.json, or add an "env" block to the mcporter config above. See references/providers.md for details.
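For example, credentials can be set directly in the mcporter config via an "env" block — a sketch of the same config shown above, using the MEIGEN_API_TOKEN variable from the provider table (the token value is a placeholder):

```json
{
  "mcpServers": {
    "creative-toolkit": {
      "command": "npx",
      "args": ["-y", "meigen@1.2.8"],
      "env": {
        "MEIGEN_API_TOKEN": "<your-token-here>"
      }
    }
  }
}
```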
Available Tools
Free — no API key required
| Tool | What it does |
|---|---|
search_gallery | Semantic search across 1,300+ AI image prompts. Supports category filtering and curated browsing. Returns prompt text, thumbnails, and metadata. |
get_inspiration | Get the full prompt and high-res images for any gallery entry. Use after search_gallery to get copyable prompts. |
enhance_prompt | Expand a brief idea into a detailed, style-aware prompt with lighting, composition, and material directions. Supports realistic, anime, and illustration styles. |
list_models | List all available models across configured providers with capabilities and supported features. |
Requires configured provider
| Tool | What it does |
|---|---|
generate_image | Generate an image from a text prompt. Routes to the best available provider. Supports aspect ratio, seed, and reference images. |
generate_image (with local paths) | Pass local file paths directly in referenceImages — images are auto-compressed locally (max 2MB, 2048px) and prepared for the selected provider. ComfyUI handles local files entirely within the local workflow. |
comfyui_workflow | List, view, import, modify, and delete ComfyUI workflow templates. Adjust steps, CFG scale, sampler, and checkpoint without editing JSON. |
manage_preferences | Save and load user preferences (default style, aspect ratio, style notes, favorite prompts). |
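The local-file preprocessing described for generate_image above (max 2MB, 2048px) can be pictured as a pure resize computation. This is a hedged illustration only, assuming the server caps the longest side at 2048px without upscaling — fit_within is a hypothetical helper, not part of the MCP API:

```python
def fit_within(width: int, height: int, max_side: int = 2048) -> tuple[int, int]:
    """Return target dimensions with the longest side capped at max_side,
    preserving aspect ratio. Images already within bounds are untouched."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height
    scale = max_side / longest
    return round(width * scale), round(height * scale)
```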
Important Rules
Never describe generated images
You cannot see generated images. After generation, only present the exact data from the tool response:
**Direction 1: Modern Minimal**
- Image URL: https://images.meigen.art/...
- Saved to: ~/Pictures/meigen/2026-02-08_xxxx.jpg
Do NOT write creative commentary about what the image "looks like".
Never specify model or provider
Do NOT pass model or provider to generate_image unless the user explicitly asks. The server auto-selects the best available provider and model.
Midjourney V7 vs Niji 7
Both cost 15 credits, take ~60s, accept 1 reference image, and return 4 candidate images per generation. Advanced params (stylize/chaos/weird/raw/iw/sw/sv) run with fixed server-side defaults and cannot be tuned from MCP — the only exception is sref, settable via --sref <code> at the end of the prompt (Midjourney style codes only, e.g. 3799554500; no URLs or local paths). They differ in content focus and prompt enhancement style:
- Midjourney V7 (`model: "midjourney-v7"`) — general / photorealistic. Use for product photography, portraits, landscapes, cinematic and editorial shots. When enhancing, use `style: 'realistic'` (the default).
- Midjourney Niji 7 (`model: "midjourney-niji7"`) — anime / illustration ONLY. Do NOT use for photorealistic, product, or non-anime content — use Nanobanana 2 or Seedream instead. The server auto-appends `anime illustration style` if your prompt lacks anime keywords. When enhancing, ALWAYS pass `style: 'anime'` to `enhance_prompt` — the default `realistic` produces prompts poorly suited for anime models.
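As a rough illustration of the sref rule above, assuming the code is appended verbatim at the end of the prompt — with_sref is a hypothetical helper, not part of the server:

```python
def with_sref(prompt: str, sref_code: str) -> str:
    """Append a Midjourney style-reference code to a prompt.
    Only numeric style codes are accepted (no URLs or local paths)."""
    if not sref_code.isdigit():
        raise ValueError("sref must be a numeric Midjourney style code")
    return f"{prompt} --sref {sref_code}"
```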
Always confirm before generating multiple images
When the user wants multiple variations, present options first and ask which direction(s) to try. Include an "all of the above" option. Never auto-generate all variants without user confirmation.
Workflow Modes
Mode 1: Single Image
User wants one image. Write a prompt (or call enhance_prompt if the description is brief), generate, present URL + path.
Mode 2: Prompt Enhancement + Generation
For brief ideas (under ~30 words, lacking visual details), enhance first:
1. enhance_prompt brief="futuristic city" style="realistic"
-> Returns detailed prompt with camera lens, lighting, atmospheric effects
2. generate_image prompt="<enhanced prompt>"
-> Omit aspectRatio to let MeiGen auto-infer (recommended). Pass an explicit
value like aspectRatio="16:9" only when the user asked for that ratio.
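The "enhance first" heuristic above can be sketched as a word-count check — a minimal illustration assuming the ~30-word threshold stated in this mode; needs_enhancement is hypothetical:

```python
def needs_enhancement(brief: str, word_threshold: int = 30) -> bool:
    """Briefs under ~30 words usually lack visual detail and benefit
    from enhance_prompt before generation."""
    return len(brief.split()) < word_threshold
```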
Mode 3: Parallel Generation (2+ images)
User needs multiple variations — different directions, styles, or concepts.
- Plan directions, present as a table
- Ask user which direction(s) to try
- Write distinct prompts for each — don't just tweak one word
- Generate selected directions (max 4 parallel for API providers, 1 at a time for ComfyUI)
- Present URLs + paths
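The concurrency cap in the steps above can be sketched with a semaphore — a hypothetical client-side pattern, not server behavior; generate is a stand-in for the real tool call:

```python
import asyncio

async def generate_all(prompts, generate, max_parallel=4):
    """Run generations concurrently, capped at the provider limit
    (4 for API providers, 1 at a time for ComfyUI)."""
    sem = asyncio.Semaphore(max_parallel)

    async def one(prompt):
        async with sem:
            return await generate(prompt)

    return await asyncio.gather(*(one(p) for p in prompts))
```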
Mode 4: Multi-Step Creative (base + extensions)
User wants a base design plus derivatives (e.g., "design a logo and make mockups").
- Plan 3-5 directions, ask user which to try
- Generate selected direction(s)
- Present results, ask user to approve or try another
- Plan extensions using the approved Image URL as referenceImages
- Generate extensions
Never jump from plan to generating everything at once.
Mode 5: Edit/Modify Existing Image
User provides an image and asks for changes (add text, change background, etc.).
- Pass the image (URL or local path) as referenceImages, then generate with a short, literal prompt describing ONLY the edit
- The reference image carries all visual context — do NOT re-describe the original image
- Example prompt: "Add the text 'meigen.ai' at the bottom of this image"
Mode 6: Inspiration Search
1. search_gallery query="dreamy portrait with soft light"
-> Finds semantically similar prompts with thumbnails
2. get_inspiration id="<entry_id>"
-> Get full prompt text — copy and modify for your own generation
Mode 7: Reference Image Generation
Use an existing image to guide visual style. Pass URLs or local file paths directly to referenceImages.
generate_image prompt="coffee mug mockup with this logo" referenceImages=["~/Desktop/my-logo.png"]
-> Local files are auto-compressed (max 2MB, 2048px) and prepared for the selected provider
Reference image sources: gallery URLs, previous generation URLs, or local file paths. All providers accept local paths — they are automatically handled.
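One way to picture how reference images are routed, assuming anything that is not an http(s) URL is treated as a local path — is_local_reference is hypothetical, not part of the server:

```python
def is_local_reference(ref: str) -> bool:
    """Local file paths get compressed client-side; URLs pass through."""
    return not ref.startswith(("http://", "https://"))
```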
Mode 8: ComfyUI Workflows
1. comfyui_workflow action="list" -> See saved workflows
2. comfyui_workflow action="view" name="txt2img" -> See adjustable parameters
3. comfyui_workflow action="modify" name="txt2img" modifications={"steps": 30}
4. generate_image prompt="..." workflow="txt2img" -> Generate
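The modify action in step 3 can be pictured as a guarded merge over a workflow's adjustable parameters — a hedged sketch, not the actual server implementation; apply_modifications and the parameter names are illustrative:

```python
def apply_modifications(params: dict, modifications: dict) -> dict:
    """Merge user modifications into workflow parameters, rejecting
    keys the workflow does not expose (e.g. a typo like 'step')."""
    unknown = set(modifications) - set(params)
    if unknown:
        raise KeyError(f"unknown parameters: {sorted(unknown)}")
    return {**params, **modifications}
```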
Alternative Providers
You can use your own OpenAI-compatible API or a local ComfyUI instance instead of — or alongside — the default MeiGen provider. See references/providers.md for detailed configuration, model pricing, and provider comparison.
Troubleshooting
See references/troubleshooting.md for common issues, solutions, and security & privacy details.
