ClawHub Security flagged this skill as suspicious. Review the scan results before using.

free quota text to image

v1.0.0

Generate images from text with a free-quota-first multi-provider workflow. Use this skill when a user asks for text-to-image generation that needs provider r...

Security Scan

VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)

Purpose & Capability
The code and SKILL.md match the stated purpose (free-quota-first routing, token pooling, prompt optimization, multi-provider support). However, the registry metadata lists no required environment variables or primary credential, while the code and example config clearly expect provider tokens (T2I_PEINTURE_* placeholders) and an optional openai_compatible API URL. Omitting required env declarations from the registry is an inconsistency that could mislead users about secret-handling needs.
Instruction Scope
SKILL.md documents the workflow and CLI, but the runtime instructions and code do more than simple image generation: prompt optimization and translation send user prompt text to external services (text.pollinations.ai and provider chat endpoints), the loader reads local .env files (run_text2img.load_dotenv checks multiple locations), and the tool persists exhausted-token state to a file under ~/.codex/skills/.state/. These actions are within the skill's image-generation scope, but they have privacy and secret-exposure implications that SKILL.md does not fully foreground.
Install Mechanism
No automatic install spec in the registry; the README/SKILL.md instructs installing minimal Python dependencies (requests, PyYAML) from scripts/requirements.txt. There are no downloads from untrusted URLs or archives in the install path; install risk is low and expected for a Python CLI tool.
Credentials
Although registry metadata reports no required env vars, the example config and openclaw-integration.md expect multiple provider tokens (T2I_PEINTURE_HF_TOKENS, T2I_PEINTURE_GITEE_TOKENS, T2I_PEINTURE_MODELSCOPE_TOKENS, T2I_PEINTURE_A4F_TOKENS, T2I_PEINTURE_OPENAI_COMPATIBLE_TOKENS) and an optional API URL. The skill will load .env files from working/parent dirs and substitute ${VAR} placeholders, meaning it can read secrets from the environment or .env files despite not declaring them — this mismatch is a proportionality/privacy concern.
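The ${VAR} substitution described above can be sketched as follows. This is a hedged illustration only: the regex, the `expand_placeholders` name, and the fallback behavior for unknown variables are assumptions, not the skill's actual loader.

```python
import os
import re

# Hypothetical sketch of ${VAR} placeholder expansion; the real loader in
# run_text2img.py may behave differently (e.g. for unknown variables).
_PLACEHOLDER = re.compile(r"\$\{([A-Z0-9_]+)\}")

def expand_placeholders(value: str) -> str:
    """Replace ${VAR} with os.environ[VAR]; leave unknown vars untouched."""
    def repl(match: re.Match) -> str:
        return os.environ.get(match.group(1), match.group(0))
    return _PLACEHOLDER.sub(repl, value)

os.environ["T2I_PEINTURE_HF_TOKENS"] = "hf_example_token"
print(expand_placeholders("tokens: ${T2I_PEINTURE_HF_TOKENS}"))
```

Because expansion reads directly from the process environment, any secret reachable via `os.environ` (including values pulled in from a nearby .env file) can end up in the resolved config.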
Persistence & Privilege
The skill persists per-provider exhausted-token state to a user-writable path (default ~/.codex/skills/.state/free-quota-image-skill/token_status.json) and will create parent directories. It does not request always:true or modify other skills. Persisting token status is functionally reasonable for token rotation, but users should be aware of on-disk state and its location.
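A minimal sketch of what persisting exhausted-token state to the path above might look like; the JSON field names (`exhausted`, `at`) and the `mark_token_exhausted` helper are hypothetical, only the file location comes from the scan findings.

```python
import json
import time
from pathlib import Path

# Default on-disk state location documented by the scan; parent directories
# are created on demand, mirroring the skill's observed behavior.
STATE_PATH = Path.home() / ".codex/skills/.state/free-quota-image-skill/token_status.json"

def mark_token_exhausted(provider: str, token_index: int,
                         path: Path = STATE_PATH) -> dict:
    """Record that a provider token hit its quota, keyed by token index."""
    path.parent.mkdir(parents=True, exist_ok=True)
    state = json.loads(path.read_text()) if path.exists() else {}
    state.setdefault(provider, {})[str(token_index)] = {
        "exhausted": True,
        "at": int(time.time()),  # hypothetical timestamp field
    }
    path.write_text(json.dumps(state, indent=2))
    return state
```

Note that a file like this reveals which providers and token slots were used, which is the privacy point raised above.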
What to consider before installing
This skill implements the advertised multi-provider image pipeline, but be cautious before installing or running it:

  • Secrets: the skill expects provider tokens (HF/Gitee/ModelScope/A4F/OpenAI-compatible) via config or environment variables (T2I_PEINTURE_*), but the registry entry does not list them. Do not place production secrets in repository files; use OpenClaw env injection or a vault.
  • .env reading: when run, the CLI will try to load .env files from the current and parent directories. Run it from a safe folder (not a repo root containing other secrets), or remove .env files you don't want read.
  • Network & privacy: prompt optimization and translation send your prompt text to third-party endpoints (e.g., https://text.pollinations.ai/openai and provider chat endpoints). If prompts contain sensitive info, disable prompt optimization/translation (CLI flags --no-optimize-prompt, --no-auto-translate) or review/host the services yourself.
  • Persistent state: exhausted-token metadata is written to ~/.codex/skills/.state/free-quota-image-skill/token_status.json. If you share your machine, be aware this file can reveal which tokens were used or exhausted.
  • Review configuration: inspect assets/config.example.yaml, references/provider-endpoints.md, and references/prompt-optimization-policy.md to confirm endpoints and behaviors before using.

Provide only tokens you are willing to expose to the configured endpoints, and prefer injected environment configs over committed files. If you want higher assurance, ask the skill author to: (1) declare required env vars in registry metadata, (2) make automatic .env loading opt-in, and (3) document exactly which external endpoints receive prompt text and under what conditions.


latest: vk97e921q71z480apxt5ka74ep9824p0f
361 downloads · 0 stars · 1 version
Updated 7h ago
v1.0.0
MIT-0

Free Quota Image Skill

Overview

Use this skill to run a provider-agnostic text-to-image pipeline with free-quota-first routing, token rotation, and prompt enhancement.

Workflow

  1. Load config from {baseDir}/assets/config.example.yaml or user-provided config.
  2. Resolve provider order (--provider auto follows routing.provider_order).
  3. Resolve model candidates per provider (requested -> z-image-turbo -> provider default).
  4. Prepare prompt for each attempt:
    • optionally auto-translate for target models
    • optionally optimize prompt with provider text model
  5. Execute generation request.
  6. On quota/auth failures, rotate token; if exhausted, move to next provider.
  7. Repeat the generation flow when --count > 1, and rotate provider/token start position per image to spread load.
  8. Return stable JSON output fields or direct URL output.
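The fallback behavior in steps 5–7 can be sketched as a nested loop over providers and tokens. `QuotaError`, the token table, and the `generate()` callable are hypothetical stand-ins; the real pipeline lives in scripts/run_text2img.py.

```python
# Minimal sketch of free-quota-first routing with token rotation.
class QuotaError(Exception):
    """Raised by a provider call on a quota or auth failure."""

def generate_with_fallback(prompt, providers, tokens, generate):
    """Try providers in order; rotate tokens on quota/auth failures."""
    fallback_chain = []
    for provider in providers:
        for token in tokens.get(provider, []):
            try:
                # generate() is a stand-in for the provider-specific request
                return generate(provider, token, prompt), fallback_chain
            except QuotaError:
                fallback_chain.append(provider)  # record the failed attempt
    raise RuntimeError(f"all providers exhausted: {fallback_chain}")
```

The recorded `fallback_chain` mirrors the field of the same name in the JSON output contract below: one entry per failed attempt, in order.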

Commands

Install dependencies:

python -m pip install -r {baseDir}/scripts/requirements.txt

Run generation:

python {baseDir}/scripts/run_text2img.py --prompt "cinematic rainy tokyo alley" --json

Run with explicit provider/model:

python {baseDir}/scripts/run_text2img.py --prompt "a fox astronaut" --provider gitee --model flux-2 --json

Save image locally:

python {baseDir}/scripts/run_text2img.py --prompt "retro sci-fi city" --output ./out.png

Generate multiple images in one run:

python {baseDir}/scripts/run_text2img.py --prompt "anime passport portrait" --count 4 --json

CLI contract

Use {baseDir}/scripts/run_text2img.py with the fixed contract:

  • --prompt (required)
  • --provider (auto|huggingface|gitee|modelscope|a4f|openai_compatible, default auto)
  • --model (default z-image-turbo)
  • --aspect-ratio (default 1:1)
  • --seed (optional int)
  • --steps (optional int)
  • --guidance-scale (optional float)
  • --enable-hd (flag)
  • --optimize-prompt / --no-optimize-prompt (default on)
  • --auto-translate / --no-auto-translate (default off)
  • --config (default {baseDir}/assets/config.example.yaml)
  • --output (optional output file path)
  • --count (number of images in one run, default 1)
  • --json (structured output)

Output contract

When --json is used, output these fields on success:

  • id
  • url
  • provider
  • model
  • prompt_original
  • prompt_final
  • seed
  • steps
  • guidance_scale
  • aspect_ratio
  • fallback_chain
  • elapsed_ms

On failure, output structured error fields:

  • error_type
  • error
  • fallback_chain

When --count > 1, JSON output contains:

  • count
  • images (array of standard success payloads)
  • elapsed_ms
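If you script against the CLI, the success contract above can be checked with a small wrapper. `validate_payload` and `run_generation` are illustrative helpers, not part of the skill, and assume {baseDir} has been resolved to a real path.

```python
import json
import subprocess

# The documented success fields from the output contract.
EXPECTED_FIELDS = {
    "id", "url", "provider", "model", "prompt_original", "prompt_final",
    "seed", "steps", "guidance_scale", "aspect_ratio", "fallback_chain",
    "elapsed_ms",
}

def validate_payload(payload: dict) -> dict:
    """Raise if a success payload is missing any documented field."""
    missing = EXPECTED_FIELDS - payload.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return payload

def run_generation(base_dir: str, prompt: str) -> dict:
    proc = subprocess.run(
        ["python", f"{base_dir}/scripts/run_text2img.py",
         "--prompt", prompt, "--json"],
        capture_output=True, text=True, check=True,
    )
    return validate_payload(json.loads(proc.stdout))
```

With `--count > 1` you would instead iterate over the `images` array and validate each element the same way.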

References

Read only what is needed:

  • Provider API wiring: references/provider-endpoints.md
  • Model coverage and fallback: references/model-matrix.md
  • Token rotation and date rules: references/token-rotation-policy.md
  • Prompt optimization pipeline: references/prompt-optimization-policy.md
  • OpenClaw setup details: references/openclaw-integration.md

Scope boundaries

Keep this skill focused on text-to-image core only.

Do not add image editing, video generation, or cloud storage workflows in this skill.
