Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Blueai Models

v1.1.0

Configure and manage AI models from BlueAI unified proxy service for OpenClaw. Use when: (1) adding new models to openclaw.json, (2) choosing the right model...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for dr-xiaoming/blueai-models.

Prompt preview (Install & Setup):
Install the skill "Blueai Models" (dr-xiaoming/blueai-models) from ClawHub.
Skill page: https://clawhub.ai/dr-xiaoming/blueai-models
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install blueai-models

ClawHub CLI


npx clawhub@latest install blueai-models
Security Scan

VirusTotal: Benign
OpenClaw: Suspicious (high confidence)
Purpose & Capability
Name/description match the included scripts and docs: the skill adds models to openclaw.json and tests connectivity against a BlueAI relay. Modifying ~/.openclaw/openclaw.json is expected for this purpose. However, some implementation choices (see below) are surprising for a 'test all configured models' operation.
Instruction Scope
SKILL.md instructs running the included scripts to add and test models, which is consistent. However, the test script's --all-configured mode does not use each provider's configured baseUrl; it uses a single base URL argument (defaulting to the BlueAI relay). That means testing 'all configured' models will contact the relay for every model and will present whichever API key the script finds, not necessarily the key intended for that provider. This deviates from reasonable scope and can cause unintended key disclosure.
Install Mechanism
Instruction-only with lightweight Python helper scripts; no installer, downloads, or extracted archives. No suspicious install behavior.
Credentials
Metadata declares no required env vars, but test_model.py searches OPENAI_API_KEY and BLUEAI_API_KEY and add_model/test scripts read/write ~/.openclaw/openclaw.json to find/store apiKey values. The code can take a found API key and send it to the default relay — this is proportionate if the key is a BlueAI key, but dangerous if it's an unrelated provider key (e.g., a personal OpenAI key) because the script may transmit it to the relay without making that explicit.
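The key-lookup behavior described above can be sketched roughly as follows. This is a hypothetical reconstruction for illustration, not the script's actual code: the function name `resolve_api_key`, the exact search order, and the config shape are assumptions based on the scan notes.

```python
import json
import os
from pathlib import Path

def resolve_api_key(config_path=None):
    """Illustrative sketch of the lookup order described above: env vars
    first, then any apiKey found in openclaw.json. The first match wins,
    regardless of which provider the key actually belongs to."""
    for var in ("OPENAI_API_KEY", "BLUEAI_API_KEY"):
        key = os.environ.get(var)
        if key:
            return key  # may be an unrelated provider's key
    if config_path is None:
        config_path = Path.home() / ".openclaw" / "openclaw.json"
    try:
        config = json.loads(Path(config_path).read_text())
    except (OSError, json.JSONDecodeError):
        return None
    for provider in config.get("providers", {}).values():
        if isinstance(provider, dict) and provider.get("apiKey"):
            return provider["apiKey"]
    return None
```

Because the first match wins, an OPENAI_API_KEY exported for unrelated tooling can end up being transmitted to the relay.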
Persistence & Privilege
add_model.py writes to the user's ~/.openclaw/openclaw.json to add providers/models (expected behavior). always:false and no global config modifications beyond the openclaw.json file. This is normal for a configuration helper, but users should be aware it will modify their config file.
What to consider before installing
This package largely does what it promises (editing ~/.openclaw/openclaw.json and calling a BlueAI relay), but there are two important gotchas you should understand before running it:

  1. test_model.py will look for API keys in environment variables (OPENAI_API_KEY, BLUEAI_API_KEY) and inside your ~/.openclaw/openclaw.json, and will use the first key it finds. If you run python3 scripts/test_model.py --all-configured (the undocumented risky case), the script will call a single base URL (default: https://bmc-llm-relay.bluemediagroup.cn/v1) for every configured model rather than each provider's configured endpoint. This can cause your unrelated provider keys (for example, your OpenAI key) to be sent to the BlueAI relay. Do NOT run --all-configured unless you understand which key will be used.
  2. The skill metadata declares no required env vars, but the code does read env vars and openclaw.json for apiKey values.

Inspect the scripts yourself (they are small and included) before running. Prefer running targeted commands (test a single model and pass --api-key explicitly), or use a throwaway/test API key when exercising the scripts. Back up ~/.openclaw/openclaw.json before running add_model.py.

If you want to proceed: review the two scripts, run test_model.py for single models with --api-key (or ensure your config only contains keys intended for the BlueAI relay), and avoid --all-configured unless you update the script to use each provider's configured baseUrl and to avoid using unrelated keys.


Latest: vk970cbs0yvsxrq8zk7rdbfwrvd8437yn
125 downloads · 0 stars · 2 versions
Updated 3w ago
v1.1.0 · MIT-0

BlueAI Models for OpenClaw

Quick Start

Add a model to OpenClaw:

python3 scripts/add_model.py gemini-2.5-flash --alias flash
python3 scripts/add_model.py claude-sonnet-4-6 --alias sonnet
openclaw gateway restart

Test connectivity:

python3 scripts/test_model.py gemini-2.5-flash
python3 scripts/test_model.py --all-configured

List available models:

python3 scripts/add_model.py --list

Image Generation

Gemini image models generate images via Chat Completions (/v1/chat/completions), not the Images API. Send a prompt as a normal message; the model returns base64-encoded images in Markdown.

# Add image models
python3 scripts/add_model.py gemini-3.1-flash-image-preview
python3 scripts/add_model.py gemini-3-pro-image-preview
openclaw gateway restart

# Test image generation
python3 scripts/test_model.py gemini-3.1-flash-image-preview --image-gen
python3 scripts/test_model.py gemini-3-pro-image-preview --image-gen --save ./test-output
| Model | Speed | Quality | Edit Support | Best For |
|---|---|---|---|---|
| gemini-3.1-flash-image-preview | ⚡ Fast | Good | | Quick prototypes, batch |
| gemini-3-pro-image-preview | Medium | ⭐ Best | | High-quality creative |
| gemini-2.5-flash-image | ⚡ Fast | Good | | Image editing |

For detailed usage, prompt tips, Python examples, and edit workflows: read references/image-generation.md.
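Since images arrive base64-encoded inside the Markdown reply rather than through the Images API, client code has to extract and decode them. A minimal sketch is below; the data-URI image syntax and the regex are assumptions about the response shape, not verified against the relay's actual output:

```python
import base64
import re

def extract_images(markdown: str) -> list[bytes]:
    """Pull base64 payloads out of Markdown image tags of the assumed form
    ![alt](data:image/png;base64,...) and decode them to raw bytes."""
    pattern = r"!\[[^\]]*\]\(data:image/[a-z]+;base64,([A-Za-z0-9+/=]+)\)"
    return [base64.b64decode(b64) for b64 in re.findall(pattern, markdown)]

# Example: a reply embedding one small payload ("aGVsbG8=" is "hello")
reply = "Here is your image:\n![img](data:image/png;base64,aGVsbG8=)"
images = extract_images(reply)
```

If the relay emits a different Markdown shape, adjust the pattern accordingly; references/image-generation.md is the authoritative source.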

Endpoints

| Type | Base URL | Note |
|---|---|---|
| Claude (Anthropic) | https://bmc-llm-relay.bluemediagroup.cn | No /v1 |
| Everything else (OpenAI) | https://bmc-llm-relay.bluemediagroup.cn/v1 | With /v1 |

Same API key works for all models.
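The /v1 split above can be captured in a small helper. This is an illustrative sketch only; keying off the "claude" name prefix is an assumption, and a real implementation should decide based on the configured api type instead:

```python
RELAY = "https://bmc-llm-relay.bluemediagroup.cn"

def base_url(model: str) -> str:
    """Claude models use the Anthropic Messages endpoint (no /v1);
    everything else uses the OpenAI-compatible endpoint (with /v1).
    The name-prefix check here is a simplification for illustration."""
    if model.startswith("claude"):
        return RELAY
    return RELAY + "/v1"
```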

Model Selection Quick Guide

| Need | Model | Why |
|---|---|---|
| Cheapest + good | gemini-2.5-flash | $0.15/M in, 1M context |
| Best Chinese | DeepSeek-V3.2 | Top Chinese quality, cheap |
| Vision + cheap | gpt-4o-mini or gemini-2.5-flash | Image input, low cost |
| Strong reasoning | o4-mini or DeepSeek-R1 | CoT reasoning |
| Best overall | claude-opus-4-6-v1 | 128K output, Agent coding |
| Balanced | claude-sonnet-4-6 | 1/5 Opus price, most tasks |
| Code specialist | gpt-5.2-codex | 128K output, code focused |
| Ultra-long context | xai.grok-4-fast-non-reasoning | 2M tokens |
| Image gen (fast) | gemini-3.1-flash-image-preview | Chat-based, cheap |
| Image gen (quality) | gemini-3-pro-image-preview | Best Gemini quality |
| Image edit | gemini-2.5-flash-image | Send image + edit instruction |

References

  • Full model catalog: Read references/model-catalog.md for all 100+ models with specs
  • OpenClaw config guide: Read references/openclaw-config.md for JSON structure and examples
  • Model selection decision tree: Read references/model-selection.md for task-based recommendations
  • Image generation guide: Read references/image-generation.md for Gemini image gen usage, prompts, and code examples

Key Rules

  1. Claude models use api: "anthropic-messages", baseUrl without /v1
  2. All other models use api: "openai-completions", baseUrl with /v1
  3. DeepSeek/Qwen text models: set input: ["text"] only (no image)
  4. MiniMax: must use OpenAI endpoint, does not support Claude endpoint
  5. gemini-3-pro-preview deprecated 2026-03-26 → use gemini-3.1-pro-preview
  6. Gemini image models use chat completions, not images API — output is base64 in Markdown
  7. After config changes: openclaw gateway restart
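Putting rules 1-3 together, a pair of provider entries in ~/.openclaw/openclaw.json might look like the sketch below. Field names beyond api, baseUrl, and input are assumptions for illustration; see references/openclaw-config.md for the authoritative structure.

```json
{
  "providers": {
    "blueai-claude": {
      "api": "anthropic-messages",
      "baseUrl": "https://bmc-llm-relay.bluemediagroup.cn",
      "models": [{ "id": "claude-sonnet-4-6" }]
    },
    "blueai-openai": {
      "api": "openai-completions",
      "baseUrl": "https://bmc-llm-relay.bluemediagroup.cn/v1",
      "models": [{ "id": "DeepSeek-V3.2", "input": ["text"] }]
    }
  }
}
```

Remember to run openclaw gateway restart after editing the file.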
