Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

AI Common Sense

v0.1.1

Use when user mentions model names, versions, pricing, API IDs, "which model should I use", "what's the latest model", "model comparison", "API pricing", "wh...

by Futurize Rush (@futurizerush)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for futurizerush/ai-common-sense.

Prompt Preview: Install & Setup
Install the skill "AI Common Sense" (futurizerush/ai-common-sense) from ClawHub.
Skill page: https://clawhub.ai/futurizerush/ai-common-sense
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install ai-common-sense

ClawHub CLI


npx clawhub@latest install ai-common-sense
Security Scan
Capability signals
Crypto · Can make purchases · Requires OAuth token
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name/description match the contents: the files are a curated reference of model names, IDs, pricing, deprecations and verification guidance. No unexpected cloud or system-level capabilities are requested.
Instruction Scope
SKILL.md instructs agents to use WebSearch/WebFetch to verify stale entries and provides concrete verification commands (curl, npm/pip checks). That is in-scope for verifying model IDs and pricing. However, the instructions also include shell examples that use environment-variable placeholders (e.g., $OPENAI_API_KEY) and allow tools including Bash and Read, which expands the agent's ability to run commands or access files if those tools are enabled; the guidance itself does not explicitly limit those actions.
Install Mechanism
Instruction-only skill with no install spec and no code files. No archives or downloads, so nothing is written to disk by an installer — low install risk.
Credentials
The skill declares no required environment variables or credentials, yet the documentation and curl/SDK examples reference many provider API key environment variables (OPENAI_API_KEY, ANTHROPIC_API_KEY, MISTRAL_API_KEY, COHERE_API_KEY, GOOGLE_API_KEY, etc.). This is an inconsistency: either the skill should declare and justify its required secrets, or the instructions should avoid examples that could lead an agent to read local secrets. If the agent is permitted to run Bash/Read, those example commands could lead to use or exposure of local API keys.
Persistence & Privilege
always: false and no install hooks; the skill is user-invocable only and does not request persistent or system-wide configuration changes. Autonomous invocation is allowed by default, but it is not combined with other broad privileges here.
What to consider before installing
This skill appears to be a useful, instruction-only reference for model IDs and pricing, and it includes sensible advice to verify stale data via web search. Before enabling it:

  1. Be cautious about permitting Bash/Read for the agent: if you allow those tools, the agent could run the curl/npm commands shown and may read environment variables or files containing API keys.
  2. If you plan to let the skill verify provider APIs, supply only credentials you explicitly trust it to use; otherwise avoid giving it any provider keys.
  3. Prefer letting the skill use WebSearch/WebFetch to check public docs (no credentials required) rather than running authenticated API calls.
  4. For a stricter guarantee, ask the skill author to remove embedded curl examples that reference $ENV secrets, or to declare required env vars explicitly.

Overall: coherent and probably benign in intent, but the mismatch between examples that use API keys and the declared lack of required credentials is a real risk if the agent is allowed to execute shell commands or read environment variables.

Like a lobster shell, security has layers — review code before you run it.

latest: vk9713fxhq244tn3n1vfw6qgz8s84nk46
84 downloads
0 stars
2 versions
Updated 2w ago
v0.1.1
MIT-0

AI Common Sense: Stop Hallucinating Model Names

LLMs frequently hallucinate model names, versions, pricing, and API identifiers because their training data has a cutoff date. This skill provides a verified quick reference and teaches AI agents how to self-verify when the reference may be stale.

Why This Exists

| What LLMs Commonly Get Wrong | Example |
| --- | --- |
| Outdated flagship models | Saying "GPT-4o" when GPT-5.4 is current |
| Deprecated model IDs | Using claude-3-5-sonnet-20241022 (deprecated Jan 2026) |
| Wrong pricing | Quoting old rates that changed months ago |
| Phantom models | Referencing "GPT-4-turbo" or "Gemini Ultra" (deprecated/renamed) |
| Wrong API format | Using Authorization: Bearer for Anthropic (should be x-api-key) |
| Stale deprecation status | Not knowing DALL-E 3 is shutting down |
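The API-format mistake is worth a concrete sketch. OpenAI-style APIs authenticate with a Bearer token, while Anthropic uses an x-api-key header plus an anthropic-version header; the version date below is an example value, so check current provider docs before relying on it:

```python
# Sketch of the two auth-header conventions; values are illustrative.

def openai_headers(api_key: str) -> dict:
    # OpenAI-style APIs use a standard Bearer token.
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

def anthropic_headers(api_key: str) -> dict:
    # Anthropic uses x-api-key plus a required anthropic-version header.
    return {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",  # example version date
        "Content-Type": "application/json",
    }
```

Sending a Bearer header to Anthropic is rejected; the key goes in x-api-key instead.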

Quick Reference (Last verified: 2026-04-12)

Current Flagship Models

| Provider | Flagship | API ID | Input $/MTok | Output $/MTok | Released |
| --- | --- | --- | --- | --- | --- |
| OpenAI | GPT-5.4 | gpt-5.4 | $2.50 | $15.00 | 2026-03-17 |
| OpenAI | GPT-5.4 Mini | gpt-5.4-mini | $0.75 | $4.50 | 2026-03-17 |
| OpenAI | GPT-5.4 Nano | gpt-5.4-nano | $0.20 | $1.25 | 2026-03-17 |
| OpenAI | o3 (reasoning) | o3 | $2.00 | $8.00 | 2025-04 |
| Anthropic | Claude Opus 4.6 | claude-opus-4-6 | $5.00 | $25.00 | 2026-02-05 |
| Anthropic | Claude Sonnet 4.6 | claude-sonnet-4-6 | $3.00 | $15.00 | 2026-02-17 |
| Anthropic | Claude Haiku 4.5 | claude-haiku-4-5-20251001 | $1.00 | $5.00 | 2025-10 |
| Google | Gemini 3.1 Pro | gemini-3-1-pro-latest | $2.00 | $12.00 | 2026-02-19 |
| Google | Gemini 3 Flash | gemini-3-flash-latest | $0.50 | $3.00 | 2026-03 |
| Google | Gemini 2.5 Flash-Lite | gemini-2.5-flash-lite | $0.10 | $0.40 | 2025 |
| Meta | Llama 4 Maverick | meta-llama/llama-4-maverick | varies | varies | 2026-04-05 |
| Mistral | Small 4 | mistral-small-latest | $0.15 | $0.60 | 2026-03-16 |
| Mistral | Large 3 | mistral-large-latest | $2.00 | $6.00 | 2025-12 |
| DeepSeek | V3 | deepseek-chat | $0.32 | $0.89 | 2025 |
| DeepSeek | R1 (reasoning) | deepseek-reasoner | n/a | n/a | 2025 |
| Cohere | Command A Reasoning | command-a-reasoning | ~$6.25 | n/a | 2026 |
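Per-request cost follows directly from the per-MTok rates: tokens ÷ 1,000,000 × rate, summed over input and output. A minimal sketch, with prices copied from the table above (they may be stale, so re-verify before relying on them):

```python
# model id -> (input $/MTok, output $/MTok), copied from the reference table
PRICES = {
    "gpt-5.4": (2.50, 15.00),
    "claude-sonnet-4-6": (3.00, 15.00),
    "gemini-3-flash-latest": (0.50, 3.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for one request at the table's rates."""
    in_rate, out_rate = PRICES[model]
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# e.g. 10k input + 2k output on gpt-5.4:
# 10_000/1e6 * 2.50 + 2_000/1e6 * 15.00 = 0.025 + 0.030 = $0.055
```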

Recently Deprecated (Do NOT Use)

| Model | Provider | Deprecated | Replacement |
| --- | --- | --- | --- |
| GPT-4o | OpenAI | 2026-02-13 | GPT-5.4 |
| GPT-4.1 | OpenAI | 2026-02-16 | GPT-5.4 |
| DALL-E 2 & 3 | OpenAI | 2026-05-12 | gpt-image-1.5 |
| Claude 3.5 Sonnet | Anthropic | 2026-01-05 | Sonnet 4.6 |
| Claude Haiku 3.5 | Anthropic | 2026-02-19 | Haiku 4.5 |
| Claude Haiku 3 | Anthropic | 2026-04-20 | Haiku 4.5 |

Upcoming / Announced (NOT Yet Available)

| Model | Provider | Expected | Notes |
| --- | --- | --- | --- |
| GPT-5.5 "Spud" | OpenAI | 2026 Q2 | Pretraining done 2026-03-24. |
| Claude Mythos | Anthropic | Preview only | Invitation-only. Cybersecurity focus. |
| DeepSeek V4 | DeepSeek | 2026 | ~1T params, 1M context. |
| Grok 5 | xAI | 2026 Q2 | ~6T params. |
| Muse Spark | Meta | 2026-04 | Meta's first proprietary/closed-source model. |

How to Use This Reference

For AI Agents

  1. Before generating code with model IDs: Check this reference for the correct API ID.
  2. Before quoting pricing: Check this reference. Pricing changes frequently.
  3. Staleness check: If the "Last verified" date is older than 30 days, use WebSearch to verify before trusting these numbers.
  4. When uncertain: Say "based on my last verified reference from [date]" rather than stating as current fact.

Staleness Detection

IF (today - last_verified_date) > 30 days:
  → WebSearch "[provider] models pricing [current year]"
  → Cross-reference with this table
  → Note any discrepancies
  → Flag to user: "My reference is [N] days old, I verified via web search"

When NOT to Trust This Reference

  • Model pricing can change without notice
  • New models may launch between updates
  • Deprecation dates may shift
  • "Upcoming" models may be delayed or cancelled

Verification Commands

When you need to verify current model information, use these tools:

Web search queries (use WebSearch tool):

  • OpenAI models: site:platform.openai.com models
  • Anthropic models: site:docs.anthropic.com models
  • Google Gemini: site:ai.google.dev models
  • Pricing (any provider): [provider] API pricing [current year]
  • Specific model ID: "[exact-model-id]" API
  • Deprecation status: [provider] model deprecation [current year]

SDK Version Check

# OpenAI
npm info openai version
pip show openai | grep Version

# Anthropic
npm info @anthropic-ai/sdk version
pip show anthropic | grep Version

# Google
npm info @google/generative-ai version
pip show google-generativeai | grep Version

Cost Comparison (Budget → Premium)

Sorted by input cost per million tokens:

| Rank | Model | Provider | Input $/MTok | Best For |
| --- | --- | --- | --- | --- |
| 1 | Gemini 2.5 Flash-Lite | Google | $0.10 | Cheapest multimodal |
| 2 | Mistral Small 4 | Mistral | $0.15 | Cheap + reasoning + vision |
| 3 | GPT-5.4 Nano | OpenAI | $0.20 | Classification, extraction |
| 4 | DeepSeek V3 | DeepSeek | $0.32 | Coding, long context |
| 5 | Gemini 3 Flash | Google | $0.50 | Balanced Google option |
| 6 | GPT-5.4 Mini | OpenAI | $0.75 | OpenAI balanced |
| 7 | Claude Haiku 4.5 | Anthropic | $1.00 | Fast Anthropic option |
| 8 | Gemini 2.5 Pro | Google | $1.25 | Advanced Google |
| 9 | Gemini 3.1 Pro | Google | $2.00 | Frontier reasoning |
| 10 | Mistral Large 3 | Mistral | $2.00 | 675B MoE |
| 11 | o3 | OpenAI | $2.00 | Complex reasoning |
| 12 | GPT-5.4 | OpenAI | $2.50 | OpenAI flagship |
| 13 | Claude Sonnet 4.6 | Anthropic | $3.00 | Anthropic balanced |
| 14 | Claude Opus 4.6 | Anthropic | $5.00 | Most capable coding/agents |

Architecture Quick Facts

| Architecture | Models Using It | Why It Matters |
| --- | --- | --- |
| MoE (Mixture of Experts) | Mistral Large 3 (675B/41B), DeepSeek V3 (671B/37B), Llama 4 Maverick (17B/128 experts) | Massive total params but only a fraction active per token → cheaper inference. |
| Dense Transformer | GPT-5.4, Claude Opus 4.6, Gemini 3.1 Pro | All params active. Higher per-token cost but potentially more consistent. |
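The cost advantage of MoE is simple arithmetic: only the active parameters run per token, so inference cost tracks the active count, not the total. Illustrative numbers from the table:

```python
def active_fraction(total_b: float, active_b: float) -> float:
    """Fraction of parameters active per token (billions in, ratio out)."""
    return active_b / total_b

# Mistral Large 3: 41B of 675B active -> roughly 6% per token
# DeepSeek V3:     37B of 671B active -> roughly 5.5% per token
print(round(active_fraction(675, 41), 3))
print(round(active_fraction(671, 37), 3))
```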

Common Discount Mechanisms

| Mechanism | Discount | Available On |
| --- | --- | --- |
| Prompt Caching | 75-90% on cached input | OpenAI, Anthropic, Google |
| Batch API | 50% on all tokens | OpenAI, Anthropic, Google |
| Committed Use | Varies | Enterprise agreements |
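A hedged sketch of how those discounts apply. The 90% and 50% figures below are the upper ends of the ranges in the table; real terms vary by provider and plan:

```python
def cached_input_cost(base_cost: float, cached_share: float,
                      cache_discount: float = 0.90) -> float:
    """Input cost when `cached_share` of input tokens hit the prompt cache."""
    return base_cost * (1 - cached_share * cache_discount)

def batch_cost(base_cost: float, batch_discount: float = 0.50) -> float:
    """Cost via a batch API at a flat discount on all tokens."""
    return base_cost * (1 - batch_discount)

# $1.00 of input with 80% cache hits at a 90% cache discount:
# 1.00 * (1 - 0.8 * 0.9) = $0.28
```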

Per-Provider Deep Dives

For detailed model specs, deprecation timelines, cross-platform IDs, and API quick-start examples, see the references/ directory in the GitHub repo:

  • references/openai.md — Full OpenAI model catalog + audio/image models
  • references/anthropic.md — Cross-platform IDs (Bedrock, Vertex) + cache pricing
  • references/google.md — Gemini 3.x + 2.5 + specialized models
  • references/meta.md — Llama 4 MoE details + access methods
  • references/mistral.md — Full specialist model catalog (Devstral, Voxtral, OCR)
  • references/deepseek.md — V3 MoE details + V4 roadmap
  • references/xai.md — Grok versions + corporate context
  • references/cohere.md — Command A + open-source models (Transcribe, Tiny Aya)

How to Update This Reference

This reference gets stale. Here's how to help:

  1. Found an error? Open an Issue on GitHub with the correction and source URL.
  2. New model released? Submit a PR updating the relevant references/*.md file.
  3. Pricing changed? Submit a PR with the new price and a link to the official pricing page.

Every update must include:

  • The source URL (official docs preferred)
  • The date you verified the information
  • What changed and why

Tips

  • The more confident an LLM sounds about a model name, the more likely it's hallucinating from training data.
  • "I'm not sure which model is current — let me check" is always better than a confident wrong answer.
  • Model IDs are exact strings. gpt-5.4 works; GPT-5.4 or gpt5.4 may not.
  • Always test API calls with the actual model ID before deploying.
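Since model IDs are exact strings, a pre-flight check can catch casing slips before the API rejects them. This is a hypothetical helper; KNOWN_IDS is a made-up subset of the reference, not an official list:

```python
# Hypothetical subset of current IDs from this reference.
KNOWN_IDS = {"gpt-5.4", "claude-opus-4-6", "gemini-3-1-pro-latest"}

def validate_model_id(model_id: str) -> str:
    """Return model_id if known; raise with a hint for common slips."""
    if model_id in KNOWN_IDS:
        return model_id
    normalized = model_id.strip().lower()
    if normalized in KNOWN_IDS:
        raise ValueError(
            f"Did you mean '{normalized}'? Model IDs are exact, case-sensitive strings."
        )
    raise ValueError(f"Unknown model id '{model_id}'; verify against the reference.")
```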
