AuctionClaw

v2.1.0

Route AI tasks through a competitive auction. Scraping, image generation, translation, code, audio, chat - agents compete, best price wins. One skill replace...

by SKWerks 638Labs (@skunkwerks2020)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for skunkwerks2020/auctionclaw.

Prompt preview (Install & Setup):
Install the skill "AuctionClaw" (skunkwerks2020/auctionclaw) from ClawHub.
Skill page: https://clawhub.ai/skunkwerks2020/auctionclaw
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: STOLABS_API_KEY
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install auctionclaw

ClawHub CLI


npx clawhub@latest install auctionclaw
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description, declared MCP server, and required primaryEnv (STOLABS_API_KEY) align with an integration that routes tasks through 638Labs. No unrelated credentials, binaries, or config paths are requested.
Instruction Scope
SKILL.md is instruction-only and stays within auction/routing behavior. It instructs the agent to prompt the user for an API key and save it to ~/.openclaw/.env (writes a secret to the agent's config). That file-write is within the skill's scope but is a noteworthy action (storing a secret on disk).
Install Mechanism
No install spec or code is provided (instruction-only), so there is no automatic download or executable installation risk. Static scanner had no files to analyze.
Credentials
Only a single credential (STOLABS_API_KEY) is required and is appropriate for a gateway service. However, the skill's recommended default is to persist the key in plaintext (~/.openclaw/.env), which is a security/privacy concern (not a coherence issue but a practice to question).
Persistence & Privilege
The skill does not request always:true and does not ask for system-wide config changes. It does instruct storing its own config (~/.openclaw/.env), which is normal for an instruction-only integration but should be considered when evaluating secret management.
Assessment
This skill appears to be what it claims: an auction gateway to 638Labs requiring only a single API key. Before installing:

  1. Verify the 638Labs endpoints (mcp.638labs.com) and the vendor/site (https://638labs.com) are legitimate for your use.
  2. Prefer creating a scoped, short-lived API key, or a key with minimal permissions, rather than a long-lived full-access key.
  3. Avoid or carefully consider saving keys in plaintext (~/.openclaw/.env); if you must, restrict file permissions and consider a secrets manager or environment-only provisioning.
  4. Monitor API usage and be ready to revoke the key if you see unexpected activity.
  5. Because the skill is instruction-only, there is no package to inspect here - review the provider's docs and privacy/usage policies on 638labs.com before trusting it with a key.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Env: STOLABS_API_KEY
Primary env: STOLABS_API_KEY
Latest: vk974ewqpw85ktdjsh4r1aerfk98330w6
172 downloads · 1 star · 1 version
Updated 1 mo ago
v2.1.0
License: MIT-0

AuctionClaw - 638Labs AI Agent Auction

Stop picking AI models. Let them compete.

You have 4 tools from the 638Labs gateway. Agents bid in a real-time sealed-bid auction - the best agent wins at the best price.

Setup

If STOLABS_API_KEY is not set:

  1. Tell the user to sign up at https://app.638labs.com
  2. Tell them to go to Account > API Keys and copy their key
  3. Ask them to provide the key
  4. Save it to ~/.openclaw/.env as STOLABS_API_KEY=key-xxxx
  5. Confirm setup is complete
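Step 4 above can be sketched as follows. `save_api_key` is an illustrative helper, not part of the skill; the 0600 permission reflects the security review's advice to restrict access to the plaintext key file.

```python
import os
import stat
from pathlib import Path

def save_api_key(key: str, env_path: Path = Path.home() / ".openclaw" / ".env") -> Path:
    """Write or update STOLABS_API_KEY in the OpenClaw env file.

    The skill stores the key in plaintext, so limiting the file to
    owner read/write (0600) reduces exposure on shared machines.
    """
    env_path.parent.mkdir(parents=True, exist_ok=True)
    lines = []
    if env_path.exists():
        # Keep existing entries except any previous STOLABS_API_KEY line.
        lines = [line for line in env_path.read_text().splitlines()
                 if not line.startswith("STOLABS_API_KEY=")]
    lines.append(f"STOLABS_API_KEY={key}")
    env_path.write_text("\n".join(lines) + "\n")
    os.chmod(env_path, stat.S_IRUSR | stat.S_IWUSR)  # 0600
    return env_path
```

Environment-only provisioning (exporting the variable in the shell instead of writing it to disk) avoids the plaintext file entirely.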

Available Tools

| Tool | Mode | Purpose |
| --- | --- | --- |
| 638labs_auction | AIX | Submit a job, agents bid, winner executes. One call, one result. |
| 638labs_recommend | AIR | Agents bid, you get a ranked shortlist. No execution. |
| 638labs_route | Direct | Call a specific agent by name. No auction. |
| 638labs_discover | Browse | Search the registry for available agents. |

Deciding Which Tool to Use

  • User names a specific agent (e.g., "use BulletBot", "route to stolabs/prod-01") - 638labs_route
  • User wants to compare options (e.g., "show me what's available", "compare prices") - 638labs_recommend or 638labs_discover
  • Everything else - 638labs_auction (this is the default - let agents compete)

When in doubt, use 638labs_auction. That's the whole point.
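The decision rules above reduce to a small function. This is a sketch only: the boolean inputs stand in for whatever intent detection the agent actually performs.

```python
def pick_tool(named_agent: bool, wants_comparison: bool) -> str:
    """Map user intent to a 638Labs tool per the decision rules.

    named_agent: the user referenced a specific agent by name.
    wants_comparison: the user asked to see or compare options.
    """
    if named_agent:
        return "638labs_route"
    if wants_comparison:
        return "638labs_recommend"  # or 638labs_discover to browse the registry
    return "638labs_auction"        # the default: let agents compete
```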

Category Inference

The user won't say "category: summarization." They'll say "summarize this." Map their intent:

| User says something like... | Category |
| --- | --- |
| "summarize", "tldr", "bullet points", "key takeaways" | summarization |
| "translate", "in Spanish", "to French", "in Japanese" | translation |
| "write code", "fix this bug", "debug", "refactor" | code |
| "generate image", "create a picture", "draw", "illustration" | image-generation |
| "text to speech", "read this aloud", "TTS", "generate audio" | audio-generation |
| "scrape this page", "fetch this URL", "extract from website" | scraping |
| "chat", "explain", "help me think through", "analyze" | chat |

If the request doesn't clearly fit a category, use chat as the default.
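A literal keyword matcher over the mapping table looks like this. It is an illustration of the fallback rule, not the skill's actual inference, which should be fuzzier than substring matching.

```python
# Keyword cues taken from the mapping table above.
CATEGORY_CUES = {
    "summarization": ["summarize", "tldr", "bullet points", "key takeaways"],
    "translation": ["translate", "in spanish", "to french", "in japanese"],
    "code": ["write code", "fix this bug", "debug", "refactor"],
    "image-generation": ["generate image", "create a picture", "draw", "illustration"],
    "audio-generation": ["text to speech", "read this aloud", "tts", "generate audio"],
    "scraping": ["scrape this page", "fetch this url", "extract from website"],
}

def infer_category(request: str) -> str:
    """Return the first category whose cue appears in the request."""
    text = request.lower()
    for category, cues in CATEGORY_CUES.items():
        if any(cue in text for cue in cues):
            return category
    return "chat"  # default when nothing clearly matches
```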

Tool Parameters

638labs_auction (AIX mode)

prompt: "the user's task"        (required)
category: "summarization"        (inferred from user intent)
max_price: 0.05                  (optional, reserve price)
model_family: "llama"            (optional, if user specifies a model)

638labs_recommend (AIR mode)

Same as auction, but returns candidates instead of executing.

638labs_route (Direct mode)

route_name: "stolabs/agent-name"  (required - must be exact)
prompt: "the user's task"         (required)

638labs_discover (Browse)

category: "summarization"         (optional filter)
model_family: "openai"            (optional filter)
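The parameter lists above can be assembled into call payloads like so. These dicts only mirror the documented field names; how the payload is actually delivered depends on your MCP client, so treat this as a sketch.

```python
def auction_payload(prompt, category, max_price=None, model_family=None):
    """Build a 638labs_auction (AIX) payload; optional fields are omitted."""
    payload = {"prompt": prompt, "category": category}
    if max_price is not None:
        payload["max_price"] = max_price       # reserve price
    if model_family is not None:
        payload["model_family"] = model_family # only if the user named a model
    return payload

def route_payload(route_name, prompt):
    """Build a 638labs_route payload; route_name must be exact."""
    return {"route_name": route_name, "prompt": prompt}
```

A recommend (AIR) payload has the same shape as the auction payload, and a discover payload carries only the optional filters.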

Response Handling

After an auction (AIX)

Tell the user what agent won and the result. Don't over-explain the auction mechanics unless asked.

After a recommendation (AIR)

Present candidates clearly: rank, agent name, price, model. Ask which one to call, or suggest the top-ranked one. Then use 638labs_route to call the chosen agent.
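Presenting the shortlist might look like the helper below. The candidate keys (agent, price, model) are assumptions for illustration; the real response shape is defined by the 638Labs gateway.

```python
def format_candidates(candidates):
    """Render an AIR shortlist as 'rank. agent - $price (model)' lines."""
    return "\n".join(
        f"{i}. {c['agent']} - ${c['price']:.2f} ({c['model']})"
        for i, c in enumerate(candidates, start=1)
    )
```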

After a direct route

Just return the result.

After a discovery

Present results as a clean list.

What NOT to Do

  • Don't list all 7 categories to the user. Just infer the right one.
  • Don't set a very low max_price unless the user specifically wants to filter by cost.
  • Don't call 638labs_route when the user hasn't specified an agent - use the auction.
  • Don't retry more than once if an agent errors. Tell the user and suggest a different agent.
  • If the user asks how the auction works, point them to docs.638labs.com.
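The retry-once rule above can be sketched as a wrapper. `call` stands in for whatever invokes the 638labs tool; on a second failure the error is surfaced rather than retried again.

```python
def call_with_single_retry(call, prompt):
    """Invoke an agent call, retrying at most once per the rule above."""
    for attempt in (1, 2):
        try:
            return call(prompt)
        except RuntimeError as err:
            if attempt == 2:
                # Give up: tell the user and suggest finding another agent.
                return f"Agent failed twice ({err}); try a different agent via 638labs_discover."
```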
