WheelSpotter

v1.0.0

A wheel-spotting scout that finds reusable solutions before you build from scratch. Cost-controlled intelligent search with complexity-aware filtering, inten...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for garylooop/wheelspotter.

Prompt preview (Install & Setup):
Install the skill "WheelSpotter" (garylooop/wheelspotter) from ClawHub.
Skill page: https://clawhub.ai/garylooop/wheelspotter
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install wheelspotter

ClawHub CLI

Package manager switcher

npx clawhub@latest install wheelspotter
Security Scan
Capability signals
Requires OAuth token · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Benign (View report →)
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description match the implementation: the script queries GitHub, PyPI, npm, Maven, and crates.io to find reusable libraries/tools. Requested capabilities (complexity-aware filtering, platform selection, cost caps) are consistent with search behavior. Minor inconsistency: the SKILL.md/README instructs 'pip install -r requirements.txt' and references a requirements.txt file, but no requirements.txt appears in the manifest.
Instruction Scope
SKILL.md instructs the agent to perform parallel API calls and return actionable commands (e.g., 'pip install X') — all within the declared purpose. It asks for internet access and optionally a GitHub token (optional, increases rate limits). The docs mention 'result caching', 'vector memory', and 'feedback loops' (progressive improvement) but the manifest doesn't include configuration or storage paths for persistent memory; verify whether persistence is implemented before enabling long-term storage.
Install Mechanism
There is no install spec (instruction-only skill). The included Python script uses the 'requests' library (SKILL.md lists requests and pydantic as dependencies), which is reasonable. The missing requirements.txt referenced in documentation is an implementation gap but not an installation risk by itself.
Credentials
No required environment variables or credentials are declared. SKILL.md/README mention an optional GitHub token to increase API rate limits; that is proportionate to searching GitHub. There are no demands for unrelated secrets or privileged credentials.
Persistence & Privilege
always:false (default) and autonomous invocation is allowed (normal). The skill mentions caching and vector memory but the package manifest does not show storage/config paths or a DB client. Confirm whether the skill will persist search results or feedback and where those artifacts are kept before granting long-term use.
Assessment
This skill appears coherent: it searches public package registries and GitHub and returns recommended integrations. Before installing, check these items: (1) the README/SKILL.md reference requirements.txt but none is bundled — ask the author for the dependency file or inspect search.py to ensure you can satisfy dependencies safely; (2) the skill may perform many outbound API calls (internet access required); if you supply a GitHub token, it will be used for API requests — only provide a token with appropriate, limited scopes; (3) the docs mention caching/vector memory; verify whether the skill writes persistent data (where and with what permissions) if you care about privacy; (4) review the full scripts/search.py (and any omitted/truncated code) for any unexpected external endpoints or persistence logic before enabling the skill in an agent that runs autonomously.

Like a lobster shell, security has layers — review code before you run it.

Latest: vk97dbnby2qbac9sphyfwyz039585pvjh
42 downloads · 0 stars · 1 version
Updated 6h ago · v1.0.0 · MIT-0

WheelSpotter (v1.0)

🎯 WheelSpotter — Your wheel-spotting scout. Spots reusable solutions before you build from scratch.

Core principle: Solutions must be directly integrable, not flashy toys that can't actually be used.


When to Use

✅ Trigger Scenarios

Load this skill when the user expresses these intents:

| Pattern | Example |
| --- | --- |
| Looking for existing solutions | "Is there an existing PDF parsing library?" |
| Avoiding duplicate work | "I don't want to reinvent the wheel..." |
| Tech stack consultation | "What's a good Python data visualization library?" |
| Quick integration needs | "I need an OCR API I can use right away" |
| Pre-implementation research | "Implementing JWT auth—any existing solutions?" |
| Wheel spotting | "Spot any wheels for image processing?" |

Keyword matches: is there, existing, wheel, library, framework, API, tool, solution, spot

❌ Do NOT Trigger

| Scenario | Reason | Suggestion |
| --- | --- | --- |
| User wants to build themselves | "I want to write my own..." | Assist with coding directly |
| Highly customized requirements | "I need something that does X, Y, Z all at once..." | Suggest breaking down and searching separately |
| Learning purposes | "I want to learn how to implement..." | Provide tutorials instead |
| Tech stack already decided | "I'm using React to build..." | Move to development guidance |

Design Principles

| Principle | Description | Implementation |
| --- | --- | --- |
| Problem-Oriented | Precisely solve "finding integrable wheels" | Sources classified by output form, exclude chatbots |
| Closed-Loop Delivery | Clear "usable/unusable" conclusion with action | Results include pip install commands or self-build recommendation |
| High Adaptability | Dynamic strategy based on complexity and intent | Complexity grading + intent-adaptive source selection |
| Progressive Improvement | System gets smarter with each use | Feedback loops, result caching, vector memory |
| Transferable Leverage | Core capabilities reusable elsewhere | Funnel engine, cost monitor as independent modules |
| Cost Red Line | Search cost must be lower than self-build cost | Budget caps, tiered abandonment, early termination |

Prerequisites

pip install -r requirements.txt

Environment:

  • Python 3.8+
  • Internet access for API calls
  • GitHub Token (optional, increases API limit to 5000 req/hour)
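The optional token from the list above is sent as a standard GitHub REST `Authorization` header. A minimal sketch (the function name is illustrative; without a token GitHub allows 60 requests/hour, with one 5,000):

```python
def github_headers(token=None):
    """Build GitHub REST API headers; a token raises the rate limit
    from 60 to 5,000 requests/hour."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        # GitHub accepts personal access tokens as Bearer credentials.
        headers["Authorization"] = f"Bearer {token}"
    return headers

print("Authorization" in github_headers())                    # False
print("Authorization" in github_headers("ghp_example_token")) # True
```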

Input/Output Specification

Input Format

# Method 1: Natural language (parsed by agent)
user_input = "I need a Python library to process Excel files"

# Method 2: Structured input (optional)
{
    "requirement": "process Excel files",
    "tech_stack": ["Python"],
    "intent": "library",
    "constraints": {
        "license": "MIT",
        "min_stars": 100,
        "last_updated": "12m"
    }
}
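The structured form above maps naturally onto typed models. SKILL.md declares pydantic as a dependency, but the same shape can be sketched with stdlib dataclasses; the class names here are illustrative, and only the field names come from the example:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Constraints:
    license: Optional[str] = None
    min_stars: int = 0
    last_updated: Optional[str] = None  # e.g. "12m" = updated within 12 months

@dataclass
class SearchRequest:
    requirement: str
    tech_stack: list = field(default_factory=list)
    intent: str = "library"
    constraints: Constraints = field(default_factory=Constraints)

req = SearchRequest(
    requirement="process Excel files",
    tech_stack=["Python"],
    constraints=Constraints(license="MIT", min_stars=100, last_updated="12m"),
)
print(req.constraints.min_stars)  # 100
```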

Output Format

{
    "status": "found",
    "recommendations": [
        {
            "name": "openpyxl",
            "source": "pypi",
            "url": "https://openpyxl.readthedocs.io/",
            "match_score": 0.92,
            "integration_score": 0.95,
            "action": "pip install openpyxl",
            "license": "MIT",
            "stars": 1200,
            "last_updated": "2 months ago",
            "warnings": [],
            "advice": "Recommended, mature and stable"
        }
    ],
    "fallback": null,
    "cost": {
        "tokens_used": 420,
        "time_seconds": 3.2,
        "estimated_time_saved": "~4 hours"
    }
}

Status values:

  • found: Suitable solutions found
  • not_found: Recommend self-build
  • needs_clarification: Requirement unclear, need follow-up
  • error: Search failed, return error info
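A caller can branch on these four status values to decide the next action; a minimal dispatch sketch (the function name is illustrative):

```python
def handle_result(result: dict) -> str:
    """Map each documented status value to a next step."""
    status = result.get("status")
    if status == "found":
        # Surface the top recommendation's ready-to-run command.
        top = result["recommendations"][0]
        return f"Run: {top['action']}"
    if status == "not_found":
        return "No suitable wheel found; recommend self-build."
    if status == "needs_clarification":
        return "Ask the user a follow-up question."
    return f"Search failed: {result.get('message', 'unknown error')}"

print(handle_result({"status": "found",
                     "recommendations": [{"action": "pip install openpyxl"}]}))
```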

Core Workflow

User Input
  ↓
[M0] Complexity Grading (~30 tokens)
  ↓
[M1] Intent Classification (~60 tokens)
  ↓
[Optional] Clarification (1-2 rounds if needed)
  ↓
[M2] Extract Keywords + Tech Entities (~150 tokens)
  ↓
[Search] Activate platforms by intent, parallel API calls
  ↓
[Hard Filter] Deprecated/activity/form matching
  ↓
[LLM Refinement] Multi-dimensional eval for ≤5 candidates (~300 tokens)
  ↓
Output recommendations + action commands + cost report

Implementation Details

Step 1: Complexity Grading (M0)

Prompt Template:

You are a development complexity assessment expert. Evaluate the requirement:
- L1: Simple function/tool, solvable with dozens of lines
- L2: Medium module, requires interface design
- L3: Complex system, involves multiple components

Requirement: {requirement}
Output JSON only: { "complexity": "L2", "reason": "..." }

Impact on Search Strategy:

| Complexity | Token Cap | Time Cap | Sources | Star Threshold |
| --- | --- | --- | --- | --- |
| L1 Simple | 300 | 8s | 2-3 | ≥10 |
| L2 Medium | 600 | 12s | 3-5 | ≥50 |
| L3 Complex | 800 | 15s | Full | ≥100 |
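These caps amount to a lookup keyed by complexity level. A sketch (the dict and key names are illustrative; `None` stands for "all sources"):

```python
# Per-complexity search budgets, transcribed from the table above.
BUDGETS = {
    "L1": {"token_cap": 300, "time_cap_s": 8,  "max_sources": 3,    "min_stars": 10},
    "L2": {"token_cap": 600, "time_cap_s": 12, "max_sources": 5,    "min_stars": 50},
    "L3": {"token_cap": 800, "time_cap_s": 15, "max_sources": None, "min_stars": 100},
}

def budget_for(complexity):
    # Unknown grades fall back to L2, the CLI's documented default.
    return BUDGETS.get(complexity, BUDGETS["L2"])

print(budget_for("L1")["time_cap_s"])  # 8
```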

Step 2: Intent Classification (M1)

Prompt Template:

Analyze the requirement, determine desired output form (multiple allowed):
- library: Library/framework integrable into code
- service: Callable external API/service
- tool: Standalone executable tool/CLI
- reference: Code template/example/architecture reference
- assistant: Conversational assistant (usually not a wheel, use cautiously)

Requirement: {requirement}
Output JSON only: { "intent": [...], "reason": "..." }

Important: If intent only contains assistant, return guidance without triggering search.
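That guard is a one-line check; a sketch (function name illustrative):

```python
def should_search(intents: list) -> bool:
    """Skip the search pipeline entirely when the only detected
    intent is 'assistant' (or nothing was detected)."""
    return bool(intents) and set(intents) != {"assistant"}

print(should_search(["assistant"]))           # False
print(should_search(["library", "service"]))  # True
```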

Step 3: Platform Selection Matrix

| Intent | Activate Sources | Do NOT Search |
| --- | --- | --- |
| library | GitHub, npm, PyPI, Maven, Crates.io | Conversational skill marketplaces |
| service | MCP Hubs, HuggingFace API, RapidAPI | Pure code repos |
| tool | GitHub Releases, Docker Hub, npm -g | Pure library platforms |
| reference | Stack Overflow, GitHub Gist, Official docs | Distribution platforms |
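The matrix maps directly onto an intent-to-sources table; when multiple intents are detected, their sources are unioned. A sketch (the platform identifiers are illustrative strings, not the script's actual names):

```python
# Intent -> activated sources, transcribed from the matrix above.
PLATFORMS = {
    "library":   ["github", "npm", "pypi", "maven", "crates.io"],
    "service":   ["mcp_hubs", "huggingface_api", "rapidapi"],
    "tool":      ["github_releases", "docker_hub", "npm_global"],
    "reference": ["stackoverflow", "github_gist", "official_docs"],
}

def sources_for(intents):
    """Order-preserving union of sources across all detected intents."""
    seen = []
    for intent in intents:
        for src in PLATFORMS.get(intent, []):
            if src not in seen:
                seen.append(src)
    return seen

print(sources_for(["library"]))
```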

Step 4: Hard Filtering Rules

def hard_filter(candidate, complexity, intent):
    """Adaptive hard filtering. Returns (passed, reason)."""
    
    # 1. Archived/deprecated check
    if candidate.archived or candidate.deprecated:
        return False, "Archived or deprecated"
    
    # 2. Dynamic star threshold, keyed by complexity level
    thresholds = {"L1": 10, "L2": 50, "L3": 100}
    if candidate.stars < thresholds[complexity]:
        return False, f"Insufficient stars ({candidate.stars} < {thresholds[complexity]})"
    
    # 3. Update-recency check
    if candidate.months_since_update > 24:
        return False, "Not updated in 24+ months"
    
    # 4. Form consistency check (has_package_indicator is defined elsewhere)
    if intent == "library" and not has_package_indicator(candidate):
        return False, "Form mismatch: no library indicators"
    
    return True, "Passed"

Step 5: LLM Refinement

Multi-dimensional Scoring:

Final Score = Semantic Similarity × 0.5 
            + Integration Feasibility × 0.3 
            + Activity Normalization × 0.2
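The weighting above as a function:

```python
# Weights from the formula above: semantic 0.5, integration 0.3, activity 0.2.
WEIGHTS = {"semantic": 0.5, "integration": 0.3, "activity": 0.2}

def final_score(semantic, integration, activity):
    """Weighted blend of the three per-candidate scores."""
    return (semantic * WEIGHTS["semantic"]
            + integration * WEIGHTS["integration"]
            + activity * WEIGHTS["activity"])

print(round(final_score(0.9, 0.85, 0.7), 3))  # 0.845
```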

Refinement Prompt:

You are a technical solution evaluator. Assess this candidate:

Requirement: {requirement}
Candidate: {candidate}

Evaluation dimensions:
1. Semantic match (0-1): Does it truly solve the need?
2. Integration feasibility (0-1): Can user try within 1 hour?
3. Activity score (0-1): Based on stars, update frequency
4. License compatibility: Common open source license?
5. Known issues: Major bugs or security vulnerabilities?

Output JSON:
{
    "semantic_score": 0.9,
    "integration_score": 0.85,
    "activity_score": 0.7,
    "final_score": 0.83,
    "license_ok": true,
    "warnings": [],
    "advice": "Recommended, but note..."
}

Search Script

See scripts/search.py for the standalone implementation.

Usage:

# Basic usage
python scripts/search.py --query "python pdf parser" --complexity L2 --intent library

# Multiple platforms
python scripts/search.py -q "python excel read write" -c L2 -i library -p github,pypi

# With GitHub token (recommended)
python scripts/search.py -q "react charting library" -c L3 --token $GITHUB_TOKEN

Parameters:

| Parameter | Short | Description | Default |
| --- | --- | --- | --- |
| --query | -q | Search keywords (required) | - |
| --complexity | -c | L1/L2/L3 | L2 |
| --intent | -i | library/service/tool/reference | library |
| --platforms | -p | Comma-separated platforms | github |
| --limit | -l | Max results per platform | 20 |
| --token | -t | GitHub token (optional) | - |
| --output | -o | Output file (optional) | stdout |
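The flag table corresponds to a straightforward argparse surface. A sketch of how it could be declared (this is not the actual scripts/search.py source):

```python
import argparse

def build_parser():
    """Argument surface mirroring the parameter table above."""
    p = argparse.ArgumentParser(prog="search.py")
    p.add_argument("--query", "-q", required=True, help="Search keywords")
    p.add_argument("--complexity", "-c", default="L2", choices=["L1", "L2", "L3"])
    p.add_argument("--intent", "-i", default="library",
                   choices=["library", "service", "tool", "reference"])
    p.add_argument("--platforms", "-p", default="github",
                   help="Comma-separated platforms")
    p.add_argument("--limit", "-l", type=int, default=20,
                   help="Max results per platform")
    p.add_argument("--token", "-t", default=None, help="GitHub token (optional)")
    p.add_argument("--output", "-o", default=None, help="Output file; default stdout")
    return p

args = build_parser().parse_args(["-q", "python pdf parser", "-c", "L2"])
print(args.platforms)  # github
```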

Error Handling

| Error Condition | Strategy | User Message |
| --- | --- | --- |
| GitHub API rate limit (403) | Fallback to web search or prompt for token | "GitHub API limit reached. Please retry later or configure a GitHub token." |
| Network timeout (>10s) | Retry once, return partial results on failure | "Some platforms timed out. Returning available results." |
| No matching intent | Don't trigger search, guide user to clarify | "Your requirement may need custom development. Continue searching?" |
| JSON parse failure | Log error, return raw response | "Failed to parse search results. Please check raw data." |
| All platforms failed | Return graceful degradation | "Search service temporarily unavailable. Please retry later or research manually." |
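The timeout row ("retry once, return partial results on failure") amounts to a small wrapper around each platform call; a sketch with illustrative names:

```python
def call_with_retry(fn, retries=1):
    """Retry once on timeout; on repeated failure return None so the
    caller can still aggregate partial results from other platforms."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except TimeoutError:
            if attempt == retries:
                return None

def flaky_platform():
    raise TimeoutError("simulated >10s timeout")

print(call_with_retry(lambda: ["repo-a"]))  # ['repo-a']
print(call_with_retry(flaky_platform))      # None
```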

Cost Control

Three-Tier Budget System

| Level | Token Cap | Time Cap | Strategy |
| --- | --- | --- | --- |
| L1 Simple | 300 | 8s | Quick abandonment, recommend self-build if not found |
| L2 Medium | 600 | 12s | Moderate resources |
| L3 Complex | 800 | 15s | Full resources by intent matrix |

Early Termination Conditions

  • Hard filter yields 0 candidates → Immediately output "not found, recommend self-build"
  • Intent is only assistant → Don't trigger search
  • High-match result found (score > 0.9) → Early termination
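The three conditions as one check (illustrative; in practice the assistant-only test runs before any candidates exist):

```python
def early_exit(candidates, intents):
    """Return which early-termination rule fires, or None to continue."""
    if set(intents) == {"assistant"}:
        return "skip_search"   # assistant-only intent: never search
    if not candidates:
        return "not_found"     # hard filter emptied the pool
    if max(c["score"] for c in candidates) > 0.9:
        return "early_stop"    # good-enough match already found
    return None

print(early_exit([{"score": 0.95}], ["library"]))  # early_stop
```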

Graceful Degradation

⚠️ Search cost approaching or exceeding self-build cost (estimated self-build: X hours).

Found partial matches:
- [Project Name] (match score: 0.65)

Suggestion: Try the above first. If requirements not met, direct implementation may be faster.

Usage Examples

Example 1: Library Search (L2)

Input:

I need a Python library to read and write Excel files

Agent Analysis:

  • Complexity: L2 (medium module)
  • Intent: library
  • Keywords: python, excel, read, write

Script Call:

python scripts/search.py -q "python excel read write" -c L2 -i library -p github,pypi

Output:

{
  "status": "found",
  "recommendations": [
    {
      "name": "openpyxl/openpyxl",
      "action": "pip install openpyxl",
      "match_score": 0.92,
      "advice": "Recommended, comprehensive features"
    },
    {
      "name": "pandas-dev/pandas",
      "action": "pip install pandas",
      "match_score": 0.88,
      "advice": "Use pandas for data analysis needs"
    }
  ]
}

Example 2: Simple Requirement (L1 - Quick Abandonment)

Input:

I need to validate email format

Agent Judgment:

  • Complexity: L1 (~10 lines of regex)
  • Recommendation: Direct implementation is faster than searching

Output:

{
  "status": "not_found",
  "message": "This is an L1 simple requirement. Recommend direct implementation.",
  "code_snippet": "import re\nre.match(r'^[\\w.-]+@[\\w.-]+\\.\\w+$', email)"
}

Example 3: Service Discovery (L3)

Input:

I need an OCR service that can batch process PDFs via API, supporting Chinese and English

Agent Analysis:

  • Complexity: L3 (complex system)
  • Intent: service
  • Keywords: ocr, api, pdf, batch, chinese, english

Output:

{
  "status": "found",
  "recommendations": [
    {
      "name": "Tesseract OCR",
      "type": "library + CLI",
      "action": "pip install pytesseract or docker run tesseract",
      "match_score": 0.85,
      "warnings": ["Requires self-hosting"]
    },
    {
      "name": "Google Cloud Vision API",
      "type": "cloud service",
      "action": "Apply for API key then call",
      "match_score": 0.90,
      "warnings": ["Paid service"]
    }
  ]
}

Implementation Roadmap

| Phase | Features | Status | Value |
| --- | --- | --- | --- |
| M1 | Complexity + Intent + Search + Hard Filter | ✅ Complete | Core functionality |
| M2 | Multi-turn clarification + Quick/Deep mode | ⏳ Planned | Reduce ineffective searches |
| M3 | Result caching + Adaptive thresholds | ⏳ Planned | Lower cost |
| M4 | Security scanning (OSV API) | ⏳ Planned | Production safety |
| M5 | Vector pre-filtering (bge-small) | ⏳ Planned | Improve precision |

Recommendation: M1 is production-ready. M2-M5 are optional enhancements.


Limitations

| Limitation | Description | Mitigation |
| --- | --- | --- |
| GitHub API limits | 60 req/hour unauthenticated | Configure GitHub token |
| PyPI search | Exact package names only | Combine with GitHub search |
| No vector pre-filter | Not implemented in current version | Planned for M5 |
| No vulnerability scan | OSV not integrated | Planned for M4 |

Resource Index

| Resource | Location | Description |
| --- | --- | --- |
| Search script | scripts/search.py | Standalone multi-platform search |
| Dependencies | requirements.txt | Python package requirements |
| License | LICENSE | MIT License |

Best Practices

  1. Extract specific keywords before calling the script
  2. Classify complexity and intent accurately - determines search strategy
  3. Check license compatibility before final recommendation
  4. Provide context when requirements are ambiguous
  5. Respect early termination - L1 requirements should self-build if not found

Why WheelSpotter Works

WheelSpotter isn't a "comprehensive search engine" — it's your wheel-spotting scout:

  • 🎯 First determines if search is worthwhile - Complexity grading
  • 📍 Then determines where to search most accurately - Intent-driven platform selection
  • 💰 Gets decision evidence at lowest cost - Budget control
  • Always provides next action - Closed-loop delivery

Changelog

| Version | Date | Changes |
| --- | --- | --- |
| 1.0.0 | 2026-04-28 | Renamed to WheelSpotter, added triggers, error handling, standalone script, I/O spec |
