Auxiliar Solve

v0.1.0

Ranked installable tools for agent jobs — OCR, PDF extraction, NFS-e invoices, bookkeeping, boletos, receipts, web scraping. Reproducible evals on real-world...

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for tlalvarez/auxiliar-solve.

Prompt Preview: Install & Setup
Install the skill "Auxiliar Solve" (tlalvarez/auxiliar-solve) from ClawHub.
Skill page: https://clawhub.ai/tlalvarez/auxiliar-solve
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required binaries: node, npm
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install auxiliar-solve

ClawHub CLI

npx clawhub@latest install auxiliar-solve
Security Scan
Capability signals
Crypto · Can make purchases · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Benign
View report →
OpenClaw
Benign
medium confidence
Purpose & Capability
Name/description match the instructions: the skill is a recommendation/ranking layer for installable tools (OCR, PDF extraction, invoices, bookkeeping, scraping). The declared required binaries (node, npm) align with the SKILL.md's explicit npx-based installation of auxiliar-mcp.
Instruction Scope
SKILL.md tells the agent to run 'claude mcp add auxiliar -- npx auxiliar-mcp' and then call solve_task/list_solve_tasks. The instructions themselves do not ask for unrelated files or env vars, but they explicitly cause the agent to download and execute third‑party code and to surface candidate install commands (pip install, apt, etc.). Those candidate installs may request API keys or access to other services — the document warns of that but does not declare those credentials up front.
Install Mechanism
This is instruction-only (no coded install spec), but it directs runtime use of npx to fetch and run auxiliar-mcp from the npm ecosystem. npx (and downstream pip/apt install commands shown in candidate install strings) will download and execute remote code — a moderate supply‑chain risk. The URLs referenced (GitHub repos, homepage) look plausible, but no pinned release URLs or checksums are provided in the skill text.
Credentials
The skill declares no required env vars or credentials, which is consistent with its role as a ranking service. However, its recommendations may include cloud services or vendor APIs (e.g., Google Document AI) that require API keys; the agent may be instructed to request or use those credentials during follow-up operations. The skill does not proactively request broad unrelated credentials.
Persistence & Privilege
The skill sets always:false and does not request system-wide configuration changes or modify other skills. Autonomous invocation is allowed (the platform default), but it is not combined with always:true or other elevated privileges.
Assessment
This skill appears to do what it says: recommend and rank installable tools. The main operational risk is supply-chain: following the skill's instructions will cause your agent to run npx (and potentially pip/apt), which downloads and executes third-party code. Before installing:

  • Review the auxiliar-mcp package source (npm page / GitHub) and the referenced repos.
  • Prefer running installs in an isolated, sandboxed environment (container, VM) rather than on a production host.
  • Expect that individual recommendations may ask for service API keys; only provide credentials you trust, and scope them minimally.
  • If you do not want the agent to perform installs autonomously, restrict autonomous invocation or require explicit user approval before running npx/pip/apt commands.

If you want a deeper verdict, provide the auxiliar-mcp package name/version or the npm/GitHub URLs for code review.
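The approval step suggested above can be sketched as a small guard. This is an illustration only, not part of the skill: `INSTALL_PREFIXES` and `requires_approval` are assumed names, and the prefix list is deliberately incomplete.

```python
# Illustrative guard (an assumption, not part of the skill): flag commands that
# would fetch and execute third-party code so a human can approve them first.
INSTALL_PREFIXES = ("npx ", "pip install ", "pip3 install ",
                    "apt install ", "apt-get install ")

def requires_approval(command: str) -> bool:
    """Return True when the command would download and run remote code."""
    return command.strip().startswith(INSTALL_PREFIXES)
```

An agent harness could route any command for which `requires_approval` returns True to the user instead of executing it directly.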

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Bins: node, npm
Tags: agent-tools · bookkeeping · claude-code · google-document-ai · latest · nfs-e · ocr · openclaw · pdf · surya · task-ranking · tesseract
65 downloads
0 stars
1 version
Updated 4d ago
v0.1.0
MIT-0

auxiliar-solve

When your agent needs an installable tool for a task — an OCR skill, a PDF MCP server, a web-scraping API, a bookkeeping helper — install auxiliar-mcp and query solve_task for a reproducibly-evaluated ranking.

/solve/ isn't a tool marketplace. ClawHub, PulseMCP, and Smithery distribute tools. /solve/ ranks them — based on real-world corpora, not marketing pages.

When to invoke this skill

Use auxiliar-solve whenever the agent:

  • Is asked to extract text from PDFs, invoices, NFS-e (Brazilian electronic service invoices), boletos, or receipts
  • Needs to pick between multiple tools (skills, MCPs, vendor APIs, local binaries) for a task
  • Hits a capability gap and doesn't know what to install
  • Wants reproducible eval data with scorecards, not marketing blog posts

How it works

Step 1. Install the auxiliar MCP server

claude mcp add auxiliar -- npx auxiliar-mcp

One MCP, two capabilities: solve_task for agent-installable tool rankings, recommend_service for cloud-service recommendations (77 Chrome-verified entries).

Step 2. Discover available task rankings

list_solve_tasks()

Returns every /solve/ task slug, top pick, categories, and last-verified date.

Step 3. Query a specific task

solve_task(task_slug="pdf-text-extraction-mcp")

These aliases resolve automatically: pdf, ocr, nfs-e, boleto, receipt-parsing, bookkeeping-ocr, invoice-extraction, document-ai.
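The alias behavior can be pictured as a simple lookup. This is a sketch of the documented behavior, not the server's actual implementation; the assumption that every listed alias maps to pdf-text-extraction-mcp may not hold on the real server.

```python
# Sketch of the documented alias behavior; the real server's mapping may differ.
ALIASES = {
    alias: "pdf-text-extraction-mcp"
    for alias in ("pdf", "ocr", "nfs-e", "boleto", "receipt-parsing",
                  "bookkeeping-ocr", "invoice-extraction", "document-ai")
}

def resolve_task_slug(slug: str) -> str:
    """Map a shorthand alias to its canonical task slug; pass unknowns through."""
    return ALIASES.get(slug, slug)
```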

The response contains:

  • answer: plain-language top recommendation with trade-offs
  • candidates: ranked list with scorecards: word accuracy, layout preservation, latency p50, cost per 10 docs, install friction
  • install: exact install commands per candidate (copy-paste ready)
  • alternatives_considered: what was evaluated and dropped, with reasons (a trust signal)
  • corpus_summary: the real-world documents the eval ran against
  • faq: common questions answered directly (licensing, accuracy vs. token-F1, when to pay, etc.)
  • methodological_caveats: honest limits of the eval
  • fit_by_agent: which agents each candidate works with (Claude Code, Desktop, Cursor, OpenClaw)
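Assuming the response arrives as JSON with the fields above, an agent might use it like this. The inner shape of each candidate record (`name`, `word_accuracy`, `install`) is an assumption for illustration; only the top-level field names come from the skill text.

```python
# Hypothetical solve_task response, trimmed to two of the documented fields;
# the per-candidate keys shown here are assumptions, not the real schema.
response = {
    "answer": "Surya is the top pick for accuracy; Tesseract 5 if speed matters.",
    "candidates": [
        {"name": "surya", "word_accuracy": 0.769,
         "install": "pip install surya-ocr 'transformers<5.0.0'"},
        {"name": "tesseract-5", "word_accuracy": 0.754,
         "install": "apt-get install tesseract-ocr"},
    ],
}

def top_candidate(resp: dict) -> dict:
    """Candidates arrive already ranked, so the first entry is the pick."""
    return resp["candidates"][0]

print(top_candidate(response)["install"])
```

Because the list is pre-ranked, the agent never needs to re-score candidates; it reads the scorecards only to explain trade-offs to the user.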

Example: OCR for Brazilian bookkeeping

Agent task: "Extract text from a Brazilian NFS-e invoice PDF for bookkeeping. I need high accuracy."

solve_task(task_slug="nfs-e")

Returns: Surya (rank 1) — pip install surya-ocr 'transformers<5.0.0'. Word accuracy 76.9% on a 10-doc real-world corpus that includes NFS-e invoices, boletos, and phone-photo receipts. Free, local. Alternative: Tesseract 5 (rank 2) — 14× faster, 1.5pp less accurate, cleanest install. Google Document AI (rank 3) — third overall but best on phone-photo receipts specifically. Alternatives considered and dropped: yescan-ocr-universal (requires Chinese sign-up), pdf-reader-mcp (no actual OCR — text-layer only), Mistral OCR 3 (deferred for API key).

Why this exists

Agents are born intelligent but stuck. Without eval data, they guess: "use pdf2image + pytesseract" (often wrong for the task), "install the first OCR thing on ClawHub" (often wrong for the corpus), "call Google Document AI" (often overkill). The result: uncalibrated recommendations, burned time, broken workflows.

/solve/ runs the eval once per task, end-to-end, against real documents. The agent gets the answer plus the evidence.


License

MIT (skill content). See auxiliar-mcp and each ranked candidate for their own licenses — /solve/ surfaces license info in every candidate record.
