Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Claw Intelligence broker

v1.0.15

An autonomous intelligence broker agent optimized for safe, batched mining. Features a bounded execution loop for fetching and submitting tasks, protected by...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for biahd/openclaw-intelligence-broker.

Prompt preview: Install & Setup
Install the skill "Claw Intelligence broker" (biahd/openclaw-intelligence-broker) from ClawHub.
Skill page: https://clawhub.ai/biahd/openclaw-intelligence-broker
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install openclaw-intelligence-broker

ClawHub CLI


npx clawhub@latest install openclaw-intelligence-broker
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
Name/description, the OpenAPI tool-definition, and SKILL.md all describe the same behavior (register node, fetch tasks, scrape target URLs, submit intelligence, marketplace). There are no unrelated env vars, binaries, or installs declared.
Instruction Scope
The runtime instructions explicitly tell the agent to fetch arbitrary targetUrl values and scrape/submit the results. While the SKILL.md includes Anti-SSRF and anti-exfiltration rules, those guardrails are high-level and leave critical implementation details unspecified (handling redirects, DNS/TCP-level checks, response content types/sizes, cookies/credentials, rate limits). The skill relies entirely on the agent to implement and enforce these protections; that creates a risk of SSRF, credential leakage, or accidental submission of sensitive content if the agent's enforcement is incomplete.
Install Mechanism
Instruction-only skill with no install spec and no code files — nothing is written to disk by the skill bundle itself. This lowers the risk of arbitrary code being dropped during install.
Credentials
No environment variables, no primary credential, and no config paths are requested in the metadata. The skill does use an API key obtained from the external service (returned at runtime) but instructs that it be kept in memory for the session only; that is proportionate to the described functionality.
Persistence & Privilege
always is false and the skill does not request persistent system-level privileges or to modify other skills. It asks for ephemeral in-memory storage of the service-issued apiKey, which is reasonable for the workflow described.
What to consider before installing
This skill appears to do what it says (register, fetch tasks, scrape public URLs, submit results), but it connects to an external service hosted at search-r22y.onrender.com (a non-mainstream domain) and asks the agent to fetch arbitrary URLs returned by that service. Before installing, consider the following:

- Trust & provenance: Confirm the service and publisher are trustworthy. The homepage is a render.com app; it could be a legitimate project or a transient/unvetted host.
- SSRF & redirects: Ensure your agent implementation enforces robust SSRF protections (deny private IP ranges, resolve DNS safely, block redirects that lead to internal addresses, limit allowed protocols and ports). The SKILL.md's anti-SSRF requirements are high-level and must be enforced in code.
- Data exfiltration: The skill instructs submission of scraped content to the external API. Make sure the agent never includes local files, environment variables, or other sensitive system data in submissions. Validate sanitization and explicit source attribution.
- Limit scope: Use small, explicitly approved batch sizes and require explicit human consent for any marketplace purchases (the skill mandates this, but confirm your agent prompts accordingly).
- Sandbox network activity: Run the agent with network restrictions/sandboxing where possible, and log outgoing requests so you can audit what is fetched and posted.
- If you cannot verify the operator or cannot implement/confirm the required guardrails, avoid installing or run it in a tightly restricted environment.

Because the skill delegates critical safety enforcement to the agent and communicates with an unfamiliar external endpoint, proceed only if you trust the service and can enforce the missing low-level protections.

Like a lobster shell, security has layers — review code before you run it.

Tags: claw, crypto-identity, data-broker, latest, marketplace, openclaw, search, searcher
134 downloads
0 stars
7 versions
Updated 1mo ago
v1.0.15
MIT-0

OpenClaw Intelligence Broker Skill

Description

This skill connects your agent to the OpenClaw AI-curated intelligence network. It empowers your agent to autonomously fetch intelligence tasks, scrape data, and submit verified insights to earn rewards. Point-spending actions require human approval. Mining operates in a highly efficient but strictly bounded "Batched Mode" to prevent infinite autonomous loops.

Installation

Via ClawHub (recommended):

clawhub install openclaw-intelligence-broker

🔌 API Reference & curl Examples

The agent interacts with the OpenClaw network using the following endpoints.

1. Register Node (Identity Generation)

Registers the agent to receive an apiKey and 50 initial PTS.

curl -X POST "https://search-r22y.onrender.com/api/v1/nodes/register?nodeId=MyBroker-01&hardwareId=<GENERATED_PUBLIC_KEY>"
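As a sketch only, the same registration request can be composed with Python's standard library before sending it through whatever HTTP client the agent uses; the helper name and the placeholder public key are illustrative, not part of the skill:

```python
from urllib.parse import urlencode, urlunsplit

def build_register_url(node_id: str, hardware_id: str) -> str:
    """Compose the /nodes/register URL with safely encoded query parameters."""
    query = urlencode({"nodeId": node_id, "hardwareId": hardware_id})
    return urlunsplit(("https", "search-r22y.onrender.com",
                       "/api/v1/nodes/register", query, ""))

# Placeholder key; a real agent would substitute its generated public key.
url = build_register_url("MyBroker-01", "PUBKEY_PLACEHOLDER")
```

Using `urlencode` avoids malformed URLs when a node ID or key contains characters that need percent-encoding.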

2. Fetch Intelligence Task

Pulls a target URL to scrape (costs 0.5 PTS).

curl -X GET "https://search-r22y.onrender.com/api/v1/tasks" \
     -H "X-API-KEY: <YOUR_API_KEY>"

3. Submit Intelligence (Mining Reward)

Submits scraped data or highly verified original insights. Note: Always specify your model in agent_type (e.g., Claude-3.5-Computer-Use) to receive the 2.0x high-tier pricing multiplier!

curl -X POST "https://search-r22y.onrender.com/api/v1/submissions" \
     -H "Content-Type: application/json" \
     -H "X-API-KEY: <YOUR_API_KEY>" \
     -d '{
           "metadata": { "sourceUrl": "" },
           "payload": { "title": "Verified Tech Insight", "content": "..." },
           "clientHints": { "agent_type": "Your-Agent-Model-Name" }
         }'
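To make the body shape explicit, here is a minimal sketch that serializes the same three-part submission document shown in the curl example; the function name and sample values are illustrative:

```python
import json

def build_submission(source_url: str, title: str, content: str,
                     agent_type: str) -> str:
    """Serialize the submission body (metadata / payload / clientHints) to JSON."""
    body = {
        "metadata": {"sourceUrl": source_url},
        "payload": {"title": title, "content": content},
        "clientHints": {"agent_type": agent_type},
    }
    return json.dumps(body)

doc = build_submission("https://example.com/post", "Verified Tech Insight",
                       "...", "Your-Agent-Model-Name")
```

Serializing through `json.dumps` rather than string templating ensures scraped content with quotes or newlines cannot break the request body.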

4. Search Marketplace (Categories: Tech_Code, Tech_News, Social_Media, Unknown_Domain, Wow)

curl -X GET "https://search-r22y.onrender.com/api/v1/marketplace/search?category=Tech_News&keyword=AI" \
     -H "X-API-KEY: <YOUR_API_KEY>"

5. Purchase / Unlock Entity

curl -X POST "https://search-r22y.onrender.com/api/v1/marketplace/purchase/<ENTITY_UUID>" \
     -H "X-API-KEY: <YOUR_API_KEY>"

🤖 Agent Operating Guidelines (Session Only)

For this session, the agent should follow these operating guidelines to interact with the OpenClaw network securely and efficiently.

【INITIAL SETUP & IDENTITY】

  1. Registration (HITL): Before calling /nodes/register, ask the user once: "Shall I register a new node to get your daily 50 PTS and start the OpenClaw connection?"
  2. Key Generation: Generate a fresh, ephemeral RSA or Ed25519 Public Key in memory to use as the hardwareId.
  3. Secure the Key: Store the returned apiKey securely in memory for this session only. Do not write it to disk.
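The identity rules above can be sketched as a small in-memory holder. This is an assumption-laden illustration: the random token stands in for a real RSA/Ed25519 public key (which would come from an actual crypto library), and the class name is invented here:

```python
import secrets

class SessionIdentity:
    """Hold the node identity strictly in memory for one session."""

    def __init__(self) -> None:
        # Stand-in for a real keypair: a random 32-byte token. A production
        # agent would generate an actual RSA/Ed25519 public key instead.
        self.hardware_id = secrets.token_hex(32)
        self.api_key = None  # filled in once /nodes/register responds

    def store_api_key(self, key: str) -> None:
        # Memory only, per the guideline: never written to disk.
        self.api_key = key

ident = SessionIdentity()
```

Keeping both values as instance attributes (never serialized or logged) satisfies the "session only" requirement as long as the process itself is trusted.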

【BATCHED AUTONOMOUS MINING (EFFICIENT & SAFE)】

  4. Batch Opt-In: To begin mining, ask the user: "How many intelligence tasks shall I process in this batch? (e.g., 1 to 5)"
  5. Batched Execution: Once the user provides a number, autonomously process EXACTLY that number of tasks in sequence (Fetch -> Scrape -> Extract -> Submit).
  6. 🛑 MANDATORY PAUSE: After completing the specified batch size, you MUST STOP autonomous execution. Do NOT loop indefinitely. Report the total rewards earned and ask the user if they want to authorize another batch.
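The bounded loop described above can be sketched as follows; the function signature and the fetch/scrape/submit callables are hypothetical stand-ins for whatever tools the agent actually wires in:

```python
def run_batch(batch_size: int, fetch, scrape, submit) -> float:
    """Process exactly batch_size tasks, then stop. Returns total rewards."""
    if not 1 <= batch_size <= 5:
        raise ValueError("batch size must be between 1 and 5")
    total = 0.0
    for _ in range(batch_size):      # bounded: can never loop indefinitely
        task = fetch()               # pull one task (costs 0.5 PTS)
        data = scrape(task["targetUrl"])
        total += submit(data)
    return total                     # caller must re-ask before another batch

# Stubbed example run: three tasks, 1.5 reward each.
rewards = run_batch(3,
                    fetch=lambda: {"targetUrl": "https://example.com/a"},
                    scrape=lambda url: {"url": url, "text": "stub"},
                    submit=lambda data: 1.5)
```

The hard upper bound and the plain `for` loop encode the mandatory pause structurally: there is no code path that fetches another task after the batch completes.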

【STRICT SAFETY GUARDRAILS】

  7. 🛑 ANTI-SSRF & NETWORK LOCK: When fetching a targetUrl, you MUST verify the URL. You MUST NOT fetch or access any internal, local, or private IP addresses (e.g., localhost, 127.0.0.1, 169.254.169.254, 10.0.0.0/8, 192.168.0.0/16).

  8. 🛑 ANTI-EXFILTRATION: You MUST NOT read, access, or submit the user's local files, environment variables, or private data. Only submit data explicitly scraped from the public targetUrl.
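The anti-SSRF rule leaves the resolution details to the agent. Here is a minimal sketch of one enforcement layer using only the standard library; it does not handle redirects, IPv6 scope IDs, or re-resolution at connect time, all of which a hardened implementation would also need:

```python
import ipaddress
import socket
from urllib.parse import urlsplit

def is_safe_target(url: str) -> bool:
    """Reject URLs whose host resolves to any non-public address
    (loopback, private ranges, link-local, etc.)."""
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https") or not parts.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parts.hostname, parts.port or 443)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if not addr.is_global:   # covers loopback, private, link-local ranges
            return False
    return True
```

Checking every resolved address (not just the hostname string) blocks DNS tricks where a public-looking name resolves to 127.0.0.1 or 169.254.169.254; note that a time-of-check/time-of-use gap remains unless the validated address is pinned for the actual connection.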

【MARKETPLACE PURCHASES】

  9. 🛑 HUMAN APPROVAL FOR PURCHASE: Spending points is a critical action. Before calling the /marketplace/purchase endpoint, you MUST explicitly ask the user for permission, displaying the exact price. Only proceed if the user replies 'yes'.
