Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Brd Browser Debug

v1.0.0

Debug Bright Data Scraping Browser sessions using the Browser Sessions API. Use this skill when the user encounters a Bright Data browser session error, pupp...

0 stars · 14 downloads (0 current · 0 all-time)
by Meir Kadosh (@meirk-brd)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for meirk-brd/brightdata-brd-browser-debug.

Prompt preview: Install & Setup
Install the skill "Brd Browser Debug" (meirk-brd/brightdata-brd-browser-debug) from ClawHub.
Skill page: https://clawhub.ai/meirk-brd/brightdata-brd-browser-debug
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install brightdata-brd-browser-debug

ClawHub CLI


npx clawhub@latest install brightdata-brd-browser-debug
Security Scan
Capability signals
Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Benign
OpenClaw: Suspicious (high confidence)
Purpose & Capability
The skill's stated purpose (debug Bright Data browser sessions) matches the actions described in SKILL.md (calling Bright Data Browser Sessions API, triage, bandwidth/captcha analysis). Needing a Bright Data API key is appropriate for this purpose. However, the package/registry metadata lists no required env vars while the SKILL.md explicitly requires BRIGHTDATA_API_KEY — this mismatch is unexpected.
Instruction Scope
SKILL.md gives concrete runtime instructions to fetch /browser_sessions and /browser_sessions/{id} using Authorization: Bearer $BRIGHTDATA_API_KEY, describes pagination and filtering, and auto-detection triggers from conversation context. Those instructions are scoped to the Bright Data API, but they reference an environment variable (BRIGHTDATA_API_KEY) that is not declared in the skill requirements — the agent will need that secret to run and the omission creates ambiguity about intended credential handling and permissions.
Install Mechanism
This is an instruction-only skill with no install spec or code files. That minimizes install-time risk (nothing is downloaded or written to disk).
Credentials
Functionally the skill only needs a single Bright Data API key, which is proportionate. However, the registry metadata does not declare BRIGHTDATA_API_KEY or a primaryEnv, so the skill asks for an undeclared secret in its runtime instructions — this mismatch should be resolved (declare the env var explicitly) before granting credentials.
Persistence & Privilege
The always flag is false and the skill is user-invocable. It can be invoked autonomously by the agent (the default), which is expected for debugging skills; there is no request to persist or alter other skills' configs.
What to consider before installing
The SKILL.md expects you to provide BRIGHTDATA_API_KEY and will call Bright Data's browser_sessions endpoints; however, the registry metadata does not declare that env var. Before installing:

  1. Ask the publisher to correct the metadata (declare BRIGHTDATA_API_KEY as required and set primaryEnv if appropriate).
  2. Only provide an API key with the minimum permissions needed, and make sure you can rotate or revoke it easily.
  3. Prefer invoking the skill manually rather than leaving it always-enabled or allowing broad autonomous runs.
  4. Verify the skill author's trustworthiness, since the skill will have access to your Bright Data session data.
  5. If you proceed, monitor API usage and audit logs for unexpected queries or large data exports.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97c3n6t1rp34aedd80mek75t585p654
14 downloads · 0 stars · 1 version
Updated 4h ago
v1.0.0 · MIT-0

Bright Data — Browser Session Debugger

Diagnose Bright Data Scraping Browser sessions using the Browser Sessions API. Fetches live session data and performs smart triage: error diagnosis, bandwidth analysis, captcha reporting, and pattern detection across recent sessions.

Setup

Set your API key:

export BRIGHTDATA_API_KEY="your-api-key"

Get a key from Bright Data Dashboard → API Tokens.

No zone configuration needed — zone is returned as a field in session data.

Usage

List & triage recent sessions

Invoked as /brd-browser-debug with no arguments.

API reference: GET /browser_sessions

Fetching sessions

Start with a single call using limit=100 (the maximum) sorted by most recent:

GET https://api.brightdata.com/browser_sessions?limit=100&sort=timestamp&order=desc
Authorization: Bearer $BRIGHTDATA_API_KEY

Pagination: The response includes total, has_more, and next_offset. If has_more is true and the analysis requires more data (e.g. bandwidth outlier detection needs a larger sample), fetch the next page using offset=<next_offset>. Continue until you have enough data or has_more is false.
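The pagination loop above can be sketched as follows. This is a minimal sketch assuming a `fetch_page(offset)` callable that returns the decoded JSON response; the `"sessions"` list key is an assumption about the payload shape (the text only names `total`, `has_more`, and `next_offset`), and the `max_sessions` cap is illustrative.

```python
def fetch_all_sessions(fetch_page, max_sessions=500):
    """Collect sessions page by page until has_more is false or the cap is hit.

    fetch_page(offset) must return the decoded JSON response for that page.
    """
    sessions, offset = [], 0
    while True:
        page = fetch_page(offset)
        sessions.extend(page.get("sessions", []))
        # Stop when the API says there is no more data, or we have enough
        # for the analysis at hand (e.g. bandwidth outlier detection).
        if not page.get("has_more") or len(sessions) >= max_sessions:
            return sessions
        offset = page["next_offset"]
```

Injecting the fetcher keeps the loop testable without network access; in practice `fetch_page` would issue the authenticated GET shown above.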

Available filters — apply when the user specifies a scope:

  • status=failed|finished|running — narrow to a specific session state
  • api_name=<zone> — filter to a specific Bright Data zone
  • target_url=<domain> — filter by target domain (e.g. ksp.co.il)
  • start_date / end_date — ISO 8601 datetime range
  • sort=timestamp|duration|bandwidth with order=asc|desc

If the user asks about a specific zone, date range, or domain — apply the relevant filter rather than fetching all sessions and filtering client-side.
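Building the request URL from a user-specified scope can be sketched like this. Parameter names come from the filter list above; the helper name `sessions_url` is illustrative.

```python
from urllib.parse import urlencode

BASE = "https://api.brightdata.com/browser_sessions"

def sessions_url(status=None, api_name=None, target_url=None,
                 start_date=None, end_date=None,
                 sort="timestamp", order="desc", limit=100, offset=None):
    """Build the list URL, including only the filters the user asked for."""
    params = {"limit": limit, "sort": sort, "order": order,
              "status": status, "api_name": api_name, "target_url": target_url,
              "start_date": start_date, "end_date": end_date, "offset": offset}
    # Drop unset filters so the API sees only the requested scope.
    return BASE + "?" + urlencode({k: v for k, v in params.items() if v is not None})
```

Filtering server-side this way avoids fetching all sessions and filtering client-side, as the guidance above requires.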

Triage steps

  1. Present a health summary: total from the response, counts of finished / failed / running.
  2. Most recent session — always highlight it regardless of status (same detail level as single-session mode).
  3. Failed sessions — for each failure: session ID, timestamp, duration, bandwidth, then reason about the cause using the signals in the Diagnosing Failed Sessions section below.
  4. Pattern detection — if 3+ sessions share the same error.code, call it a systemic issue:

    "3 sessions failed with custom_headers — you are overriding a header Bright Data forbids. Remove page.setExtraHTTPHeaders() from your code."

  5. Bandwidth outliers — group sessions by target_url domain. For each domain with 3+ sessions, calculate the median bandwidth. Flag any session whose bandwidth exceeds 2× the median for that domain as an outlier, and note if it was a failed session that burned unusually high bandwidth before dying.
  6. Captcha activity — report how many sessions hit captchas and whether they were solved.
  7. Close with a one-line verdict: the most important finding and the most impactful fix.
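Steps 4 and 5 can be sketched as pure functions over the fetched session list. Field names follow the session fields described in this document; the 3+ and 2× median thresholds match the rules above.

```python
from collections import Counter
from statistics import median
from urllib.parse import urlparse

def systemic_errors(sessions, threshold=3):
    """Step 4: error codes shared by `threshold` or more failed sessions."""
    codes = Counter(
        s["error"]["code"] for s in sessions
        if s.get("status") == "failed" and s.get("error")
    )
    return {code: n for code, n in codes.items() if n >= threshold}

def bandwidth_outliers(sessions, min_group=3, factor=2):
    """Step 5: sessions whose bandwidth exceeds factor x the per-domain median."""
    by_domain = {}
    for s in sessions:
        # target_url may be a full URL or a bare domain; fall back accordingly.
        domain = urlparse(s.get("target_url", "")).netloc or s.get("target_url", "")
        by_domain.setdefault(domain, []).append(s)
    outliers = []
    for group in by_domain.values():
        if len(group) < min_group:
            continue
        med = median(s.get("bandwidth", 0) for s in group)
        outliers += [s for s in group if med and s.get("bandwidth", 0) > factor * med]
    return outliers
```

A systemic hit (3+ sessions sharing one error code) is what justifies a verdict like the custom_headers example above, rather than reporting each failure in isolation.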

Inspect a single session

Invoked as /brd-browser-debug <session_id>.

API reference: GET /browser_sessions/{session_id}

  1. Call:

    GET https://api.brightdata.com/browser_sessions/<session_id>
    Authorization: Bearer $BRIGHTDATA_API_KEY
    

    Returns 404 if the session ID is not found — tell the user and stop.

  2. Present a deep-dive using the response fields:

    • Status (status): running / finished / failed
    • Zone (api_name): the Bright Data zone that handled the session
    • Timestamp (timestamp): ISO 8601 — show in local-friendly format
    • Duration (duration): seconds (nullable) — flag if < 2 s on failure (session barely started)
    • Bandwidth (bandwidth): convert bytes → MB
    • Navigations (navigations): flag if 0 (nothing was loaded)
    • Captcha (captcha): one of solved / none / detected / failed. detected means a challenge appeared but was not solved; failed means solving was attempted but unsuccessful
    • Route (target_url → end_url): note significant drift (different domain, login wall, error page)
    • Error (error.code + error.message): reason about the cause using the signals in Diagnosing Failed Sessions below
  3. Close with a one-line verdict.
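The single-session call and the display conversions can be sketched with the standard library as follows. The 404 handling matches step 1; the bytes-to-MB conversion assumes decimal megabytes (the document does not specify binary vs. decimal), and the flag thresholds follow the field notes above.

```python
import json
import urllib.error
import urllib.request

API = "https://api.brightdata.com/browser_sessions"

def get_session(session_id, api_key):
    """Fetch one session; return None on 404 so the caller can report and stop."""
    req = urllib.request.Request(
        f"{API}/{session_id}",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return None
        raise

def mb(bandwidth_bytes):
    """Convert the bandwidth field (bytes) to MB for display."""
    return round(bandwidth_bytes / 1_000_000, 2)

def quick_flags(session):
    """Flags from the deep-dive checklist: barely-started failures, zero navigations."""
    flags = []
    if session.get("status") == "failed" and (session.get("duration") or 0) < 2:
        flags.append("barely started")
    if session.get("navigations") == 0:
        flags.append("nothing loaded")
    return flags
```

`get_session` returning None (rather than raising) makes the "tell the user and stop" branch explicit at the call site.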

Auto-detect from conversation context

When a Bright Data browser issue appears in the conversation — including puppeteer stack traces, error codes, mention of brd.superproxy.io, the user describing a session failure, OR a scraper producing empty/unexpected results (e.g. "Found 0 categories", "Got 0 products", fewer items than expected):

  • If a session ID is visible in the output → run single-session deep-dive on it.
  • If no session ID is visible → run list & triage, filtering by the relevant target domain. Highlight the most recent session as the likely culprit.
  • Cross-reference the error or unexpected behavior seen in the conversation with what the API returns. A session that finished successfully with normal bandwidth but the scraper got 0 results points to a client-side selector/extraction bug, not a proxy issue.
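The last cross-referencing rule can be sketched as a small decision helper. The "normal bandwidth" check is simplified here to nonzero bandwidth, which is an assumption; a fuller version would compare against the per-domain median.

```python
def likely_cause(session, scraper_item_count):
    """Cross-reference API session data with what the scraper actually produced."""
    if (session.get("status") == "finished"
            and session.get("bandwidth", 0) > 0
            and scraper_item_count == 0):
        # The session loaded data fine, yet the scraper extracted nothing:
        # the problem is in the client, not the proxy.
        return "client-side selector/extraction bug"
    if session.get("status") == "failed":
        return "session-level failure; run the failure diagnosis"
    return "inconclusive"
```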

Features

  • Smart triage: automatically groups sessions by failure pattern, not just lists them
  • Dynamic bandwidth outliers: compares sessions per domain using median, flags sessions exceeding 2× the median
  • Captcha reporting: shows captcha hit rate and solve rate
  • Error reasoning: reads session signals holistically to infer what went wrong
  • Zero config: reads API key from env var, no zone setup needed

Diagnosing Failed Sessions

Do not rely on the error code alone. Cross-reference all available session signals to reason about what went wrong:

  • Duration + navigations: a session that failed in < 2 s with 0 navigations never got past the connection phase — likely a configuration or auth issue. A session that ran for minutes before failing points to a runtime problem (blocked mid-scrape, idle timeout, network drop).
  • Bandwidth relative to other sessions: a failed session that consumed bandwidth similar to successful ones likely reached the target but failed during extraction. A failed session with near-zero bandwidth never loaded anything.
  • Captcha field: if captcha is detected but not solved, the session was stopped by an unsolved challenge — suggest enabling captcha solving on the zone.
  • target_url vs end_url: significant drift (different domain, login page, error page) means the session was redirected away from the intended target.
  • error.message: use the raw message text as-is to describe what happened — do not guess or invent meaning beyond what the message says. If the cause is unclear, direct the user to Bright Data support.
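The signals above can be combined into a sketch that returns hints rather than verdicts. The 120-second threshold for "ran for minutes" is an assumption; everything else follows the bullets above, including passing error.message through as-is.

```python
from urllib.parse import urlparse

def diagnose(session):
    """Read the session signals holistically; return hints, not conclusions."""
    hints = []
    dur = session.get("duration") or 0  # duration is nullable
    if dur < 2 and session.get("navigations", 0) == 0:
        hints.append("never got past connection: check configuration/auth")
    elif dur >= 120:
        hints.append("failed mid-run: blocked mid-scrape, idle timeout, or network drop")
    if session.get("bandwidth", 0) == 0:
        hints.append("near-zero bandwidth: nothing was loaded")
    if session.get("captcha") == "detected":
        hints.append("stopped by an unsolved captcha: enable solving on the zone")
    target, end = session.get("target_url", ""), session.get("end_url", "")
    if target and end and urlparse(target).netloc != urlparse(end).netloc:
        hints.append(f"redirected away from target: ended on {urlparse(end).netloc}")
    err = session.get("error") or {}
    if err.get("message"):
        # Quote the raw message; never invent meaning beyond what it says.
        hints.append(f"error message: {err['message']}")
    return hints
```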
