Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Amazon Review Workbook

v1.0.3

Collect all customer reviews from an Amazon product URL or product-reviews URL through a logged-in Chrome session on port 9222, and export a 14-column factual workbook.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for aduo6668/amazon-review-workbook.

Prompt Preview: Install & Setup
Install the skill "Amazon Review Workbook" (aduo6668/amazon-review-workbook) from ClawHub.
Skill page: https://clawhub.ai/aduo6668/amazon-review-workbook
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install amazon-review-workbook

ClawHub CLI


npx clawhub@latest install amazon-review-workbook
Security Scan
VirusTotal: Benign
OpenClaw: Benign (medium confidence)
Purpose & Capability
Name/description match the included scripts: the code scrapes Amazon review pages through a Chrome remote-debugging session (localhost:9222), builds factual JSON/workbooks, offers optional DeepLX translation, and provides tagging/merge tooling. Nothing in the repository requests unrelated cloud credentials or surprising capabilities.
Instruction Scope
SKILL.md instructs the agent/operator to run the included Python CLI scripts and to launch Chrome with --remote-debugging-port=9222 using a profile logged into Amazon. This is coherent with the scraping use case, but connecting to a logged-in Chrome profile exposes that browser session (cookies, authenticated views) to the script via the Chrome DevTools Protocol — the user should understand that the script will access the pages and session state available to that profile.
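The required Chrome launch might look like the following. This is an illustrative sketch: the binary name and path vary by OS, and the profile directory shown is an example. Use a dedicated profile directory, log into Amazon once inside it, and treat that profile as exposed to the script.

```
# Linux example; on macOS the binary is typically
# "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome".
google-chrome \
  --remote-debugging-port=9222 \
  --user-data-dir="$HOME/chrome-amazon-profile"
```

Keeping a dedicated --user-data-dir limits what the DevTools session can reach to that one profile rather than your everyday browser state.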
Install Mechanism
There is no automated install spec; this is an instruction-only skill with bundled Python scripts. Dependencies are documented (pandas, openpyxl, requests, websocket-client) and must be installed by the operator. No remote binary downloads or installers are present.
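The documented dependencies can be installed in one step, ideally inside a virtual environment so they stay isolated from the system Python:

```
python -m venv .venv && source .venv/bin/activate
pip install pandas openpyxl requests websocket-client
```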
Credentials
Registry metadata lists no required env vars, but the code supports optional DEEPLX_API_URL and DEEPLX_API_KEY (read from the environment or .env files) for translation. The scripts read those specific values and will POST review text to the configured DeepLX host if set. That behavior is expected for optional translation, but the metadata omission is an inconsistency; do not put secrets into repository-tracked .env files, and only configure translation endpoints you trust.
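If you opt into translation, a local, untracked .env might look like this. The values below are placeholders, and the file should be listed in .gitignore so the key never reaches the repository:

```
DEEPLX_API_URL=https://deeplx.example.internal/translate
DEEPLX_API_KEY=<your-key>
```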
Persistence & Privilege
The skill does not request permanent/always-on inclusion and does not modify other skills. It writes output artifacts and an SQLite cache under the chosen output directory (default amazon-review-output). Those writable files are normal for this workflow.
Assessment
This skill appears to do what it claims: scrape Amazon reviews via a locally running, logged-in Chrome session and produce deliverable spreadsheets. Before using it:

  1. Understand that you must launch Chrome with remote debugging and a profile logged into Amazon; the script can access that browser session (cookies, authenticated pages). Only run it on a machine and profile you trust to be used for scraping.
  2. If you enable automatic translation, set DEEPLX_API_URL (and optionally DEEPLX_API_KEY). Translations are POSTed to that URL, so only configure trusted endpoints and avoid committing real .env files with secrets into git.
  3. Install the documented Python dependencies and run the unit tests if desired.
  4. The registry metadata does not declare the optional DeepLX env vars; treat that as a minor metadata inconsistency and review deeplx_translate.py and any .env before use.

If you want extra assurance, inspect or grep the bundled scripts for network calls (requests, websocket usage) and run the doctor command on a harmless product URL first to observe its behavior.
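The inspection for network calls suggested above can be a single grep over the bundled scripts. Illustrative; run from the skill's root folder:

```
grep -rnE "requests\.|websocket|urlopen" scripts/
```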
Patterns worth reviewing

These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

  • scripts/amazon_review_workbook.py:741: Dynamic code execution detected.

Like a lobster shell, security has layers — review code before you run it.

Tags: amazon · automation · latest · reviews · translation · workbook
104 downloads
0 stars
3 versions
Updated 3w ago
v1.0.3
MIT-0

Amazon Review Workbook

Turn an Amazon product or review link into a two-phase delivery workbook.

This skill is designed to be portable: the scripts live inside the skill folder and do not depend on dashcamauto or any other local repo.

Quick Path

  1. If this is the first run on a machine, read references/setup.md.
  2. Run a quick health check:
python scripts/amazon_review_workbook.py doctor --url "<amazon-url>"
  3. Run factual collection:
python scripts/amazon_review_workbook.py intake --url "<amazon-url>" --output-dir "<workspace>/amazon-review-output"
  4. If DeepLX is configured and reachable, fill 评论中文版 (the Chinese review column):
python scripts/amazon_review_workbook.py translate --input-json "<workspace>/amazon-review-output/amazon_<asin>_review_rows_factual.json" --output-dir "<workspace>/amazon-review-output"
  5. Check coverage before deciding whether keyword expansion is worth the extra requests:
python scripts/amazon_review_workbook.py coverage-check --url "<amazon-url>" --db-path "<workspace>/amazon-review-output/amazon_review_cache.sqlite3"
  6. Build canonical tags and a lightweight tagging payload:
python scripts/amazon_review_workbook.py taxonomy-bootstrap --input-json "<workspace>/amazon-review-output/amazon_<asin>_review_rows_translated.json" --output-dir "<workspace>/amazon-review-output"
python scripts/amazon_review_workbook.py prepare-tagging --input-json "<workspace>/amazon-review-output/amazon_<asin>_review_rows_translated.json" --output-dir "<workspace>/amazon-review-output" --canonical-tags-json "<workspace>/amazon-review-output/canonical_tags.json"

taxonomy-bootstrap is only for building a stable canonical vocabulary for the batch. prepare-tagging consumes the full factual or translated JSON and emits a trimmed *_tagging_input.json that contains pending rows only plus cache metadata. Do not use that trimmed file as the merge source.

  7. Read references/tagging-guidelines.md, let the model fill only the pending rows in a separate labels JSON, then merge the labels back into the full base JSON and build the final workbook:
python scripts/amazon_review_workbook.py merge-build --base-json "<workspace>/amazon-review-output/amazon_<asin>_review_rows_translated.json" --labels-json "<workspace>/amazon-review-output/amazon_<asin>_labels.json" --output-dir "<workspace>/amazon-review-output" --taxonomy-version "v1" --strict

Workflow

1. Verify prerequisites

  • Confirm doctor reports a valid asin.
  • Confirm chrome_debug_ready is true.
  • If you plan to use translate, confirm deeplx_env_ready is true.
  • If deeplx_reachable is false, do not block the workflow; let the model fill 评论中文版 during tagging.

If any of these fail, read references/setup.md before continuing.
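You can also probe the debugging endpoint yourself: the Chrome DevTools Protocol serves a small JSON document at http://localhost:9222/json/version when the session that chrome_debug_ready looks for is live. A minimal, self-contained sketch of that check; the helper name and sample payload below are illustrative, not part of the skill:

```python
import json

def cdp_endpoint_ready(version_json: str) -> bool:
    """Heuristic: a live Chrome DevTools /json/version response carries a
    webSocketDebuggerUrl that scraping clients connect to."""
    try:
        info = json.loads(version_json)
    except json.JSONDecodeError:
        return False
    return bool(info.get("webSocketDebuggerUrl"))

# Payload shaped like a real /json/version response (values are examples).
sample = ('{"Browser": "Chrome/126.0.0.0", '
          '"webSocketDebuggerUrl": "ws://localhost:9222/devtools/browser/abc"}')
print(cdp_endpoint_ready(sample))      # True
print(cdp_endpoint_ready("not json"))  # False
```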

2. Use the smallest command that fits

  • For raw review collection only: use collect
  • For factual extraction plus workbook scaffolding: use intake
  • For deciding whether a keyword pass is still needed: use coverage-check
  • For rebuilding the tuned keyword state from historical data: use keyword-autotune
  • For machine translation of 评论中文版: use translate
  • For canonical tag sampling: use taxonomy-bootstrap
  • For cache-aware lightweight model input: use prepare-tagging
  • For writing the final labeled workbook: use merge-build

Examples:

python scripts/amazon_review_workbook.py collect --url "<amazon-url>" --output-dir "<workspace>/amazon-review-output"
python scripts/amazon_review_workbook.py translate --input-json "<workspace>/amazon-review-output/amazon_<asin>_review_rows_factual.json" --output-dir "<workspace>/amazon-review-output"
python scripts/amazon_review_workbook.py coverage-check --url "<amazon-url>" --db-path "<workspace>/amazon-review-output/amazon_review_cache.sqlite3"
python scripts/amazon_review_workbook.py keyword-autotune --output-dir "<workspace>/amazon-review-output" --db-path "<workspace>/amazon-review-output/amazon_review_cache.sqlite3"
python scripts/amazon_review_workbook.py taxonomy-bootstrap --input-json "<workspace>/amazon-review-output/amazon_<asin>_review_rows_translated.json" --output-dir "<workspace>/amazon-review-output"
python scripts/amazon_review_workbook.py prepare-tagging --input-json "<workspace>/amazon-review-output/amazon_<asin>_review_rows_translated.json" --output-dir "<workspace>/amazon-review-output" --canonical-tags-json "<workspace>/amazon-review-output/canonical_tags.json"
python scripts/amazon_review_workbook.py merge-build --base-json "<workspace>/amazon-review-output/amazon_<asin>_review_rows_translated.json" --labels-json "<workspace>/amazon-review-output/amazon_<asin>_labels.json" --output-dir "<workspace>/amazon-review-output" --taxonomy-version "v1" --strict

3. Keep the workbook stable

The factual and final workbooks always use the 14-column schema in references/output-schema.md.

Do not silently add or remove columns. If a field is unavailable from the page, leave it blank rather than inventing a value.

4. Tag rows only after grounding on the factual file

The model should not invent from the product page alone. Ground semantic tagging on the factual JSON/workbook created by intake or translate.

Keep the two JSON shapes distinct:

  • *_tagging_input.json from prepare-tagging is the cropped machine prompt payload for the model
  • --base-json for merge-build must be the full factual/translated record set, not the cropped tagging payload
  • --labels-json is the model's completed semantic output for the pending rows only
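The separation above is essentially a keyed overlay: labels produced for the pending rows are merged onto the full base record set, never onto the cropped tagging payload. A minimal sketch; the field names ("review_id", "sentiment") are illustrative assumptions, and the real columns come from the skill's output schema:

```python
# Sketch of the merge contract, not the skill's actual merge-build code.

def merge_labels(base_rows, label_rows):
    """Overlay model labels onto the FULL base record set.
    Rows without a new label (e.g. already cached) pass through unchanged."""
    labels = {r["review_id"]: r for r in label_rows}
    return [{**row, **labels.get(row["review_id"], {})} for row in base_rows]

base = [
    {"review_id": "R1", "text": "Great picture quality", "sentiment": ""},
    {"review_id": "R2", "text": "Battery died fast", "sentiment": "negative"},
]
labels = [{"review_id": "R1", "sentiment": "positive"}]  # pending rows only
merged = merge_labels(base, labels)
print(merged[0]["sentiment"])  # positive
```

Because the overlay keys on the full base set, any row that is absent from the labels file survives untouched, which is why the trimmed *_tagging_input.json must never stand in for --base-json.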

If translate prints translation_mode=model_fallback, fill 评论中文版 in the same tagging pass instead of waiting for DeepLX.

Use references/tagging-guidelines.md when filling:

  • 评论概括 (review summary)
  • 情感倾向 (sentiment)
  • 类别分类 (category)
  • 标签 (tags)
  • 重点标记 (highlight flag)

The preferred fast path is:

  1. taxonomy-bootstrap to build a canonical tag vocabulary for this batch
  2. prepare-tagging to create a minimal pending-row payload
  3. model labeling only for pending rows, written into a separate labels JSON
  4. merge-build to update cache and export the final workbook from the full base JSON

Collection Defaults

  • intake and collect no longer run keyword expansion implicitly in deep mode; deep now means the 18-combo pass only.
  • Run coverage-check after intake to compare current rows vs Amazon's visible reviews count before deciding to spend more requests.
  • Use --keywords only when you explicitly want a keyword pass.
  • Use --keywords with no values to run the built-in keyword preset for the selected --keyword-profile.
  • Use --keywords foo bar baz to provide an explicit keyword list.
  • Default pacing now inserts a 2.5s gap between combos/keywords to reduce rate-limit risk.
  • Built-in profiles:
    • generic: universal consumer-product terms
    • electronics: universal terms + common app/setup/hardware terms
    • dashcam: electronics profile + recording/night/parking/GPS/Wi-Fi/mount terms
  • The default keyword reuse policy is "successful": keywords that have produced results before are skipped on later runs, and recent zero-result keywords are suppressed for 72h to avoid immediate retries.
  • If you really want to brute-force rerun every keyword, use --keyword-reuse-scope none.
  • A tuned state file at <output-dir>/keyword_tuning_state.json is now read automatically when present, and refreshed after keyword runs so the skill gradually reorders towards higher-yield terms.
  • keyword-autotune can also ingest old keyword-run JSON reports via --report-glob to seed the tuned state from historical experiments.

Failure Boundaries

Do not claim success if any of these is true:

  • The script did not reach a real review page.
  • The expected XLSX/CSV for the current phase was not generated.
  • Review links, review time, or helpful votes were guessed rather than extracted.
  • The model tagged rows without first grounding on the factual JSON/workbook.
  • The cropped *_tagging_input.json was used as --base-json for merge-build.
  • The model re-labeled rows that were already cached for the same taxonomy version.
  • The workflow still claims a 13-column contract after 评论用户名 (reviewer username) was added as a real output column.

Resources
