Ntriq X402 Code Review Batch
Advisory
Audited by VirusTotal on Apr 16, 2026.
Overview
Type: OpenClaw Skill
Name: ntriq-x402-code-review-batch
Version: 1.0.0

The skill instructs the agent to send up to 500 code snippets to an external endpoint (https://x402.ntriq.co.kr/code-review-batch) for a fee of $15.00 USDC via the x402 protocol. While this aligns with the stated purpose of a batch code review service, it facilitates the exfiltration of potentially sensitive source code to a third-party domain and includes instructions for automated financial transactions, both of which are high-risk behaviors.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Each paid call made through the skill can spend $15 USDC.
The skill requires an x402 payment header and discloses a $15 USDC charge; this is purpose-aligned but gives the invocation financial impact.
X-PAYMENT: <x402-payment-header> ... Price: $15.00 USDC flat ... Network: Base mainnet (EIP-3009 gasless)
Only invoke it when you intend to pay, and use wallet/payment controls or explicit confirmation for each purchase.
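The confirmation advice above can be sketched as a gate that refuses to send any payment header without explicit operator approval. This is a minimal illustration, not part of the skill: the function names, prompt text, and `approve` callback are all hypothetical, and the actual HTTP request is omitted.

```python
# Hypothetical confirmation gate for a paid x402 call.
# ENDPOINT and PRICE_USDC come from the skill's documentation;
# everything else here is illustrative.
ENDPOINT = "https://x402.ntriq.co.kr/code-review-batch"
PRICE_USDC = "15.00"

def confirm_payment(price, endpoint, approve):
    """Ask the operator before authorizing a charge; never pay silently."""
    prompt = f"Authorize a {price} USDC payment to {endpoint}? [y/N] "
    return approve(prompt).strip().lower() == "y"

def paid_call(payment_header, approve):
    """Return a call descriptor only if the operator approved the charge."""
    if not confirm_payment(PRICE_USDC, ENDPOINT, approve):
        return None  # refuse: no X-PAYMENT header is ever sent
    # A real client would POST here with headers={"X-PAYMENT": payment_header};
    # the network call is deliberately omitted in this sketch.
    return {"endpoint": ENDPOINT, "paid": PRICE_USDC}
```

Anything other than an explicit "y" blocks the charge, so an automated agent defaults to not paying.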
Private source code or embedded secrets could leave your environment if included in the submitted snippets.
The documented workflow sends user-provided code snippets to an external provider endpoint for review.
POST https://x402.ntriq.co.kr/code-review-batch ... "snippets": [ ... ]
Submit only code you are comfortable sharing with this provider, and redact secrets before use.
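One way to act on the redaction advice is a pre-submission pass over each snippet. The patterns below are illustrative examples of common credential shapes, not a complete secret scanner, and the helper name is hypothetical.

```python
import re

# Hypothetical pre-submission redaction pass; these two patterns are
# examples only and will not catch every secret format.
SECRET_PATTERNS = [
    # key = value assignments for common credential names
    re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[=:]\s*\S+"),
    # PEM-encoded private key blocks
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----"),
]

def redact(snippet):
    """Replace likely credentials with a placeholder before any upload."""
    for pat in SECRET_PATTERNS:
        snippet = pat.sub("[REDACTED]", snippet)
    return snippet

# Run every snippet through redact() before building the "snippets" payload.
snippets = ['api_key = "sk-live-abc123"', "print('hello')"]
safe = [redact(s) for s in snippets]
```

A pass like this reduces, but does not eliminate, the risk of leaking embedded secrets; manual review of what is submitted is still warranted.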
A user might overestimate how locally or privately the submitted code is processed.
The skill also documents a remote HTTPS API call, so the "100% local inference" wording could be misread as meaning local to the user's machine, when it more likely refers to the provider's own hardware.
Review up to 500 code snippets in a single call. Flat $15.00 USDC. 100% local inference on Mac Mini.
Treat this as a third-party remote service unless the provider clearly documents privacy, retention, and where inference actually runs.
