Agent Trend Radar

Review

Audited by ClawScan on May 10, 2026.

Overview

Prompt-injection indicators were detected in the submitted artifacts (system-prompt-override); human review is required before treating this skill as clean.

Before installing or running, create provider API keys intentionally, expect your keywords and retrieved snippets to be sent to Tavily and the selected LLM provider, and avoid `npm start` if you do not want the included mock sample job to consume API quota. ClawScan detected prompt-injection indicators (system-prompt-override); although the model response was benign, the skill still requires human review.

Findings (4)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Users may not realize from the registry metadata that running the skill requires external service credentials and may consume API quota.

Why it was flagged

The skill needs provider API keys for its advertised search and LLM functions, while the registry metadata lists no required env vars or primary credential.

Skill content
- **Groq API Key**: LLM (Llama 3.3)
- **Tavily API Key**: Web search

Recommendation

Declare `TAVILY_API_KEY`, `GROQ_API_KEY`, and optional LLM provider keys in metadata, and use keys with minimal necessary scope or quota limits.
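Beyond declaring the keys in metadata, the skill could also fail fast at startup when they are absent. This is a minimal sketch, not the skill's actual code; only `TAVILY_API_KEY` and `GROQ_API_KEY` come from the finding, and the helper names are hypothetical:

```javascript
// Required provider credentials named in the finding. Optional
// LLM provider keys would be tracked in a separate list.
const REQUIRED_KEYS = ['TAVILY_API_KEY', 'GROQ_API_KEY'];

// Returns the required keys that are absent from the given environment.
function missingEnvKeys(env) {
  return REQUIRED_KEYS.filter((k) => !env[k]);
}

// Call before scheduling any jobs so the credential requirement
// surfaces immediately instead of as a mid-run API error.
function assertEnv(env = process.env) {
  const missing = missingEnvKeys(env);
  if (missing.length) {
    throw new Error(`Missing required env vars: ${missing.join(', ')}`);
  }
}
```

A check like this complements the metadata declaration: the registry entry tells users what to provision, and the startup check prevents partial runs that still consume quota on one provider.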

What this means

Running `npm start` may immediately make external API calls and use quota for the sample keywords.

Why it was flagged

Starting the included mock agent schedules a hardcoded test job that can call Tavily and the configured LLM without a separate user-supplied payload.

Skill content
```javascript
setTimeout(async () => {
  const mockJob = {
    ...
    payload: { keywords: ['AI Agent', 'DeFi'], timeframe: '7d', region: 'global' },
    ...
  };
  await callback(mockJob);
}, 3000);
```

Recommendation

Treat `npm start` as an active test run, or remove/disable the mock auto-job in production packaging.
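One way to disable the auto-job without removing it is to gate it behind an explicit opt-in. This is a sketch around the quoted `setTimeout` snippet; the `RUN_MOCK_JOB` flag and function names are hypothetical, not part of the skill:

```javascript
// Hypothetical opt-in flag: a plain `npm start` schedules nothing
// and therefore spends no Tavily/LLM quota.
function shouldRunMockJob(env = process.env) {
  return env.RUN_MOCK_JOB === 'true';
}

// Wraps the skill's hardcoded sample job; returns the timer when
// scheduled, or null when the opt-in flag is not set.
function scheduleMockJob(callback, env = process.env) {
  if (!shouldRunMockJob(env)) return null;
  return setTimeout(async () => {
    const mockJob = {
      payload: { keywords: ['AI Agent', 'DeFi'], timeframe: '7d', region: 'global' },
    };
    await callback(mockJob);
  }, 3000);
}
```

With this shape, production packaging can simply leave `RUN_MOCK_JOB` unset, while testers opt in deliberately.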

What this means

Users have less external context for verifying who maintains the package or where updates come from.

Why it was flagged

The package has limited provenance information. It does include a package lock and standard npm dependencies, so this is a provenance note rather than evidence of unsafe install behavior.

Skill content
Source: unknown
Homepage: none

Recommendation

Prefer installing from a reviewed package version, verify the lockfile/dependencies, and publish a source repository or homepage if available.

What this means

Search results can influence the model’s classification and explanation, so public web content may bias or confuse the output.

Why it was flagged

Untrusted web-search result titles, URLs, and content are fed into the LLM context to produce trend classifications.

Skill content
```javascript
const articlesText = articles
  .map((a, i) => `[${i + 1}] ${a.title}\nURL: ${a.url}\nContent: ${a.content}`)
  ...
const raw = await callLLM(systemPrompt, userMessage);
```

Recommendation

Treat trend classifications as advisory, review evidence URLs, and consider adding prompt-injection-resistant filtering or source-quality checks.
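The recommended filtering could start with simple pre-checks before search results enter the LLM context. This is a heuristic sketch only, not a complete defense: the domain allowlist is a placeholder and the instruction-pattern regex is illustrative, neither comes from the skill:

```javascript
// Placeholder source-quality allowlist; a real deployment would
// maintain its own vetted domain list.
const TRUSTED_DOMAINS = ['example.com', 'example.org'];

function isTrustedSource(url) {
  try {
    return TRUSTED_DOMAINS.includes(new URL(url).hostname);
  } catch {
    return false; // unparseable URLs are dropped
  }
}

// Crude prompt-injection filter: drop lines that look like
// instructions aimed at the model. Pattern matching alone is
// not sufficient, but it removes the most obvious payloads.
function stripInstructionLines(text) {
  return text
    .split('\n')
    .filter((line) => !/ignore (all|previous|prior) instructions|system prompt/i.test(line))
    .join('\n');
}

// Applied to the `articles` array before it is formatted into
// the LLM context string.
function sanitizeArticles(articles) {
  return articles
    .filter((a) => isTrustedSource(a.url))
    .map((a) => ({ ...a, content: stripInstructionLines(a.content) }));
}
```

Even with such filtering in place, the classifications should still be treated as advisory and the evidence URLs reviewed by a human.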