Agent Compete Scope Review
Audited by ClawScan on May 10, 2026.
Overview
Prompt-injection indicators were detected in the submitted artifacts (system-prompt-override); human review is required before treating this skill as clean.
This skill appears purpose-aligned and not malicious, but the registry metadata is incomplete: the README and code expect npm setup and API keys that the registry does not declare. Use dedicated keys, avoid confidential inputs unless the third-party providers are approved for that data, and disable the mock auto-job before deploying beyond testing.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Installing or running the code may require supplying API keys that can incur usage charges or grant access to provider accounts.
The skill requires provider API keys for its core search and LLM functions, even though the registry metadata declares no credentials or required environment variables.
- **Groq API Key**: LLM (Llama 3.3)
- **Tavily API Key**: Web search
Use dedicated, least-privilege API keys with spending limits, and treat the registry credential metadata as incomplete.
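One low-effort mitigation is a fail-fast startup check so the skill never runs with partially configured credentials. A minimal sketch, assuming the environment variable names `GROQ_API_KEY` and `TAVILY_API_KEY` (the registry does not declare them):

```javascript
// Return the names of required provider keys that are absent or blank.
// GROQ_API_KEY and TAVILY_API_KEY are assumed names, not registry-declared.
function missingKeys(env, required = ['GROQ_API_KEY', 'TAVILY_API_KEY']) {
  return required.filter((name) => !(env[name] && env[name].trim()));
}

// Example: log and refuse to proceed when any key is missing.
const absent = missingKeys(process.env);
if (absent.length > 0) {
  console.error(`Refusing to start; missing keys: ${absent.join(', ')}`);
}
```

Pairing a check like this with per-key spending limits in each provider dashboard keeps a misconfigured or leaked key from running up charges silently.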
Product descriptions, competitor names, and analysis context may be shared with third-party AI/search services.
The default LLM adapter sends the generated prompts, including product and competitor-analysis context, to an external LLM provider.
baseURL: 'https://api.groq.com/openai/v1',
...
messages: [
{ role: 'system', content: systemPrompt },
{ role: 'user', content: userMessage },
]

Avoid putting confidential product strategy or non-public competitor information into this skill unless those providers are approved for that data.
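If only certain providers are approved for sensitive data, the adapter's hard-coded endpoint could be made configurable so analysis can be routed to an approved OpenAI-compatible endpoint instead. A sketch under assumptions (`LLM_BASE_URL` is a hypothetical variable name, not part of the skill):

```javascript
// Resolve the LLM endpoint from the environment, falling back to the
// Groq URL hard-coded in the skill. LLM_BASE_URL is an assumed name.
function resolveBaseURL(env) {
  return env.LLM_BASE_URL || 'https://api.groq.com/openai/v1';
}
```

This keeps default behavior unchanged while letting a team point the skill at a self-hosted or approved gateway before confidential context is sent anywhere.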
A malicious or SEO-manipulated page could bias the competitor profile or recommendations returned to the user.
Web-search result content is inserted directly into the LLM prompt, so instructions or misleading text from web pages could influence the model’s analysis.
.map((a, i) => `[${i + 1}] ${a.title}\nURL: ${a.url}\nContent: ${a.content}`)
...
const userMessage = `Competitor: ${competitor}\n\nCollected information:\n${articlesText}`;

Treat the output as advisory, verify cited sources, and add prompt-injection defenses or source filtering if it informs important strategy decisions.
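A lightweight defense is to delimit untrusted web content and strip obvious override phrasing before it reaches the prompt. A sketch under assumptions (the `sanitizeArticle` and `renderArticles` helpers are hypothetical, not part of the skill):

```javascript
// Neutralize common instruction-override phrasing in untrusted text.
// This is a heuristic filter, not a complete prompt-injection defense.
function sanitizeArticle(text) {
  return text.replace(
    /ignore (all |any )?(previous|prior|above) instructions/gi,
    '[removed]'
  );
}

// Wrap each search result in explicit delimiters so the system prompt can
// tell the model to treat delimited content as data, never as instructions.
function renderArticles(articles) {
  return articles
    .map((a, i) =>
      `[${i + 1}] ${a.title}\nURL: ${a.url}\n` +
      `<untrusted_web_content>\n${sanitizeArticle(a.content)}\n</untrusted_web_content>`)
    .join('\n\n');
}
```

The delimiters only help if the system prompt explicitly instructs the model to ignore directives inside them; regex filtering alone is easy to bypass, so source allow-listing is still worth considering for high-stakes use.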
A user following the README will install npm dependencies and configure secrets outside what the registry requirements disclose.
The README documents npm-based setup and .env configuration despite the registry saying there is no install spec; package-lock.json is present, but users should notice the undeclared setup path.
1. `cp .env.example .env`
2. Enter your API keys in the `.env` file
3. `npm install`
Review the package files before installing, install from a trusted environment, and expect to manage a local .env file.
Running the demo entry point can trigger search and LLM API calls using configured credentials, even without submitting a real job.
The included start entry point simulates a job automatically after three seconds rather than waiting only for a real user-supplied ACP job.
setTimeout(async () => {
const mockJob = {
id: 'job-compete-123',
payload: {
my_product: "AI 기반 자동화 마케팅 툴",
competitors: ["Competitor A", "Competitor B"],
focus: "all"
},
...
await callback(mockJob);
}, 3000);

Run it first in a test environment, remove or disable the mock job for production use, and monitor API usage.
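One way to make the demo opt-in is to gate it behind an explicit environment flag, so production startups never trigger paid API calls on their own. A sketch, assuming a flag name `ENABLE_MOCK_JOB` (not defined by the skill):

```javascript
// Only schedule the demo job when explicitly enabled; defaults to off.
// ENABLE_MOCK_JOB is an assumed flag name, not part of the skill.
function shouldRunMockJob(env) {
  return env.ENABLE_MOCK_JOB === 'true';
}

if (shouldRunMockJob(process.env)) {
  setTimeout(async () => {
    // ... build mockJob and await callback(mockJob) as in the original ...
  }, 3000);
}
```

With the flag unset, the process waits only for real ACP jobs, and the simulated job can still be exercised deliberately in a test environment.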
