Agent Compete Scope
PassAudited by VirusTotal on May 11, 2026.
Overview
Type: OpenClaw Skill Name: agent-compete-scope Version: 1.0.2 The CompeteScope bundle is a legitimate competitor analysis tool that uses the Tavily API for web searching and various LLM providers (Groq, Anthropic, Gemini) for data synthesis. The code logic in src/analyzer.ts and src/fetcher.ts is well-structured, follows the stated purpose in SKILL.md, and contains no evidence of data exfiltration, malicious execution, or prompt injection. All external network calls are directed to official API endpoints (api.tavily.com, api.groq.com, etc.), and dependencies in package.json are standard industry libraries.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Installing or running the code may require giving it API keys that can incur usage charges or access provider accounts.
The skill requires provider API keys for its core search and LLM functions, even though the registry metadata declares no credentials or required environment variables.
- **Groq API Key**: LLM (Llama 3.3) - **Tavily API Key**: 웹 검색
Use dedicated, least-privilege API keys with spending limits, and treat the registry credential metadata as incomplete.
Product descriptions, competitor names, and analysis context may be shared with third-party AI/search services.
The default LLM adapter sends the generated prompts, including product and competitor-analysis context, to an external LLM provider.
baseURL: 'https://api.groq.com/openai/v1',
...
messages: [
{ role: 'system', content: systemPrompt },
{ role: 'user', content: userMessage },
]Avoid putting confidential product strategy or non-public competitor information into this skill unless those providers are approved for that data.
A malicious or SEO-manipulated page could bias the competitor profile or recommendations returned to the user.
Web-search result content is inserted directly into the LLM prompt, so instructions or misleading text from web pages could influence the model’s analysis.
.map((a, i) => `[${i + 1}] ${a.title}\nURL: ${a.url}\n내용: ${a.content}`)
...
const userMessage = `경쟁사: ${competitor}\n\n수집된 정보:\n${articlesText}`;Treat the output as advisory, verify cited sources, and prefer adding prompt-injection defenses or source filtering if this is used for important strategy decisions.
A user following the README will install npm dependencies and configure secrets outside what the registry requirements disclose.
The README documents npm-based setup and .env configuration despite the registry saying there is no install spec; package-lock.json is present, but users should notice the undeclared setup path.
1. `cp .env.example .env` 2. `.env` 파일에 API 키 입력 3. `npm install`
Review the package files before installing, install from a trusted environment, and expect to manage a local .env file.
Running the demo entry point can trigger search and LLM API calls using configured credentials, even without submitting a real job.
The included start entry point simulates a job automatically after three seconds rather than waiting only for a real user-supplied ACP job.
setTimeout(async () => {
const mockJob = {
id: 'job-compete-123',
payload: {
my_product: "AI 기반 자동화 마케팅 툴",
competitors: ["Competitor A", "Competitor B"],
focus: "all"
},
...
await callback(mockJob);
}, 3000);Run it first in a test environment, remove or disable the mock job for production use, and monitor API usage.
