suspicious.env_credential_access
- Location: src/fetcher.ts:8
- Finding: Environment variable access combined with network send.
Advisory. Audited by static analysis on May 10, 2026.
Detected: suspicious.env_credential_access
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Users may not realize from the registry metadata that running the skill requires external service credentials and may consume API quota.
The skill needs provider API keys for its advertised search and LLM functions, while the registry metadata lists no required env vars or primary credential.
- **Groq API Key**: LLM (Llama 3.3)
- **Tavily API Key**: web search
Declare `TAVILY_API_KEY`, `GROQ_API_KEY`, and optional LLM provider keys in metadata, and use keys with minimal necessary scope or quota limits.
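Beyond declaring the variables in metadata, the skill could fail fast when keys are missing rather than surfacing opaque provider errors mid-run. A minimal sketch, assuming the skill reads these names directly from `process.env` (`assertRequiredEnv` is an illustrative helper, not part of the skill):

```typescript
// Names taken from the recommendation above; adjust if the skill
// uses different variable names.
const REQUIRED_ENV = ["TAVILY_API_KEY", "GROQ_API_KEY"];

// Throw a single descriptive error listing every missing key,
// instead of letting the first API call fail with an HTTP 401.
function assertRequiredEnv(
  env: Record<string, string | undefined> = process.env
): void {
  const missing = REQUIRED_ENV.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(
      `Missing required environment variables: ${missing.join(", ")}`
    );
  }
}
```

Calling this at startup also doubles as in-code documentation of the credential requirement that the registry metadata currently omits.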
Running `npm start` may immediately make external API calls and use quota for the sample keywords.
Starting the included mock agent schedules a hardcoded test job that can call Tavily and the configured LLM without a separate user-supplied payload.
```typescript
setTimeout(async () => {
  const mockJob = {
    ...
    payload: { keywords: ['AI Agent', 'DeFi'], timeframe: '7d', region: 'global' },
    ...
  };
  await callback(mockJob);
}, 3000);
```

Treat `npm start` as an active test run, or remove/disable the mock auto-job in production packaging.
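One way to keep the test harness while preventing quota spend on every `npm start` is to gate the mock job behind an explicit opt-in flag. A sketch under stated assumptions: the flag name `RUN_MOCK_JOB` and the `MockJob` shape are inferred from the snippet above, not taken from the skill:

```typescript
interface MockJob {
  payload: { keywords: string[]; timeframe: string; region: string };
}

// Schedule the hardcoded test job only when explicitly requested,
// so a plain `npm start` makes no Tavily or LLM calls.
function scheduleMockJob(
  callback: (job: MockJob) => Promise<void>,
  env: Record<string, string | undefined> = process.env
): ReturnType<typeof setTimeout> | null {
  if (env.RUN_MOCK_JOB !== "1") return null; // default: no auto-run
  return setTimeout(() => {
    const mockJob: MockJob = {
      payload: { keywords: ["AI Agent", "DeFi"], timeframe: "7d", region: "global" },
    };
    void callback(mockJob);
  }, 3000);
}
```

Returning the timer handle also lets callers cancel the pending job with `clearTimeout` during shutdown.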
Users have less external context for verifying who maintains the package or where updates come from.
The package has limited provenance information. It does include a package lock and standard npm dependencies, so this is a provenance note rather than evidence of unsafe install behavior.
Source: unknown
Homepage: none
Prefer installing from a reviewed package version, verify the lockfile/dependencies, and publish a source repository or homepage if available.
Search results can influence the model’s classification and explanation, so public web content may bias or confuse the output.
Untrusted web-search result titles, URLs, and content are fed into the LLM context to produce trend classifications.
```typescript
const articlesText = articles
  .map((a, i) => `[${i + 1}] ${a.title}\nURL: ${a.url}\n내용: ${a.content}`) // "내용" = "content"
  ...
const raw = await callLLM(systemPrompt, userMessage);
```

Treat trend classifications as advisory, review evidence URLs, and consider adding prompt-injection-resistant filtering or source-quality checks.
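Such a source-quality check could run over the articles before they are interpolated into the LLM context. A coarse sketch, assuming the article shape shown above; the patterns and length cap are illustrative and would need tuning, and no filter of this kind fully prevents prompt injection:

```typescript
interface Article {
  title: string;
  url: string;
  content: string;
}

// Example injection phrases to screen for; a real deployment would
// maintain a broader, regularly updated pattern set.
const INJECTION_PATTERNS = [
  /ignore (all |any )?(previous|prior) instructions/i,
  /you are now/i,
];

// Pre-filter search results: drop non-HTTPS sources, drop articles
// matching injection phrases, and cap content length so one page
// cannot dominate the prompt.
function filterArticles(articles: Article[], maxChars = 1000): Article[] {
  return articles
    .filter((a) => a.url.startsWith("https://"))
    .filter(
      (a) => !INJECTION_PATTERNS.some((p) => p.test(a.title) || p.test(a.content))
    )
    .map((a) => ({ ...a, content: a.content.slice(0, maxChars) }));
}
```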