!
Purpose & Capability
The code, README, and package.json show that the agent legitimately needs LLM API keys (GROQ_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY) and a Tavily API key for news, which is consistent with a sentiment-analysis agent. However, the registry metadata claims no required env vars, while the source loads a .env file and expects those keys. This mismatch between declared requirements and actual code is an inconsistency that should be resolved before trusting the skill.
!
Instruction Scope
The SKILL.md is minimal, but the included code (fetcher/analyzer/llm adapters) dictates runtime behavior: loading ../.env, calling external APIs (CoinGecko, Tavily) and LLM services, and enforcing strict system prompts for LLM output. The code reads a .env file directly (potentially accessing any secrets placed there) and logs the presence of API keys. The runtime behavior therefore goes beyond what is visible in the SKILL.md front matter and metadata.
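The risk of the ../.env load is easiest to see in code. The sketch below is a hypothetical reconstruction of a dotenv-style parser (the actual adapter code is not shown here and may differ); the point is that such a load pulls in every entry in the file, not only the keys the skill needs:

```typescript
// Hypothetical reconstruction of the dotenv-style loading described above.
// A parser like this ingests EVERY entry from ../.env, not only the keys
// the skill itself uses, so an unrelated secret placed in that file
// becomes reachable by the skill's code.
function parseEnvFile(contents: string): Record<string, string> {
  const vars: Record<string, string> = {};
  for (const line of contents.split("\n")) {
    const match = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=\s*(.*?)\s*$/);
    if (match) vars[match[1]] = match[2];
  }
  return vars;
}

// A .env mixing a needed key with an unrelated secret: both are loaded.
const loaded = parseEnvFile("GROQ_API_KEY=gsk_xxx\nUNRELATED_DB_PASSWORD=hunter2");
```

This is why the report recommends keeping any .env handed to the skill down to the minimal required keys.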
ℹ
Install Mechanism
There is no formal install spec in registry metadata (instruction-only), but the package.json and package-lock.json indicate npm dependencies (@anthropic-ai/sdk, openai, dotenv). Installing would pull packages from the public npm registry (moderate risk). No external arbitrary download URLs or archive extraction were found.
!
Credentials
The code requires multiple secret API keys (GROQ_API_KEY, TAVILY_API_KEY, ANTHROPIC_API_KEY, or GOOGLE_API_KEY, depending on provider selection). These are proportionate to needing LLM and news services, but the skill: (a) does not declare required env vars in its metadata, (b) automatically loads ../.env, which may contain other unrelated secrets, and (c) prints LLM_PROVIDER and API key presence to stdout, which may end up in logs and increases the risk of accidental leakage.
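A safer pattern than logging key presence is to validate only what the selected provider needs and report missing key names, never values. A minimal sketch (the provider-to-key mapping below is an assumption based on the keys listed in this report, not taken from the skill's source):

```typescript
// Assumed mapping from provider selection to the env vars it needs.
const REQUIRED_BY_PROVIDER: Record<string, string[]> = {
  groq: ["GROQ_API_KEY", "TAVILY_API_KEY"],
  anthropic: ["ANTHROPIC_API_KEY", "TAVILY_API_KEY"],
  google: ["GOOGLE_API_KEY", "TAVILY_API_KEY"],
};

// Return the NAMES of missing keys, never their values, so the resulting
// error message cannot leak secret material into stdout or log files.
function missingKeys(
  provider: string,
  env: Record<string, string | undefined>
): string[] {
  return (REQUIRED_BY_PROVIDER[provider] ?? []).filter((name) => !env[name]);
}
```

Failing fast on `missingKeys(provider, process.env)` replaces the presence-logging the report flags, without writing anything key-derived to stdout.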
✓
Persistence & Privilege
The 'always' flag is false; the skill does not request persistent platform-level privileges or modify other skills. It neither claims nor requires 'always: true', and it does not attempt to change agent or system configuration outside its own code.
Scan Findings in Context
[system-prompt-override] expected: The pre-scan flagged 'system-prompt-override' in SKILL.md. The code explicitly constructs system prompts for the LLM (e.g., forcing strict JSON output), which is expected behavior for an LLM-driven analyzer. This pattern can be legitimate here, but it is also the mechanism by which a skill can try to coerce model behavior; review the prompts and ensure they don't instruct the model to ignore host policies or exfiltrate data.
What to consider before installing
- Metadata mismatch: The registry lists no required env vars, but the code expects and uses LLM API keys (GROQ_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY) and a TAVILY_API_KEY. Do not provide production secrets until you confirm which keys are actually needed.
- .env loading: The code explicitly loads ../.env. That will read any secrets placed there; avoid sharing a .env containing unrelated credentials. Prefer running in a disposable/sandbox environment.
- Network calls: The agent calls external services (CoinGecko, Tavily, and LLM provider APIs). Confirm you trust those endpoints and the privacy of any data you send (news text, token names, and any contextual logs).
- Logging: The code logs API key presence and provider names; logs may expose metadata about your keys. Consider removing or redacting such logging before running in production.
- Dependencies: Installation will pull packages from npm (openai, @anthropic-ai/sdk, dotenv). Audit dependencies or install in an isolated environment.
- Prompt behavior: The skill sends strict JSON-enforcing system prompts to the LLM. This is normal for structured output but could be abused to coerce the model; review the system prompts in analyzer.ts to ensure they do not request data exfiltration or override host policies.
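When monitoring outbound traffic, or wrapping the skill's fetch calls in a sandbox, an allowlist of the endpoints named above is a useful baseline. The hostnames below are assumptions about the providers' API hosts, so verify them against the actual code before relying on them:

```typescript
// Assumed API hosts for the services this skill is described as calling.
const ALLOWED_HOSTS = new Set([
  "api.coingecko.com",
  "api.tavily.com",
  "api.anthropic.com",
  "api.groq.com",
  "generativelanguage.googleapis.com",
]);

// Reject any outbound URL whose host is not on the allowlist;
// malformed URLs are rejected outright.
function isAllowedEndpoint(url: string): boolean {
  try {
    return ALLOWED_HOSTS.has(new URL(url).hostname);
  } catch {
    return false;
  }
}
```

Any request to a host outside this set during a sandboxed run (other data endpoints, telemetry, or an unexpected domain) is worth investigating before installing.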
Recommended actions:
1) Inspect and sanitize the .env.example and any .env you plan to use; only include the minimal keys needed.
2) Run the skill in a sandboxed environment (no access to sensitive .env or production networks) and monitor outbound traffic.
3) If you intend to use your own LLM keys, prefer creating dedicated limited-permission API keys.
4) Ask the publisher to update registry metadata to declare required env vars and to remove any logging of secret presence.
Given the undisclosed secret access and the metadata mismatch, treat this skill as suspicious until these issues are resolved.