Cryptocurrency Trader

Advisory. Audited by static analysis on Apr 30, 2026.

Overview

No suspicious patterns detected.

Findings (0) · Informational advisories (4)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

A user could rely too heavily on AI-generated trade recommendations and lose money in volatile crypto markets.

Why it was flagged

The skill gives actionable crypto trading signals and uses strong reliability language, which could lead users to over-trust its output in a high-stakes financial context, even though the documentation also includes risk warnings.

Skill content
Production-grade AI trading agent... zero-hallucination tolerance... real-world trading application
Recommendation

Use the output only as one analysis input, verify independently, avoid authorizing automatic trades, and never risk funds you cannot afford to lose.

What this means

Installing dependencies from an unreviewed requirements file can add arbitrary code to the local environment.

Why it was flagged

The skill requires a manual Python dependency install even though the registry lists no install spec; this is normal for Python tools, but it means the user should verify the provenance of each dependency.

Skill content
Required packages installed: `pip install -r requirements.txt`
Recommendation

Review requirements.txt and install in a virtual environment before running the skill.
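Part of that review can be automated. As a minimal sketch (real requirement syntax also allows extras, markers, and URLs, which this simplified check ignores), a script can flag entries that are not pinned to an exact version before anything is installed:

```python
# Minimal sketch: flag requirement lines that are not pinned to an exact
# version, so they can be reviewed before `pip install -r requirements.txt`.
# This is a simplified illustration, not a complete requirements parser.
def unpinned(lines):
    flagged = []
    for line in lines:
        req = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not req:
            continue
        if "==" not in req:                  # no exact version pin
            flagged.append(req)
    return flagged

print(unpinned(["requests==2.31.0", "pandas", "# tooling", "numpy>=1.24"]))
# prints ['pandas', 'numpy>=1.24']
```

Whatever the pinning state, running the install inside a fresh virtual environment keeps the skill's dependencies isolated from the system Python.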

What this means

If used, the skill consumes the user's LLM provider account or API quota and depends on those credentials being available in the environment.

Why it was flagged

An optional LLM assistant component can use OpenAI or Anthropic API keys, even though the registry declares no required credentials. This appears aligned with the skill's purpose, but users should be aware of it.

Skill content
openai.api_key = os.getenv('OPENAI_API_KEY') ... anthropic.Anthropic(api_key=os.getenv('ANTHROPIC_API_KEY'))
Recommendation

Use scoped API keys, monitor provider usage, and do not expose unrelated credentials in the environment.
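One way to keep credential exposure narrow is to read only the two provider variables the skill's snippet names and fail fast when neither is set. A sketch under that assumption (the allow-list pattern is a suggested hardening, not part of the skill's own code):

```python
import os

# Sketch: load only the provider keys this skill's snippet references,
# ignoring every other secret in the environment. The allow-list approach
# is our suggestion, not the skill's implementation.
ALLOWED_KEYS = ("OPENAI_API_KEY", "ANTHROPIC_API_KEY")

def load_provider_keys(env=None):
    env = os.environ if env is None else env
    found = {name: env[name] for name in ALLOWED_KEYS if name in env}
    if not found:
        raise RuntimeError("no LLM provider key configured")
    return found
```

Keys loaded this way can then be passed explicitly to the client constructors, which makes it obvious at a glance which credentials the skill actually touches.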

What this means

Trading questions, balances, or strategy details included in chat may be processed by a third-party LLM provider.

Why it was flagged

The optional conversational assistant stores user messages in conversation history and sends analysis context to an external LLM provider when used.

Skill content
Supports OpenAI GPT-4 and Anthropic Claude APIs ... self.conversation_history.append({'role': 'user', 'content': user_message}) ... response = self._get_llm_response(system_context)
Recommendation

Avoid entering sensitive financial details unless needed, and review the provider's privacy and retention settings.
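Beyond provider-side settings, a lightweight client-side redaction pass can mask obvious financial identifiers before a message enters the conversation history. The patterns below are illustrative assumptions (an Ethereum-style address and dollar amounts), not a complete or reliable redaction scheme:

```python
import re

# Sketch: mask obvious financial identifiers before a user message is
# appended to conversation history and sent to an external LLM provider.
# These two patterns are illustrative assumptions only.
PATTERNS = [
    (re.compile(r"\b(?:0x)?[0-9a-fA-F]{40}\b"), "[WALLET]"),  # ETH-style address
    (re.compile(r"\$\s?\d[\d,]*(?:\.\d+)?"), "[AMOUNT]"),     # dollar amounts
]

def redact(message):
    for pattern, placeholder in PATTERNS:
        message = pattern.sub(placeholder, message)
    return message
```

Running user input through such a filter before the `conversation_history.append(...)` call quoted above would reduce, though not eliminate, what a third-party provider sees.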