Pywayne Llm Chat Window

Advisory. Audited by static analysis on Apr 30, 2026.

Overview

No suspicious patterns detected.

Findings (0)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static-scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

If a real API key is pasted into code or shared, others could use the user's LLM account or consume its quota.

Why it was flagged

The skill requires a provider API key for the LLM chat workflow. This is purpose-aligned, but it is still sensitive account access.

Skill content
api_key="your_api_key" ... | `api_key` | required | API key |
Recommendation

Use a provider-specific key with the least necessary access, store it outside shared code when possible, and rotate it if exposed.
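One way to keep the key out of shared code is to read it from the environment at startup. This is a minimal sketch, assuming a Python caller; the variable name `DEEPSEEK_API_KEY` is illustrative, not something the skill documents.

```python
import os

def load_api_key(env_var="DEEPSEEK_API_KEY"):
    """Read the provider key from the environment instead of source code.

    The env-var name is an assumption for illustration; use whatever
    name fits your deployment and secret-management setup.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"Set {env_var} in the environment; do not hardcode API keys."
        )
    return key
```

Keys loaded this way stay out of version control and can be rotated without touching code.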

What this means

Messages entered into the chat may be sent to the configured LLM provider and handled under that provider's policies.

Why it was flagged

The chat window is configured to send conversation requests to an external OpenAI-compatible API provider. This is expected for an LLM chat client, but it means chat content leaves the local machine.

Skill content
base_url="https://api.deepseek.com/v1", api_key="your_api_key", model="deepseek-chat"
Recommendation

Avoid sending private or regulated information unless the chosen provider and endpoint are trusted and appropriate.
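To make the data flow concrete, here is a minimal sketch of the request an OpenAI-compatible client builds from the configuration shown above. The payload shape is assumed from the common chat-completions convention, not taken from the skill's code; nothing is sent here, but in a real client everything in `messages` goes verbatim to the remote endpoint.

```python
def build_chat_request(base_url, model, messages):
    """Assemble the URL and JSON body an OpenAI-compatible client posts.

    Every message (including user-typed chat content) ends up in the
    request body, which is why chat content leaves the local machine.
    """
    return {
        "url": base_url.rstrip("/") + "/chat/completions",
        "json": {"model": model, "messages": messages},
    }

# With the skill's sample config, requests would target:
# https://api.deepseek.com/v1/chat/completions
req = build_chat_request(
    "https://api.deepseek.com/v1",
    "deepseek-chat",
    [{"role": "user", "content": "hello"}],
)
```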

What this means

The reviewed skill text looks benign, but the safety of the actual `pywayne.llm.chat_window` implementation depends on where it is obtained.

Why it was flagged

The reviewed artifact is only documentation and does not include the implementation or an install source for the referenced module, so the actual package behavior was not inspected here.

Skill content
Source: unknown; Homepage: none ... No install spec — this is an instruction-only skill ... No code files present
Recommendation

Install the referenced package only from a trusted source and review its code or provenance before using real API keys.
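One way to tie an install to code you actually reviewed is to record the hash of the downloaded artifact at audit time and check it before reuse. This is an illustrative sketch; `verify_artifact` and the pinning workflow are assumptions for demonstration, not part of the skill.

```python
import hashlib

def verify_artifact(path, expected_sha256):
    """Compare a downloaded package file against a hash recorded
    when the file was first audited.

    `expected_sha256` is a pin you record yourself after review,
    not a value published by the skill.
    """
    with open(path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()
    return digest == expected_sha256
```

If the hash no longer matches, the artifact changed since the audit and should be re-reviewed before any real API key is used with it.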