Pywayne LLM Chat Window
Advisory. Audited by static analysis on Apr 30, 2026.
Overview
No suspicious patterns detected.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
If a real API key is pasted into code or shared, others could use the key holder's LLM account or consume its quota.
The skill requires a provider API key for the LLM chat workflow. This is purpose-aligned, but it still grants sensitive account access.
Evidence: `api_key="your_api_key"`; the parameter table lists `api_key` as required ("API key").
Use a provider-specific key with the least necessary access, store it outside shared code when possible, and rotate it if exposed.
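A minimal sketch of the recommendation above: read the key from the environment rather than embedding it in shared code. The variable name `DEEPSEEK_API_KEY` and the placeholder value are illustrative assumptions, not something prescribed by the skill's documentation.

```python
import os

# Illustrative only: the env-var name and placeholder are assumptions.
# In real use, export the key in your shell profile or a secrets manager,
# never in code that gets committed or shared.
os.environ.setdefault("DEEPSEEK_API_KEY", "sk-example-placeholder")  # demo fallback only

api_key = os.environ["DEEPSEEK_API_KEY"]
```

The chat window can then be constructed with `api_key=api_key` instead of a hardcoded literal, so rotating the key never requires a code change.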
Messages entered into the chat may be sent to the configured LLM provider and handled under that provider's policies.
The chat window is configured to send conversation requests to an external OpenAI-compatible API provider. This is expected for an LLM chat client, but it means chat content leaves the local machine.
Evidence: `base_url="https://api.deepseek.com/v1", api_key="your_api_key", model="deepseek-chat"`
Avoid sending private or regulated information unless the chosen provider and endpoint are trusted and appropriate.
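To make the data-flow concrete, here is an illustrative configuration mirroring the documented parameters. The values are placeholders copied from the skill text, not an endorsement of a particular provider; the actual `pywayne.llm.chat_window` constructor was not inspected in this review.

```python
# Every message typed into the chat window is sent to base_url, so the
# trustworthiness of that endpoint bounds the privacy of the conversation.
chat_config = {
    "base_url": "https://api.deepseek.com/v1",  # chat content leaves the machine here
    "api_key": "your_api_key",                  # replace with a least-privilege key
    "model": "deepseek-chat",
}
```

Swapping `base_url` to a different OpenAI-compatible endpoint (including a self-hosted one) redirects all conversation content, which is why endpoint choice matters as much as key handling.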
The reviewed skill text looks benign, but the safety of the actual `pywayne.llm.chat_window` implementation depends on where it is obtained from.
The reviewed artifact is only documentation and does not include the implementation or an install source for the referenced module, so the actual package behavior was not inspected here.
Evidence: Source: unknown; Homepage: none; no install spec (instruction-only skill); no code files present.
Install the referenced package only from a trusted source and review its code or provenance before using real API keys.
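One lightweight way to act on this recommendation is to check the installed distribution's recorded provenance before trusting it with real keys. This is a sketch using the standard library; it assumes the package installs under the distribution name `pywayne`, which this review could not confirm.

```python
from importlib import metadata

def package_origin(name: str) -> str:
    """Return a short provenance string for an installed distribution,
    or a warning if it is absent. Illustrative helper, not part of the skill."""
    try:
        dist = metadata.distribution(name)
        home = dist.metadata.get("Home-page", "unknown")
        return f"{name} {dist.version} from {home}"
    except metadata.PackageNotFoundError:
        return f"{name} is not installed; install only from a trusted source"
```

Checking the recorded homepage and version is no substitute for reviewing the code, but it catches the common case of an unexpectedly sourced or typosquatted install.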
