Pywayne LLM Chat Bot

v0.1.0

LLM chat interface using OpenAI-compatible APIs with streaming support and session management. Use when working with pywayne.llm.chat_bot module for creating...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign (view report)
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description (LLM chat interface) match the instructions: the examples show creating LLMChat/ChatManager instances with base_url, api_key, model, streaming, and session management. The metadata declares no unrelated required binaries or environment variables.
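The listing does not reproduce the examples themselves, but the session management it describes can be sketched with a minimal, self-contained structure: a system prompt plus a running message history in OpenAI chat format. All names below are illustrative, not the actual pywayne.llm.chat_bot API:

```python
from dataclasses import dataclass, field

@dataclass
class ChatSession:
    """Illustrative session container: one system prompt plus turn history."""
    system_prompt: str = "You are a helpful assistant."
    history: list = field(default_factory=list)  # dicts with "role"/"content"

    def add_user(self, content: str) -> None:
        self.history.append({"role": "user", "content": content})

    def add_assistant(self, content: str) -> None:
        self.history.append({"role": "assistant", "content": content})

    def messages(self) -> list:
        """Full message list in OpenAI chat format, system prompt first."""
        return [{"role": "system", "content": self.system_prompt}, *self.history]

session = ChatSession()
session.add_user("Hello")
session.add_assistant("Hi! How can I help?")
print(len(session.messages()))  # → 3 (system prompt + two turns)
```

A real client would serialize `messages()` into the request body sent to the configured endpoint; the point here is only that the skill's scope (history and prompt manipulation) maps onto plain data handling.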
Instruction Scope
The instructions are limited to using the pywayne.llm.chat_bot API and manipulating session history and system prompts; they do not instruct reading local files or unrelated credentials. One minor caveat: the documentation includes examples that set or update system prompts (e.g., "You are now a Python expert"), which can steer model behavior. Treat system prompts carefully, especially if they are sourced from untrusted input.
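One way to act on that caveat is to never pass free-form text into a system-prompt update, and instead select from a fixed allowlist. A minimal sketch (the keys and prompts here are hypothetical):

```python
# Fixed allowlist of system prompts the application is willing to use.
# Untrusted callers pick a key; they never supply the prompt text itself.
TRUSTED_SYSTEM_PROMPTS = {
    "default": "You are a helpful assistant.",
    "python": "You are now a Python expert.",
}

def select_system_prompt(key: str) -> str:
    """Resolve a system prompt from the allowlist, rejecting unknown keys."""
    if key not in TRUSTED_SYSTEM_PROMPTS:
        raise ValueError(f"unknown system prompt key: {key!r}")
    return TRUSTED_SYSTEM_PROMPTS[key]
```

This keeps the "dynamic system prompt update" feature usable while closing off the path where a remote service or end user injects arbitrary steering text.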
Install Mechanism
No install spec and no code files (instruction-only). Nothing will be written to disk by an install step in the skill package itself.
Credentials
The skill metadata lists no required environment variables and no primary credential, which is consistent with an instruction-only document. The examples do expect an api_key and base_url to be provided when instantiating classes. This is normal, but the skill does not itself request or declare storage or access for those secrets: you supply them at runtime, and you are responsible for ensuring they go only to trusted endpoints.
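A runtime pattern that matches this advice is to read the key from the environment and refuse to hand it to any base_url outside an allowlist. The environment-variable names and hosts below are illustrative, not something the skill defines:

```python
import os
from urllib.parse import urlparse

# Hypothetical allowlist: only hosts you control or trust.
TRUSTED_HOSTS = {"api.openai.com", "llm.internal.example"}

def load_llm_config() -> dict:
    """Read credentials at runtime and refuse untrusted endpoints."""
    api_key = os.environ.get("LLM_API_KEY")  # variable name is illustrative
    base_url = os.environ.get("LLM_BASE_URL", "https://api.openai.com/v1")
    if not api_key:
        raise RuntimeError("LLM_API_KEY is not set")
    host = urlparse(base_url).hostname
    if host not in TRUSTED_HOSTS:
        raise RuntimeError(f"refusing to send credentials to untrusted host: {host!r}")
    return {"api_key": api_key, "base_url": base_url}
```

Centralizing the check in one loader means a misconfigured or attacker-supplied base_url fails fast instead of silently exfiltrating the key.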
Persistence & Privilege
The "always" flag is false and default invocation settings apply. The skill does not request persistent or privileged platform presence.
Scan Findings in Context
[you-are-now] expected: the phrase appears in example system prompts (e.g., a dynamic system-prompt update). It is commonly used to influence model behavior and is expected in chat-SDK documentation, but it is also a known prompt-injection pattern. Exercise caution if system prompts are taken from untrusted sources or remote endpoints.
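If you do accept prompt text from outside, a lightweight screen for known steering phrases (like the one the scanner flagged) can serve as a first-pass check. The pattern list below is a minimal sketch, not a complete injection filter:

```python
import re

# Known steering phrases, matching the [you-are-now] scan finding above.
# Extend for your own threat model; this is not exhaustive.
STEERING_PATTERNS = [
    re.compile(r"\byou are now\b", re.IGNORECASE),
    re.compile(r"\bignore (all )?previous instructions\b", re.IGNORECASE),
]

def flag_prompt_injection(text: str) -> bool:
    """Return True if the text matches any known steering phrase."""
    return any(p.search(text) for p in STEERING_PATTERNS)
```

Phrase matching is easy to evade, so treat a hit as a signal for review or rejection, not the absence of a hit as proof of safety.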
Assessment
This SKILL.md reads like legitimate documentation for a client library that connects to OpenAI-compatible endpoints. Before using it:
1. Only provide API keys to base_url endpoints you control or trust, and verify that the upstream package (pywayne.llm.chat_bot) comes from a reputable source, since the skill lists no homepage or source link.
2. Treat dynamic system prompts as sensitive: do not accept system prompts from untrusted users or remote services, since they can alter model behavior.
3. Because the skill is instruction-only, installing it writes no code to disk, but importing and using the pywayne package in your environment still requires vetting that package separately.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97dfrhss7ydtcatzgfz46yn0n818vb9

