Pywayne LLM Chat Bot
LLM chat interface using OpenAI-compatible APIs with streaming support and session management. Use when working with pywayne.llm.chat_bot module for creating...
MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign (high confidence)
Purpose & Capability
Name/description (LLM chat interface) matches the instructions: examples show creating LLMChat/ChatManager with base_url, api_key, model, streaming and session management. There are no unrelated required binaries or env vars in metadata.
Instruction Scope
Instructions are limited to using the pywayne.llm.chat_bot API and manipulating session history and system prompts; they do not instruct reading local files or unrelated credentials. Minor caveat: the documentation includes examples that set/update system prompts (e.g., "You are now a Python expert"), which can be used to steer model behavior — treat system prompts carefully, especially if sourced from untrusted input.
Install Mechanism
No install spec and no code files (instruction-only). Nothing will be written to disk by an install step in the skill package itself.
Credentials
The skill metadata lists no required environment variables or primary credential, which is consistent with an instruction-only doc. The examples do expect an api_key and base_url to be provided when instantiating classes — this is normal, but the skill does not itself request or declare storage/access for those secrets, so you must supply them at runtime and ensure they go only to trusted endpoints.
Persistence & Privilege
The `always` flag is false and default invocation settings apply. The skill does not request persistent or privileged platform presence.
Scan Findings in Context
[you-are-now] expected: The phrase appears in example system prompts (e.g., dynamic system prompt update). This is commonly used to influence model behavior and is expected in chat SDK docs, but it is also a known prompt-injection pattern — exercise caution if system prompts are taken from untrusted sources or remote endpoints.
Assessment
This SKILL.md reads like legitimate documentation for a client library that connects to OpenAI-compatible endpoints. Before using it: 1) only provide API keys to base_url endpoints you control or trust, and verify that the upstream package (pywayne.llm.chat_bot) comes from a reputable source, because the skill has no homepage or source link; 2) treat dynamic system prompts as sensitive: don't accept system prompts from untrusted users or remote services, since they can alter model behavior; 3) because this skill is instruction-only, installing it does not write code to disk, but importing and using the pywayne package in your environment still requires you to vet that package separately.
Current version: v0.1.0
SKILL.md
Pywayne LLM Chat Bot
This module provides a synchronous LLM chat interface compatible with OpenAI APIs (including local servers like Ollama).
Quick Start
```python
from pywayne.llm.chat_bot import LLMChat

# Create chat instance
chat = LLMChat(
    base_url="https://api.example.com/v1",
    api_key="your_api_key",
    model="deepseek-chat"
)

# Single-turn conversation (non-streaming)
response = chat.ask("Hello, LLM!", stream=False)
print(response)

# Streaming response
for token in chat.ask("Explain recursion", stream=True):
    print(token, end='', flush=True)
```
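When you stream but still need the full reply afterwards (for logging or history of your own), the tokens can simply be joined. The helper below is a hypothetical convenience function, not part of pywayne; it works with any iterable of string tokens, such as the generator the examples above suggest `ask(..., stream=True)` returns, and is demonstrated here with a stand-in generator instead of a live API call:

```python
def collect_stream(token_iter):
    """Accumulate streamed string tokens into a single response string."""
    parts = []
    for token in token_iter:
        parts.append(token)
    return "".join(parts)

# Stand-in generator in place of chat.ask("Explain recursion", stream=True):
fake_stream = (t for t in ["Recursion ", "is ", "self-reference."])
print(collect_stream(fake_stream))  # Recursion is self-reference.
```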
Multi-turn Conversation
```python
# Use chat() for history tracking
for token in chat.chat("What is a class in Python?"):
    print(token, end='', flush=True)

# Continuation - remembers previous context
for token in chat.chat("How do I define a constructor?"):
    print(token, end='', flush=True)

# View history
for msg in chat.history:
    print(f"{msg['role']}: {msg['content']}")

# Clear history
chat.clear_history()
```
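Because `history` is a plain list of `{"role": ..., "content": ...}` dicts (as the loop above shows), a session snapshot can be serialized with the standard `json` module. Note that restoring a snapshot into a new `LLMChat` instance is not documented here, so this sketch only covers saving and reloading the raw messages:

```python
import json

# Example snapshot shaped like the history entries printed above
history = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "What is a class in Python?"},
    {"role": "assistant", "content": "A class is a blueprint for objects."},
]

# Round-trip through JSON; ensure_ascii=False preserves non-ASCII content
saved = json.dumps(history, ensure_ascii=False)
restored = json.loads(saved)
print(restored == history)  # True
```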
Configuration
LLMConfig Class
```python
from pywayne.llm.chat_bot import LLMConfig

config = LLMConfig(
    base_url="https://api.example.com/v1",
    api_key="your_api_key",
    model="deepseek-chat",
    temperature=0.7,
    max_tokens=8192,
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    system_prompt="You are a helpful assistant"
)

chat = LLMChat(**config.to_dict())
```
Dynamic System Prompt Update
```python
chat.update_system_prompt("You are now a Python expert, provide code examples")
```
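The security notes above warn that dynamic system prompts can steer model behavior, so avoid passing untrusted text straight into `update_system_prompt`. One simple defense is a fixed allowlist: untrusted input selects a key, never the prompt text itself. The helper below is a hypothetical sketch, not part of pywayne:

```python
# Hypothetical allowlist of vetted system prompts
ALLOWED_SYSTEM_PROMPTS = {
    "python_expert": "You are now a Python expert, provide code examples",
    "default": "You are a helpful assistant",
}

def safe_system_prompt(key):
    """Map an untrusted key to a vetted prompt instead of passing raw text."""
    try:
        return ALLOWED_SYSTEM_PROMPTS[key]
    except KeyError:
        raise ValueError(f"unknown system prompt key: {key!r}")

print(safe_system_prompt("default"))  # You are a helpful assistant
```

Usage would then look like `chat.update_system_prompt(safe_system_prompt(user_choice))`, so a malicious caller can only choose among prompts you wrote yourself.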
Managing Multiple Sessions
```python
from pywayne.llm.chat_bot import ChatManager

manager = ChatManager(
    base_url="https://api.example.com/v1",
    api_key="your_api_key",
    model="deepseek-chat",
    timeout=300  # Session timeout in seconds
)

# Get or create chat instance (maintains per-session history)
chat1 = manager.get_chat("user1")
chat2 = manager.get_chat("user2")

# Sessions are independent
chat1.chat("Hello from user1")
chat2.chat("Hello from user2")

# Remove a session
manager.remove_chat("user1")
```
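The `timeout` parameter suggests that idle sessions expire after a period of inactivity. ChatManager's internals are not shown here, so the following is only a minimal sketch of how timeout-based eviction *could* work, assuming semantics of "evict sessions idle longer than `timeout` seconds"; it is not pywayne's actual implementation:

```python
import time

class SessionTable:
    """Toy session tracker illustrating idle-timeout eviction."""

    def __init__(self, timeout):
        self.timeout = timeout
        self._last_seen = {}  # chat_id -> last access timestamp

    def touch(self, chat_id, now=None):
        """Record activity for a session (now overridable for testing)."""
        self._last_seen[chat_id] = time.time() if now is None else now

    def evict_idle(self, now=None):
        """Drop sessions idle longer than the timeout; return evicted IDs."""
        now = time.time() if now is None else now
        stale = [cid for cid, t in self._last_seen.items()
                 if now - t > self.timeout]
        for cid in stale:
            del self._last_seen[cid]
        return stale

table = SessionTable(timeout=300)
table.touch("user1", now=0)
table.touch("user2", now=250)
print(table.evict_idle(now=301))  # ['user1']
```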
Custom Configuration per Session
```python
custom_config = LLMConfig(
    base_url=base_url,
    api_key=api_key,
    model="deepseek-chat",
    temperature=0.9,
    system_prompt="You are a creative writer"
)

chat3 = manager.get_chat("user3", config=custom_config)
```
API Reference
LLMChat
| Method | Description |
|---|---|
| `ask(prompt, stream=False)` | Single-turn conversation without history |
| `chat(prompt, stream=True)` | Multi-turn conversation with history tracking |
| `update_system_prompt(prompt)` | Update system prompt in place |
| `clear_history()` | Clear conversation history (keeps system prompt) |
| `history` (property) | Get a copy of the current conversation history |
ChatManager
| Method | Description |
|---|---|
| `get_chat(chat_id, stream=True, config=None)` | Get or create a chat instance by ID |
| `remove_chat(chat_id)` | Remove a chat session |
Parameters
| Parameter | Default | Description |
|---|---|---|
| `base_url` | required | API base URL (e.g., `https://api.deepseek.com/v1`) |
| `api_key` | required | API authentication key |
| `model` | `"deepseek-chat"` | Model name |
| `temperature` | 0.7 | Controls randomness (0-2) |
| `max_tokens` | 2048/8192 | Maximum output tokens |
| `top_p` | 1.0 | Nucleus sampling (0-1) |
| `frequency_penalty` | 0.0 | Reduces repetition (-2 to 2) |
| `presence_penalty` | 0.0 | Encourages new topics (-2 to 2) |
| `system_prompt` | `"你是一个严谨的助手"` ("You are a rigorous assistant") | System message |
| `timeout` | `inf` | Session timeout in seconds (ChatManager only) |
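Since the library targets OpenAI-compatible endpoints (including local servers like Ollama), the parameters above map naturally onto a chat-completions request body. The field names below follow the OpenAI API that such servers emulate; pywayne presumably builds something similar internally, but this is an illustration, not its actual code:

```python
# Sketch: mapping the documented parameters onto an OpenAI-style
# chat-completions request body (illustrative, not pywayne internals)
def build_payload(model, messages, temperature=0.7, max_tokens=8192,
                  top_p=1.0, frequency_penalty=0.0, presence_penalty=0.0,
                  stream=False):
    return {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "top_p": top_p,
        "frequency_penalty": frequency_penalty,
        "presence_penalty": presence_penalty,
        "stream": stream,
    }

payload = build_payload(
    model="deepseek-chat",
    messages=[{"role": "system", "content": "You are a helpful assistant"},
              {"role": "user", "content": "Hello, LLM!"}],
)
print(payload["model"])  # deepseek-chat
```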