Ask Council

v1.0.4

Ask LLM Council a question directly from Telegram/chat — get the chairman's synthesized answer without opening the web UI. Quick, headless access to multi-mo...

MIT-0
Security Scan
VirusTotal
Benign
View report →
OpenClaw
Benign
medium confidence
Purpose & Capability
The skill's name and description (quick, headless access to an LLM Council) match the behavior in ask-council.sh: it creates a conversation, starts a run, polls until completion, and prints the chairman's answer. However, the script depends on curl and python3 (for JSON parsing) and expects network access to localhost ports 8001 (API) and 5173 (web UI), yet the skill declares no required binaries or environment variables. This mismatch is operationally important, though not malicious.
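The create/run/poll flow described above can be sketched in shell. The polling helper below is a hypothetical reconstruction for illustration: only API_BASE and the overall flow come from the review; the endpoint comment and variable names are assumptions, not taken from the actual script.

```shell
#!/bin/sh
# Hypothetical sketch of ask-council.sh's flow (endpoint paths are assumed).
API_BASE="${API_BASE:-http://127.0.0.1:8001}"

# Poll a status command until it prints "complete", up to MAX_TRIES attempts.
poll_until_complete() {
  tries=0
  while [ "$tries" -lt "${MAX_TRIES:-60}" ]; do
    # e.g.: curl -s "$API_BASE/runs/$RUN_ID" | python3 -c '...parse status...'
    status=$("$@")
    [ "$status" = "complete" ] && return 0
    tries=$((tries + 1))
    sleep "${POLL_INTERVAL:-2}"
  done
  return 1
}
```

Passing the status check as a command (`"$@"`) keeps the loop testable against a stub without a running backend.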
Instruction Scope
SKILL.md instructs the agent to run the included shell script with the user's question. The script only interacts with a local backend (API_BASE=http://127.0.0.1:8001), polls for status, and prints a local web-UI link; it does not reach out to arbitrary external hosts or attempt to read unrelated files or secrets.
Install Mechanism
This is an instruction-only skill with a bundled script (no install spec). That is low risk, but because the script will be executed on the host, users should confirm the script's contents (which were provided) and be aware there is no automated package installation. The SKILL.md references an external repo and a /install-llm-council step, but the skill package does not provide or install the backend.
Credentials
The skill requests no environment variables or external credentials and the script does not attempt to access secrets or unrelated config paths. Its only system interactions are network calls to localhost and a call to hostname -I to display a local IP.
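The "hostname -I" call mentioned above is how the script reportedly derives a LAN-reachable web-UI link. A minimal sketch of that pattern (the fallback to 127.0.0.1 is my addition for hosts where hostname -I is unavailable; the URL shape is an assumption based on the port noted in this review):

```shell
#!/bin/sh
# "hostname -I" (GNU/Linux) prints the host's local IP addresses; take the
# first one and build the web-UI URL on port 5173.
LOCAL_IP=$(hostname -I 2>/dev/null | awk '{print $1}')
echo "Web UI: http://${LOCAL_IP:-127.0.0.1}:5173/"
```

Note that hostname -I reads local interface configuration only; it makes no outbound network requests.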
Persistence & Privilege
The skill's "always" flag is false, and it does not request persistent system privileges or modify other skills or system-wide settings. It runs a single script per invocation and creates ephemeral conversations on the backend.
Assessment
This skill is consistent with its stated purpose and appears low-risk, but check the following before installing:

1) The script expects curl and python3 to be available; the skill metadata does not declare these, so ensure your environment has them.
2) The script talks to a backend on localhost (127.0.0.1:8001) and constructs a local web-UI URL (port 5173). Only use this if you trust and run the LLM Council backend on your machine or network.
3) The package does not install the backend; SKILL.md suggests running /install-llm-council separately. Confirm where that installer comes from.
4) The included script is small and readable (no obfuscated code), but because it will execute locally, review it yourself or run it in a safe environment if you are unsure.

If you need higher assurance, ask the author for an explicit list of required binaries and a documented backend install procedure; resolving those would raise confidence to high.
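The dependency and backend checks in points 1 and 2 can be run mechanically before installing. A suggested sketch (not part of the skill; the API_BASE default mirrors the address in this review):

```shell
#!/bin/sh
# Pre-install sanity check: confirm the script's undeclared dependencies
# exist and the backend is actually listening.
check_env() {
  missing=""
  for bin in curl python3; do
    command -v "$bin" >/dev/null 2>&1 || missing="$missing $bin"
  done
  [ -z "$missing" ] && echo "deps ok" || echo "missing:$missing"

  # -m 2 caps the attempt at 2 seconds; adjust API_BASE if the
  # LLM Council backend runs elsewhere.
  if curl -fsS -m 2 "${API_BASE:-http://127.0.0.1:8001}/" >/dev/null 2>&1; then
    echo "backend reachable"
  else
    echo "backend NOT reachable at ${API_BASE:-http://127.0.0.1:8001}"
  fi
}
check_env
```

If either line reports a problem, resolve it before letting an agent invoke the skill.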

Like a lobster shell, security has layers — review code before you run it.

latest · vk97bywcdeq8bq50grr0np91b1d81mbzd

License

MIT-0
Free to use, modify, and redistribute. No attribution required.
