Historical Guide
v1.1.8 museum-narrator upgrade: summon historical figures such as Li Bai, Su Shi, and Confucius to narrate artifacts. Supports natural conversation like "have Li Bai explain this" or "switch to Su Shi" for an immersive look at the artifacts themselves.
⭐ 0 · 87 · 0 current · 0 all-time
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Verdict: Benign (medium confidence)
Purpose & Capability
The skill claims to produce persona-driven museum narration, and the code implements that: it loads persona JSONs, calls an LLM endpoint (API_KEY/API_BASE/MODEL_NAME) to generate personas and explanations, and maintains session state. The required model API credentials and the optional scripts/config.json are proportionate to the stated purpose. Note: the registry metadata claims no required env vars and shows a malformed required-config path ([object Object]), which is inconsistent with SKILL.md and the code.
Instruction Scope
The SKILL.md instructs the agent to call local Python scripts (tour_guide.py, persona_generator.py, etc.) and to store generated personas in references/*.json. The runtime behavior (calling the model API, saving generated persona JSON files, using subprocess to invoke the generator if a persona is missing) matches the instructions. This means the skill will send user prompts to the configured model endpoint and persist generated JSON files locally — expected for this feature, but worth noting.
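The request flow the scan describes (a persona system prompt plus the user's question sent to the configured endpoint) might look roughly like the sketch below. The endpoint shape (OpenAI-style chat completions), the function names, and the persona "system_prompt" field are assumptions for illustration, not the skill's actual code:

```python
import os


def build_payload(persona: dict, artifact: str, model: str) -> dict:
    """Compose a chat request: persona system prompt plus the user's question.
    The persona dict's "system_prompt" field is a hypothetical shape."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": persona.get("system_prompt", "")},
            {"role": "user", "content": f"Please explain this artifact: {artifact}"},
        ],
    }


def explain(persona: dict, artifact: str) -> str:
    """Send the prompt to the configured endpoint (network side effect)."""
    import requests  # the skill's only declared third-party dependency

    resp = requests.post(
        f"{os.environ['API_BASE']}/chat/completions",  # assumes an OpenAI-style API
        headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
        json=build_payload(persona, artifact, os.environ["MODEL_NAME"]),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

Anything placed in the payload, including the persona text and the user's question, leaves the machine and reaches the configured provider.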
Install Mechanism
There is no install spec; the skill is instruction/code-only and only requires the requests Python package (pip install requests). No remote downloads or archive extraction are performed, which keeps install risk low.
Credentials
The skill requests only model-related configuration (API_KEY, API_BASE, MODEL_NAME) via environment variables or scripts/config.json. Those are appropriate for a skill that makes LLM calls. No unrelated credentials, secrets, or system tokens are requested.
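A minimal sketch of that configuration lookup, assuming environment variables take precedence over scripts/config.json (the actual precedence in scripts/api_config.py may differ):

```python
import json
import os
from pathlib import Path

# The three settings SKILL.md expects, via env vars or scripts/config.json.
REQUIRED = ("API_KEY", "API_BASE", "MODEL_NAME")


def load_api_config(config_path: str = "scripts/config.json") -> dict:
    """Resolve model API settings: env vars win, config file is the fallback."""
    file_cfg = {}
    path = Path(config_path)
    if path.exists():
        file_cfg = json.loads(path.read_text(encoding="utf-8"))
    cfg = {key: os.environ.get(key, file_cfg.get(key)) for key in REQUIRED}
    missing = [key for key in REQUIRED if not cfg[key]]
    if missing:
        raise RuntimeError(f"Missing model API settings: {', '.join(missing)}")
    return cfg
```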
Persistence & Privilege
always: false (the skill is not forced always-on). The skill writes generated persona JSONs into references/ and may update local files (saving personas). It also uses subprocess to run persona_generator.py when needed. These behaviors are expected for persona generation, but they mean the skill persists artifacts on disk and can create new files in its directory; review those files if local persistence or provenance matters to you.
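The lazy-generation pattern the scan describes (load a persona file, spawn the generator if it is missing) could look like this; the generator's command-line interface shown here is hypothetical:

```python
import json
import subprocess
import sys
from pathlib import Path

# Personas persist here, as the scan notes.
REFERENCES = Path("references")


def ensure_persona(name: str) -> dict:
    """Load references/<name>.json; if absent, invoke the generator script.
    The subprocess arguments are an assumption, not the skill's actual CLI."""
    path = REFERENCES / f"{name}.json"
    if not path.exists():
        subprocess.run(
            [sys.executable, "scripts/persona_generator.py", name],
            check=True,  # fail loudly if generation does not succeed
        )
    return json.loads(path.read_text(encoding="utf-8"))
```

Note the two side effects a reviewer cares about: a child process is spawned, and a new JSON file appears under references/.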
Assessment
This skill appears to do what it says: it calls an LLM to produce persona-style museum commentary and stores generated persona JSON files under references/. Before installing or running it:
1) Ensure API_BASE points to a trusted model endpoint and provide API_KEY only to services you trust; anything sent to the model provider may be logged or retained by that provider.
2) Be aware that the skill saves generated persona files to the local references/ directory (persistence); inspect those files if you need to remove persisted content.
3) The registry metadata incorrectly shows no required env vars and a malformed config path; rely instead on SKILL.md and scripts/api_config.py, which expect API_KEY, API_BASE, and MODEL_NAME or scripts/config.json.
4) Running the code executes subprocesses and makes network calls (via requests); run it in an environment you control and audit the scripts if you require higher assurance.
5) If you plan to let the skill run autonomously, note that it will invoke the configured model endpoint and may generate and store personas; consider limiting its privileges and using a least-privilege API key.
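For a manual audit, a rough helper like the one below can flag lines in the skill's scripts that perform network calls, subprocess execution, or file writes. The regexes are heuristic (they will miss indirect or aliased calls) and are an illustration, not a substitute for reading the code:

```python
import re
from pathlib import Path

# Heuristic patterns for the three behaviors the scan calls out.
PATTERNS = {
    "network": re.compile(r"\brequests\.(get|post|put|request)\b"),
    "subprocess": re.compile(r"\bsubprocess\.\w+"),
    "file write": re.compile(r"\bopen\([^)]*['\"][wa]"),
}


def audit(root: str = "scripts") -> list:
    """Return (file, line number, label, line text) for each flagged line."""
    findings = []
    for script in Path(root).glob("**/*.py"):
        for lineno, line in enumerate(script.read_text(encoding="utf-8").splitlines(), 1):
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(script), lineno, label, line.strip()))
    return findings
```

Running `audit()` from the skill's root prints nothing by itself; iterate over the returned list to review each flagged line in context.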
latest · vk979916442een4vcvrkb71pddn84cxk9
Runtime requirements
🎭 Clawdis
Config: [object Object]
