meeting-minutes-qa-tts
Read meeting minutes, produce a short summary with the current conversation model, save the meeting text and summary into local memory, answer follow-up ques...
MIT-0 · Free to use, modify, and redistribute. No attribution required.
⭐ 0 · 17 · 0 current installs · 0 all-time installs
Security Scan
OpenClaw
Suspicious (medium confidence)
Purpose & Capability
Name and description align with the code: reading meeting text, saving local JSON memory, and calling a TTS API. However, the registry declares no required environment variables or primary credential, while the code and SKILL.md clearly require a SenseAudio API key (SENSEAUDIO_API_KEY). This metadata omission is an inconsistency.
Instruction Scope
SKILL.md and scripts limit actions to reading a provided local file or URL, saving a local JSON memory file, and calling SenseAudio for TTS. The instructions do not ask for unrelated system data. They do instruct the agent to ask the user for an API key if missing and to prompt for output paths — both consistent with the intended workflow.
Install Mechanism
No install spec is present; this is instruction-plus-scripts only. All code is included in the repository (no remote downloads or installers). This is low-risk from an installation/execution-spec standpoint.
Credentials
The runtime requires SENSEAUDIO_API_KEY (and optionally SENSEAUDIO_DEFAULT_VOICE / SENSEAUDIO_TTS_MAX_CHARS) but the registry lists no required env vars. Requesting a network-accessible API key for the TTS provider is reasonable, but the metadata mismatch is confusing and could cause users to share a credential into chat instead of setting it locally. The skill also writes meeting text to local JSON (expected for functionality) — users should be aware this persists possibly sensitive text on disk.
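A minimal sketch of how a script can resolve the credential locally rather than accepting it in chat (the variable name matches SKILL.md; the exit-on-missing behavior is an assumption about how the scripts should fail, not a claim about how they actually do):

```python
import os
import sys

def resolve_api_key() -> str:
    """Read the SenseAudio key from the environment; never accept it via chat."""
    key = os.environ.get("SENSEAUDIO_API_KEY")
    if not key:
        # Fail fast with guidance instead of prompting the user to paste a secret.
        sys.exit(
            "SENSEAUDIO_API_KEY is not set. Export it in your shell "
            "(e.g. export SENSEAUDIO_API_KEY=...) instead of pasting it into a prompt."
        )
    return key
```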
Persistence & Privilege
always is false and the skill only writes its own local memory file and audio outputs under the skill workspace by default. It does read arbitrary user-supplied file paths (needed to read meeting notes). No system-wide changes or other skills' configs are modified.
Scan Findings in Context
[unicode-control-chars] unexpected: The pre-scan flagged unicode-control-chars in SKILL.md / included files. The repository contains BOM/garbled (mojibake) characters in meeting text and memory files which can trigger this rule. This may be an innocuous artifact of encodings, but prompt-injection detectors flagged it; it's worth a quick human check of SKILL.md and memory files for hidden control characters before trusting prompts or copy-pasting content.
What to consider before installing
Key points before installing or using this skill:
- The skill requires a SenseAudio API key at runtime (SENSEAUDIO_API_KEY) even though the registry metadata does not list it. Do not paste your API key into a chat prompt to satisfy the agent; set SENSEAUDIO_API_KEY as a local environment variable instead.
- The skill reads arbitrary local files or URLs you provide and writes the meeting text and summary into a local JSON (memory/latest_meeting.json by default). Do not point it at files that contain credentials or other sensitive data you don't want persisted to disk.
- The TTS call goes to https://api.senseaudio.cn/v1/t2a_v2. If you have any concerns about that provider, review network access policies or use a different TTS provider by modifying the scripts.
- The repo is instruction-only (no install downloads), but contains scripts that will write files under the skill workspace or user-specified output paths. Verify and sanitize output paths before generation to avoid unintended overwrites.
- The prompt-injection/unicode-control warning is most likely due to BOMs or mojibake in provided meeting files, but you should open SKILL.md and the included sample memory file to verify there are no invisible control characters or unexpected instructions embedded.
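One way to verify a user-supplied output path before generation, as suggested above (a sketch under assumed requirements; the skill's actual scripts may validate differently or not at all):

```python
from pathlib import Path

def safe_output_path(user_path: str) -> Path:
    """Resolve a user-supplied mp3 path and refuse silent overwrites."""
    out = Path(user_path).expanduser().resolve()
    if out.suffix.lower() != ".mp3":
        raise ValueError(f"expected an .mp3 output path, got: {out}")
    if out.exists():
        # Make the user opt in to overwriting instead of clobbering a file.
        raise FileExistsError(f"refusing to overwrite existing file: {out}")
    out.parent.mkdir(parents=True, exist_ok=True)
    return out
```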
If you want to proceed: set the API key via environment variable, test with a non-sensitive sample meeting file, and review where the audio and memory files are written. If the metadata omission is a blocker, ask the skill author or registry to declare SENSEAUDIO_API_KEY as a required environment variable so the mismatch is explicit.
Like a lobster shell, security has layers — review code before you run it.
Current versionv1.0.0
Download zip (latest)
License
MIT-0
Free to use, modify, and redistribute. No attribution required.
SKILL.md
Meeting Minutes QA TTS
Use this skill to read one meeting note, summarize it, save it to local memory, answer follow-up questions about that meeting, and convert both the initial summary and later answers into local audio files.
Trigger Rules
Use this skill when:
- The user wants to read one meeting note, save it for follow-up questions, and hear the summary as audio.
- The user wants both a text answer and an audio answer for a meeting-related question.
- The user wants the meeting note remembered for follow-up questions in the same workflow.
Do not use this skill when:
- The user only wants a one-shot summary audio without later Q and A.
- The user wants general knowledge unrelated to the meeting text.
- The user wants speech recognition from audio or video.
Workflow
- Look for one of these inputs in the conversation:
- direct meeting text
- a local text file path
- a readable URL
- If none is available, ask for the meeting text or a local file path or URL first.
- If no SENSEAUDIO_API_KEY is configured in the environment, ask the user for a SenseAudio API key and point them to https://senseaudio.cn/docs/api-key.
- Before generating the initial summary audio, ask the user where the mp3 file should be saved.
- Read the meeting text from the provided source.
- Summarize the meeting text in the current conversation model with a short spoken-style Chinese summary.
- Use scripts/create_meeting_summary_audio_session.py to save the source location, meeting text, and summary into local memory and generate the summary mp3 at the requested path.
- When the user asks a follow-up question, answer using the saved meeting text and summary in the current conversation model.
- Output the text answer first in OpenClaw.
- Before generating the answer audio, ask the user where the answer mp3 file should be saved.
- Use scripts/create_meeting_answer_audio.py to convert the final answer text into an mp3.
- After the text answer, explicitly tell the user where the generated audio file was saved.
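The TTS scripts themselves are not shown on this page. A hedged sketch of what a call to the endpoint named in the review (https://api.senseaudio.cn/v1/t2a_v2) might look like — the payload field names text, voice, and format are assumptions, not the provider's documented schema:

```python
import json
import os
import urllib.request

SENSEAUDIO_TTS_URL = "https://api.senseaudio.cn/v1/t2a_v2"

def build_tts_request(text, voice=None) -> urllib.request.Request:
    """Build the TTS POST request; payload field names are assumptions."""
    payload = {
        "text": text,
        # SENSEAUDIO_DEFAULT_VOICE is the optional env var the review mentions.
        "voice": voice or os.environ.get("SENSEAUDIO_DEFAULT_VOICE", "default"),
        "format": "mp3",
    }
    return urllib.request.Request(
        SENSEAUDIO_TTS_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['SENSEAUDIO_API_KEY']}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the request (urllib.request.urlopen) and writing the mp3 bytes to the user-approved path would follow; it is omitted here since the real response shape is unknown.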
Rules
- Keep the meeting memory local to this skill directory unless the user asks for a different path.
- Prefer an in-memory or local-JSON flow; do not require a database.
- Output the text answer first, then the generated audio file location.
- Ask for an output path before generating any mp3.
- Use the current conversation model for summarization and question answering.
- Use SenseAudio only for TTS.
- Accept a user-provided output path and write the mp3 there when requested.
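The local-JSON memory flow described by these rules could be sketched as follows. The memory/latest_meeting.json default comes from the review section above; the record fields are assumptions about what scripts/meeting_memory.py stores:

```python
import json
from pathlib import Path

DEFAULT_MEMORY = Path("memory/latest_meeting.json")

def save_meeting_memory(source, meeting_text, summary, memory_path=DEFAULT_MEMORY) -> Path:
    """Persist the meeting record as local JSON; no database required."""
    memory_path = Path(memory_path)
    memory_path.parent.mkdir(parents=True, exist_ok=True)
    record = {"source": source, "meeting_text": meeting_text, "summary": summary}
    # ensure_ascii=False keeps Chinese meeting text readable on disk.
    memory_path.write_text(json.dumps(record, ensure_ascii=False, indent=2), encoding="utf-8")
    return memory_path

def load_meeting_memory(memory_path=DEFAULT_MEMORY) -> dict:
    """Reload the saved meeting for follow-up Q&A."""
    return json.loads(Path(memory_path).read_text(encoding="utf-8"))
```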
Resources
- Memory helper: scripts/meeting_memory.py relative to this skill directory
- Memory saver: scripts/save_meeting_memory.py relative to this skill directory
- Summary session creator: scripts/create_meeting_summary_audio_session.py relative to this skill directory
- Answer audio creator: scripts/create_meeting_answer_audio.py relative to this skill directory
- Answer-to-audio script: scripts/answer_meeting_question_audio.py relative to this skill directory
- Summary-to-TTS script: scripts/generate_summary_audio.py relative to this skill directory
- Product brief: PRD.md relative to this skill directory
Files
22 total
