unisound-followup-record

WarnAudited by ClawScan on May 15, 2026.

Overview

The skill’s medical-record structuring purpose is coherent, but it handles sensitive health records through an LLM while its privacy and no-persistence promises are not fully supported by the provided artifacts.

Use this skill only if you trust the publisher, the hivoice model endpoint, and the shared preprocessing dependency. Manually remove names, IDs, phone numbers, addresses, and other identifiers before use; protect the app key; avoid save-prepared/output files unless you can secure them; and verify the generated structured fields against the original medical record.

Findings (6)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

What this means

Identifiable health information could be sent to the model provider if the input record has not already been manually de-identified.

Why it was flagged

The visible code embeds the record text into prompts and posts those prompts to the configured chat-completions endpoint. For medical records, this is sensitive data handling, and the provided code path does not show de-identification before the API call.

Skill content
DEFAULT_LLM_BASE = "https://maas-api.hivoice.cn/v1"; chunk_prompt = ... .format(record); "messages": [{"role": "user", "content": prompt}]; response = _http_post(url, payload, headers, timeout=timeout)
Recommendation

Only submit records you are authorized to process, remove identifiers before use, and require the publisher to implement and document explicit redaction plus provider retention boundaries.

What this means

A user may overtrust the privacy statement and provide raw medical records or save sensitive intermediate text without realizing the practical data exposure.

Why it was flagged

The privacy section makes strong assurances about de-identification and no local persistence, while the same documentation exposes an option to save preprocessed text and the code artifacts show remote LLM use.

Skill content
“严格脱敏:在发送至任何模型/接口前...脱敏”; “不做本地持久化:不将用户输入与中间结果写入本地...”; “--save-prepared:可选:保存预处理后的文本”
Recommendation

Clarify that remote processing occurs, mark save-prepared/output files as potentially sensitive, and avoid claiming strict de-identification unless it is implemented and verifiable.

What this means

If the app key is exposed or sent to an untrusted base URL, someone else could use the user’s model access.

Why it was flagged

The skill requires an API credential for the medical model. This is expected for the integration, but users should treat the key as sensitive.

Skill content
--appkey STRING:必填。内部医疗大模型鉴权 key,使用 Bearer 方式认证。
Recommendation

Use a dedicated, least-privileged key; pass it via a secure secret mechanism where possible; and keep the default/base URL trusted.

What this means

The skill’s file parsing behavior depends on whatever shared helper is present locally, which may not have been reviewed with this skill.

Why it was flagged

Runtime behavior depends on a shared preprocessing module that is not included in this package’s file manifest, although it is disclosed in SKILL.md.

Skill content
PREPROCESS_DIR = SKILLS_ROOT / "_shared" / "doc-preprocess" / "scripts"; sys.path.insert(0, str(PREPROCESS_DIR)); from preprocess import ...
Recommendation

Install the shared preprocessing dependency only from a trusted source, pin or version it, and review it before using the skill on sensitive files.

What this means

Users may have difficulty confirming which publisher or package identity they are trusting.

Why it was flagged

The embedded metadata identifies a different owner/slug than the registry information shown for this review, creating a provenance inconsistency.

Skill content
"ownerId": "kn76wejkeqxfc03j0rfxp2jaj982m7aa", "slug": "doctor.emr-gen.followup-record"
Recommendation

The publisher should align registry and embedded metadata; users should verify the source before processing medical data.

What this means

A malicious or malformed record could cause incorrect structured fields to be returned.

Why it was flagged

The input record is placed directly into the LLM prompt. If a document contains adversarial instructions, the model may treat them as instructions rather than data.

Skill content
chunk_prompt = """给定下面的病历文本... 输入:\n{}\n\n输出:""".strip().format(record)
Recommendation

Treat outputs as draft extractions, validate them against the source record, and harden prompts by clearly delimiting input as untrusted data.