unisound-diagnosis-review

Verdict: Warn. Audited by ClawScan on May 15, 2026.

Overview

This medical-record audit skill is mostly purpose-aligned, but it needs review: it embeds internal database credentials without declaring them, and it sends sensitive case text to a model service by default.

Before installing, confirm no real database secret is packaged, move database and model credentials to a managed secret mechanism, and verify the shared preprocessing code. De-identify medical records and use `--no-llm`/`use_llm=false` when records must remain local. Avoid `--save-prepared` or output-file options unless local storage of record-derived content is approved.

Findings (4)

This is an artifact-based, informational review of SKILL.md, metadata, install specs, static-scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Hardcoded internal database credentials

What this means

A bundled or hardcoded database credential could grant access to an internal rules database outside the user's expected permission boundary and would be difficult to rotate or scope safely.

Why it was flagged

The source embeds internal database connection details and a password field, and the static scan separately flagged a hardcoded secret in this file. Even if the visible value is a placeholder, the design relies on hardcoded database credentials rather than declared, scoped, rotatable configuration.

Skill content
HARDCODED_DATABASE = DatabaseSettings(
    host="10.10.20.15",
    port=15432,
    name="medical_coding_auditdb",
    user="audituser",
    password="REPLACE_WITH_STRONG_PASSWORD",
)
Recommendation

Remove hardcoded database credentials, use environment variables or a secret manager, declare the required credential in metadata, and ensure the database account is read-only and rotated if any real secret was packaged.
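The recommendation above can be sketched as follows. This is a minimal illustration, reusing the `DatabaseSettings` field names from the quoted skill content; the `AUDIT_DB_*` environment variable names are assumptions, not part of the skill.

```python
import os
from dataclasses import dataclass


@dataclass
class DatabaseSettings:
    host: str
    port: int
    name: str
    user: str
    password: str


def load_database_settings() -> DatabaseSettings:
    """Build DatabaseSettings from environment variables instead of
    hardcoded values. Raises KeyError if a required variable is unset,
    so a missing secret fails fast rather than falling back to a
    packaged default."""
    return DatabaseSettings(
        host=os.environ["AUDIT_DB_HOST"],
        port=int(os.environ.get("AUDIT_DB_PORT", "5432")),
        name=os.environ["AUDIT_DB_NAME"],
        user=os.environ["AUDIT_DB_USER"],
        password=os.environ["AUDIT_DB_PASSWORD"],
    )
```

In a production deployment the same shape works with a secret manager in place of `os.environ`; the key point is that no credential value lives in the packaged source.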

Finding 2: Medical record text sent to a model provider by default

What this means

Medical record text may be transmitted to the configured model provider unless LLM use is disabled, so patient data must be de-identified and the provider boundary must be acceptable.

Why it was flagged

The skill explicitly sends the review prompt to an internal medical model by default, using a caller-provided bearer appkey, with an offline fallback available.

Skill content
By default, the internal medical LLM is used to generate the review explanation; the authentication `appkey` must be supplied by the caller. For a fully offline rule-based fallback, pass `use_llm=false`.
Recommendation

Use `use_llm=false` or `--no-llm` for records that cannot leave the local environment, and de-identify records before enabling the LLM path.

Finding 3: Preprocessing depends on shared code outside the package

What this means

Actual handling of PDF/DOC/XLS and other non-JSON files depends on external shared code not visible in these artifacts.

Why it was flagged

The CLI imports a shared preprocessing module outside the supplied file manifest. This is aligned with document ingestion, but that helper code is not included in the reviewed package.

Skill content
PREPROCESS_DIR = SKILLS_ROOT / "_shared" / "doc-preprocess" / "scripts"
...
from preprocess import PreprocessError, SUPPORTED_FILE_TYPES, detect_input_type, load_input_artifact
Recommendation

Verify the shared preprocessing package and dependency versions in the target environment before processing sensitive records.
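One concrete way to verify the shared code is digest pinning before import. This is a sketch under stated assumptions: the operator has recorded known-good SHA-256 digests for the scripts in the `doc-preprocess` directory quoted above; the pin dictionary itself is hypothetical and must come from a trusted record, not from the package being checked.

```python
import hashlib
from pathlib import Path


def verify_shared_scripts(preprocess_dir: Path,
                          expected: dict[str, str]) -> list[str]:
    """Compare each pinned script against its recorded SHA-256 digest.
    Returns the names of scripts that are missing or whose digest does
    not match; an empty list means the shared code is as recorded."""
    mismatches = []
    for name, pinned in expected.items():
        path = preprocess_dir / name
        if not path.is_file():
            mismatches.append(name)
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest != pinned:
            mismatches.append(name)
    return mismatches
```

Running this check (and aborting on any mismatch) before the CLI's `from preprocess import ...` line closes the gap between the reviewed artifacts and the code that actually handles the records.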

Finding 4: Save flags contradict the no-persistence statement

What this means

If users enable these options, audit results or prepared medical record text can be written to local disk despite the broad no-persistence statement.

Why it was flagged

The privacy section says the skill does not persist local data, while later options document user-directed local saves of output JSON and prepared medical text. The save behavior is disclosed, but the privacy statement should be read with that exception.

Skill content
No local persistence ... `--output-json PATH`: optional; saves the response JSON ... `--save-prepared`: optional; saves the preprocessed medical record text
Recommendation

Clarify the documentation to say no local persistence occurs unless output or save flags are used, and avoid those flags for sensitive records unless the destination is approved.
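Where the save flags must remain available, a destination allowlist can enforce the approval step. The sketch below is an assumption-laden illustration: the approved root path is a stand-in for site policy, and the skill itself exposes no such check.

```python
from pathlib import Path

# Assumed site policy: record-derived output may only land under
# directories the data owner has approved. The path is illustrative.
APPROVED_OUTPUT_ROOTS = [Path("/srv/audit/approved")]


def check_output_destination(path: Path) -> bool:
    """Return True only when a user-directed save target (for
    --output-json or --save-prepared) resolves to a location under an
    approved root, preserving the no-persistence default elsewhere."""
    resolved = path.resolve()
    return any(resolved.is_relative_to(root)
               for root in APPROVED_OUTPUT_ROOTS)
```

A wrapper can call this before honoring either flag and refuse the run otherwise, which makes the documented "no local persistence unless explicitly requested" behavior checkable rather than purely declarative.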