Skill v1.0.0
ClawScan security
Data Labeling Studio · ClawHub's context-aware review of the artifact, metadata, and declared behavior.
Scanner verdict
Suspicious · Apr 17, 2026, 6:52 AM
- Verdict: suspicious
- Confidence: medium
- Model: gpt-5-mini
- Summary: The skill's description promises a full multi‑modal labeling studio, but the provided files and runtime instructions are inconsistent with that claim: several referenced modules and scripts are missing, and the declared dependencies are heavier than the included code needs. The package appears incomplete or misleading and warrants caution before installation.
- Guidance: This package looks internally inconsistent rather than blatantly malicious: it promises a full multi‑modal 'labeling_studio' with many helper scripts and model integrations, but the archive contains only an image annotator script, a quality checker, example/test mocks, and a requirements.txt. Before installing or running anything:
  - Don't pip install the requirements into your main environment; use a disposable virtualenv or container to avoid pulling heavy packages unnecessarily.
  - Inspect or run the included scripts locally to confirm behavior. The image annotator uses mocked/simulated (random) annotations, not real models, and 'active learning' appears not to be implemented here.
  - Be cautious that the examples import a module ('labeling_studio') that isn't included; this may mean the published bundle is incomplete or the real implementation is fetched from elsewhere (ask the author or check the source). A runtime downloader would be higher risk, but none is present in these files; a quick scan for fetch patterns is sketched below.
  - If you need multi‑modal capabilities, request the missing source files or a packaged release (e.g., on GitHub) and verify the code that integrates models or remote endpoints. If you don't get clear answers, prefer an alternative with a complete source release.
  - Overall: don't run or install this in a production environment until the mismatches are resolved; treat it as incomplete/misleading and experiment only in a sandbox.
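
If you want to verify the 'no downloader' claim yourself, here is a minimal sketch, assuming the bundle is unpacked in the current directory; the pattern list is illustrative, not exhaustive:

```python
import re
from pathlib import Path

# Patterns that commonly indicate runtime fetching of code or data.
# Illustrative only; extend as needed for your threat model.
FETCH_PATTERNS = re.compile(
    r"urllib\.request|requests\.(get|post)|https?://|curl |wget "
)

for path in Path(".").rglob("*.py"):
    text = path.read_text(errors="ignore")
    for lineno, line in enumerate(text.splitlines(), start=1):
        if FETCH_PATTERNS.search(line):
            print(f"{path}:{lineno}: {line.strip()}")
```

No output from a scan like this is consistent with the verdict above, but it does not replace reading the scripts yourself.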
Review Dimensions
- Purpose & Capability
- concern: The skill claims multi‑modal support (image, text, audio, video) and an importable package 'labeling_studio', but the bundle only includes scripts for image annotation and quality checks. Several scripts referenced in SKILL.md (annotate_text.py, annotate_audio.py, annotate_video.py, export_dataset.py) and the labeling_studio module used in the examples are not present. The declared requirements (librosa, OpenCV, Pillow, scikit‑learn) are heavier than what the included scripts actually use. A quick existence check for the referenced files is sketched below.
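
A minimal sketch of that check, assuming the bundle is unpacked in the current directory; the file list mirrors the names flagged above:

```python
from pathlib import Path

# Files and modules that SKILL.md and the examples reference,
# per the findings above. Paths are relative to the bundle root.
EXPECTED = [
    "labeling_studio",      # package imported by the examples
    "annotate_text.py",
    "annotate_audio.py",
    "annotate_video.py",
    "export_dataset.py",
]

missing = [name for name in EXPECTED if not Path(name).exists()]
if missing:
    print("Bundle is incomplete; missing:", ", ".join(missing))
else:
    print("All referenced files are present.")
```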
- Instruction Scope
- concern: SKILL.md instructs running the scripts and doing pip install -r requirements.txt, which is expected, but many example commands and APIs reference missing files/modules (the labeling_studio import, scripts absent from the manifest). The runtime instructions also enable 'active learning' and 'pre_annotate', yet the included code contains only mock/simulated behavior rather than actual model integration; this is scope creep, a mismatch between promised capabilities and the real instructions. A sketch of what such mocked behavior looks like follows below.
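
To make the mismatch concrete, here is a hypothetical sketch of what mocked pre-annotation of this kind typically looks like; the function name, label set, and output fields are invented for illustration, not taken from the actual script:

```python
import random

LABELS = ["cat", "dog", "car", "person"]  # hypothetical label set

def pre_annotate(image_path: str) -> dict:
    # No model is loaded and the image is never opened: the "prediction"
    # is drawn at random, so downstream 'active learning' gets no signal.
    return {
        "image": image_path,
        "label": random.choice(LABELS),
        "confidence": round(random.uniform(0.5, 0.99), 2),
    }

print(pre_annotate("example.jpg"))
```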
- Install Mechanism
- note: There is no formal install spec (instruction-only), which is low risk. However, SKILL.md and the README suggest running pip install -r requirements.txt, which pulls several heavy third‑party packages; because the project is incomplete, installing those dependencies may be unnecessary and, if attempted, should be done in an isolated environment (see the sketch below).
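
A minimal sketch of an isolated install, assuming a POSIX layout and that requirements.txt sits in the current directory; the sandbox directory name is arbitrary:

```python
import subprocess
import venv

ENV_DIR = ".scan-sandbox"  # hypothetical throwaway directory

# Create an isolated virtualenv and install the skill's requirements
# into it, leaving the main environment untouched. The bin/ path is
# POSIX; on Windows the pip executable lives under Scripts\ instead.
venv.EnvBuilder(with_pip=True).create(ENV_DIR)
subprocess.run(
    [f"{ENV_DIR}/bin/pip", "install", "-r", "requirements.txt"],
    check=True,
)
```

Deleting the sandbox directory afterwards removes everything the install pulled in.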
- Credentials
- ok: The skill requests no environment variables, credentials, or config paths. The code reads only local file paths supplied by the user. There is no evidence of attempts to access unrelated secrets or network endpoints in the provided files.
- Persistence & Privilege
- ok: The skill is not always-enabled and does not request persistent system privileges or modify other skills. It does not include an installer that writes to system locations; it runs on demand as scripts.
