SenseCraft AI Model Hub
Analysis
This skill is mostly coherent with its stated purpose, but users should note that it can install Python packages, download public model files, write local artifacts, and access the webcam for the demo.
Artifact-based informational review of SKILL.md, metadata, install specs, static-scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes. It checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.
Findings (4)
python -m pip install numpy opencv-python pillow ai-edge-litert
The local demo setup installs external Python packages without version pins. This is expected for a demo environment, but it means the installed code reflects the current state of the package index rather than a locked dependency set.
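One way to make an unpinned install reproducible after the fact is to snapshot what was actually installed. A minimal standard-library sketch; `snapshot_versions` is a hypothetical helper, not part of the skill:

```python
from importlib import metadata

def snapshot_versions(names):
    """Return a mapping of distribution name -> installed version,
    or None when the distribution is not present."""
    versions = {}
    for name in names:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = None
    return versions

# Record the demo's dependencies as installed right now,
# e.g. to write into a lockfile alongside the skill.
print(snapshot_versions(["numpy", "opencv-python", "pillow", "ai-edge-litert"]))
```

The recorded versions could then be fed back into `python -m pip install name==version` to reconstruct the same environment later.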
python "$ROOT/scripts/sensecraft_models.py" download --model-id 60080 --output-dir "$ROOT/models" --manifest "$ROOT/models/downloaded.json" --summary
The launcher automatically downloads a public SenseCraft model file if the expected local model is missing. This is disclosed and aligned with the demo, but the artifact is fetched from a remote source at run time.
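Run-time downloads of this kind can be sanity-checked against the manifest the launcher writes. A sketch that assumes, hypothetically, that the manifest records a `sha256` per file; the actual layout of `downloaded.json` is not shown in the skill:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path):
    """Stream-hash a file so large model blobs are not loaded into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest_path):
    """Check every file listed in the manifest against its recorded sha256.
    Returns the list of paths that failed verification."""
    entries = json.loads(Path(manifest_path).read_text())
    failures = []
    for entry in entries:  # assumed shape: [{"path": ..., "sha256": ...}, ...]
        if sha256_of(entry["path"]) != entry["sha256"]:
            failures.append(entry["path"])
    return failures
```

An empty return value means every downloaded artifact still matches what the manifest recorded at fetch time.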
cap = cv2.VideoCapture(args.camera)
The webcam demo opens a local camera device for live inference. This is central to the advertised local demo and requires normal OS/user permission, but it is privacy-relevant device access.
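A fail-fast wrapper around the capture call makes a denied permission or missing device an explicit error instead of a stream of empty frames. A sketch assuming OpenCV's `VideoCapture` API; `open_camera` and the injectable `capture_factory` are illustrative additions, not part of the skill:

```python
def open_camera(index=0, capture_factory=None):
    """Open a camera device and fail fast if it is unavailable.

    capture_factory defaults to cv2.VideoCapture; it is a parameter so the
    helper can be exercised without real camera hardware.
    """
    if capture_factory is None:
        import cv2  # imported lazily so the helper is testable without OpenCV
        capture_factory = cv2.VideoCapture
    cap = capture_factory(index)
    if not cap.isOpened():
        cap.release()
        raise RuntimeError(f"camera {index} could not be opened")
    return cap
```

The explicit `RuntimeError` surfaces the OS-level permission prompt or device failure to the user rather than letting the demo loop spin on dead frames.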
if key == ord('s'):
captures_dir = Path(__file__).resolve().parent.parent / "captures"
captures_dir.mkdir(parents=True, exist_ok=True)
out_path = captures_dir / f"capture-{int(time.time())}.png"
cv2.imwrite(str(out_path), annotated)
The demo can save annotated camera frames to a local captures directory when the user presses the save key. This is disclosed in the reference notes and is user-triggered, but saved images may contain private visual data.
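Since saved captures can hold private imagery, a small audit helper lets a user see what has accumulated and decide what to delete. A standard-library sketch; `list_captures` is a hypothetical addition, not part of the demo:

```python
from pathlib import Path

def list_captures(captures_dir):
    """Return saved capture files, newest first, as (name, size_in_bytes)
    pairs, so a user can review and prune what the demo has written."""
    root = Path(captures_dir)
    if not root.is_dir():
        return []
    files = sorted(root.glob("capture-*.png"),
                   key=lambda p: p.stat().st_mtime, reverse=True)
    return [(p.name, p.stat().st_size) for p in files]
```

The glob matches the `capture-<timestamp>.png` naming pattern the demo uses, so unrelated files in the directory are left out of the listing.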
