Llava Vision

Pass. Audited by ClawScan on May 10, 2026.

Overview

The skill appears to do what it says: analyze a chosen image with a local LLaVA server. Users should only provide files or URLs they intend to process.

This skill is reasonable to install if you intend to analyze selected images with your own local LLaVA/llama.cpp server. Before using it, make sure the local server is trustworthy and only provide image files or URLs you are comfortable having processed locally.

Findings (3)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Unvalidated local path and URL input

What this means

If the agent or user supplies the wrong path or URL, the skill could process an unintended local file or fetch an unintended remote/internal URL.

Why it was flagged

The tool can fetch a user-provided URL or read a user-provided local path. That is expected for this skill, but the artifacts do not restrict the input to image files or trusted locations.

Skill content
if (imagePathOrUrl.startsWith("http://") || imagePathOrUrl.startsWith("https://")) {
  const res = await fetch(imagePathOrUrl);
  ...
} else {
  const buffer = await fs.promises.readFile(imagePathOrUrl);
}
Recommendation

Use only trusted image files or URLs. A future version could add file-type, size, and path/domain validation or ask for confirmation before reading unusual local paths.

Finding 2: Image data is sent to the local model server

What this means

Images may be visible to the local llama.cpp server process and any logs or operators associated with that local service.

Why it was flagged

The full image content is base64-encoded and sent to a local model server. The destination is localhost and matches the skill description, but it is still a data transfer to another local process.

Skill content
const LLAMA_SERVER_URL = "http://127.0.0.1:8081/v1/chat/completions"; ... url: `data:image/jpeg;base64,${imageBase64}`
Recommendation

Only run this with a local server you trust, and avoid sending images that contain sensitive information unless you are comfortable with that local processing.
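For context, the request body implied by the snippet above would look roughly like the sketch below. The field layout follows the OpenAI-style chat completions format that llama.cpp's server accepts; the helper name `buildChatPayload` is hypothetical.

```javascript
// Hypothetical sketch of the request body sent to the local llama.cpp server.
// The endpoint URL matches the skill content; buildChatPayload is illustrative.
const LLAMA_SERVER_URL = "http://127.0.0.1:8081/v1/chat/completions";

function buildChatPayload(imageBase64, prompt) {
  return {
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: prompt },
          // The full image travels in the request as a base64 data URL, so it is
          // visible to the server process and anything that logs the request.
          { type: "image_url", image_url: { url: `data:image/jpeg;base64,${imageBase64}` } },
        ],
      },
    ],
  };
}
```

The point of the sketch is the privacy property, not the API shape: whatever is in the image leaves the skill process and crosses into the server process in full.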

Finding 3: Reliance on an external llama.cpp server

What this means

The safety and privacy of image processing also depend on the local llama.cpp server the user chooses to run.

Why it was flagged

The skill relies on a separately installed local llama.cpp server that is not included or installed by these artifacts. This is disclosed and expected, but the external server's provenance is outside this review.

Skill content
- Node.js (to run the skill itself)
- A local **llama.cpp** server with the LLaVA model exposed at the default endpoint.
Recommendation

Install llama.cpp and the LLaVA model from trusted sources, and verify that the local server is bound only as intended before using the skill.
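One cheap check before use is to confirm that the configured endpoint really points at loopback. A minimal sketch, assuming the URL string from the skill content (`isLoopbackEndpoint` is a hypothetical helper):

```javascript
// Hypothetical check that the configured server URL targets loopback only.
function isLoopbackEndpoint(urlString) {
  const { hostname } = new URL(urlString);
  // Node's WHATWG URL keeps brackets around IPv6 hostnames.
  return hostname === "127.0.0.1" || hostname === "localhost" || hostname === "[::1]";
}
```

This only inspects the URL the skill will call; verifying what the server itself is bound to is still a server-side concern (llama.cpp's server takes a `--host` option controlling its bind address).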