RamaLama CLI
Pass. Audited by ClawScan on May 1, 2026.
Overview
This is a transparent RamaLama CLI helper. Its notable risks are the ones expected of an AI-model runner: detached services, RAG over local files, and remote endpoints.
This skill appears appropriate if you want the agent to use RamaLama. Before installing or invoking it, decide which models and endpoints are trusted, avoid broad RAG paths over private files, and require confirmation for detached servers, remote endpoints, push/rm operations, or any workflow involving sensitive data.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
You are relying on the package manager’s RamaLama package and its model-source ecosystem rather than reviewed code bundled with the skill.
The skill installs an external CLI package through standard package managers, but the registry metadata does not provide a source or homepage and does not pin an exact package version.
Source: unknown; Homepage: none; install: `brew` formula `ramalama` / `uv` formula `ramalama`
Install only from trusted package repositories and verify the RamaLama package/source if provenance matters for your environment.
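If provenance matters, one mitigation is to pin an explicit version at install time rather than tracking whatever the registry currently serves. This is a sketch, not part of the skill: the version number is a placeholder, `uv tool install` with a `==` specifier is standard uv behavior, and Homebrew formulae generally cannot pin old versions, so the brew path only verifies what was installed.

```shell
# Placeholder version; check the release history of the source you trust.
VERSION="0.7.0"

# uv accepts an exact requirement specifier for a Python-distributed CLI:
UV_CMD="uv tool install ramalama==$VERSION"
echo "$UV_CMD"

# Homebrew installs the current formula; verify the result afterwards:
BREW_CMD="brew install ramalama && ramalama version"
echo "$BREW_CMD"
```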
An agent using this skill could run model-management commands that change local model storage or interact with registries when requested.
The skill exposes broad RamaLama lifecycle operations, including pulling/pushing models and removing items, which are purpose-aligned but can mutate local or registry state if used.
Inspect/source lifecycle operations: `inspect`, `pull`, `push`, `convert`, `list`, `rm`
Review and approve commands that push, remove, convert, or otherwise mutate model state, especially outside a local test environment.
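The review-before-mutate rule can be made mechanical. This sketch classifies the subcommands listed above and gates the mutating ones behind a confirmation prompt; the wrapper convention is hypothetical, not a RamaLama feature, and `pull` is treated as mutating because it writes to local model storage.

```shell
# Classify RamaLama subcommands so a wrapper can require confirmation
# before state-changing ones. Names come from the finding above.
is_mutating() {
  case "$1" in
    push|rm|convert|pull) return 0 ;;  # changes local or registry state
    inspect|list) return 1 ;;          # read-only
    *) return 0 ;;                     # unknown: fail safe, treat as mutating
  esac
}

# Hypothetical wrapper: prompt before any mutating subcommand.
run_ramalama() {
  if is_mutating "$1"; then
    printf 'Mutating command: ramalama %s -- continue? [y/N] ' "$*"
    read -r ans
    [ "$ans" = "y" ] || { echo "aborted"; return 1; }
  fi
  command ramalama "$@"
}
```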
Files or URLs added to a RAG bundle may influence later model answers and could include private data if broad paths are selected.
The RAG workflow can package local files or URL content into a reusable knowledge bundle, which is expected for the feature but creates retained context that may include sensitive or untrusted material.
Build knowledge bundle from files/URLs: `ramalama rag <paths...> <destination>`
Use narrow, intended paths; avoid secrets; treat URL content as untrusted; and delete RAG bundles when they are no longer needed.
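The narrow-path advice can be enforced before the bundle is built. This sketch uses the `ramalama rag <paths...> <destination>` form quoted above; the source and destination names are placeholders, and the breadth check is a hypothetical convention, not a RamaLama feature.

```shell
SRC="./docs/product-manual"         # narrow, intended content only
DEST="quay.io/example/manual-rag"   # placeholder bundle destination

# Refuse obviously over-broad sources such as /, $HOME, or the CWD:
case "$SRC" in
  /|"$HOME"|.) echo "refusing: path too broad" >&2; exit 1 ;;
esac

echo "would run: ramalama rag $SRC $DEST"
# Run for real only if the CLI is present and the path was reviewed:
# command -v ramalama >/dev/null && ramalama rag "$SRC" "$DEST"
# Delete the bundle when it is no longer needed so retained context
# does not outlive its purpose.
```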
Prompts, including any sensitive content provided by the user, could be sent to a local or remote endpoint chosen at runtime.
The skill supports sending prompts to an arbitrary existing model endpoint; this is part of its purpose, but endpoint trust and data boundaries are not defined in the artifact.
Query an existing endpoint: `ramalama chat --url <url> "<prompt>"`
Use only trusted endpoints, prefer localhost for sensitive data, and do not send private content to unknown URLs.
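The localhost preference can be expressed as an allow-list check wrapped around the `ramalama chat --url` form quoted above. The URL and prompt are placeholders, and the allow-list itself is a hypothetical convention; extend it only with endpoints you have deliberately trusted.

```shell
# Accept only loopback endpoints by default.
is_local_url() {
  case "$1" in
    http://127.0.0.1:*|http://localhost:*) return 0 ;;
    *) return 1 ;;
  esac
}

URL="http://127.0.0.1:8080/v1"      # placeholder local endpoint
PROMPT="Summarize the release notes."

if is_local_url "$URL"; then
  echo "would run: ramalama chat --url $URL \"$PROMPT\""
  # command -v ramalama >/dev/null && ramalama chat --url "$URL" "$PROMPT"
else
  echo "refusing: $URL is not a trusted local endpoint" >&2
  exit 1
fi
```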
A model service may continue consuming resources or accepting requests until it is explicitly stopped.
The documented detached service recipe can leave a model server running beyond the immediate command, even though this is disclosed and aligned with the skill’s serving purpose.
Start a detached service: `ramalama serve -d granite3.3:2b`
Confirm before starting detached services, bind/listen only where intended, choose ports deliberately, and stop services after use.
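The confirm, start, and stop cycle might look like the sketch below. The `--name` and `--port` flags and the `ramalama stop` step are best-effort assumptions about the CLI, so verify them against `ramalama serve --help` before relying on this; the name and port are placeholders.

```shell
MODEL="granite3.3:2b"
NAME="granite-dev"   # placeholder container name, used again to stop it
PORT=8080            # deliberate port choice, not an accidental default

# In an interactive session, confirm before starting (commented so the
# sketch runs non-interactively):
# printf 'Start detached server %s on port %s? [y/N] ' "$NAME" "$PORT"
# read -r ans; [ "$ans" = "y" ] || exit 1

SERVE_CMD="ramalama serve -d --name $NAME --port $PORT $MODEL"
echo "$SERVE_CMD"
# $SERVE_CMD              # start it
# ...use the service...
# ramalama stop "$NAME"   # stop it so it stops consuming resources
```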
