Ollama Integration

v1.0.0

Integrate and run local Ollama AI models with custom prompts for AI assistance and automatic model management.

MIT-0
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The name and description match the code: the skill requires the 'ollama' library and exposes listing and generate/run functions for local Ollama models, which is exactly what the description promises.
Instruction Scope
SKILL.md is minimal and focused on local model integration; the implementation only calls ollama.list() and ollama.generate() and does not read arbitrary filesystem paths, environment variables, or contact external endpoints directly.
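The call pattern described above can be sketched as follows. This is an illustrative reconstruction, not the skill's actual source: the function names, the `stream: false` option, and the `llama3` model name are assumptions; only the use of `ollama.list()` and `ollama.generate()` comes from the review.

```javascript
// Pure helper: assembles the payload passed to ollama.generate().
// stream: false requests a single response object rather than a token stream.
function buildGenerateRequest(model, prompt) {
  return { model, prompt, stream: false };
}

// List the names of models known to the local Ollama daemon
// (the 'ollama' npm client talks to localhost:11434 by default).
async function listLocalModels() {
  const { default: ollama } = await import('ollama'); // npm 'ollama' package
  const { models } = await ollama.list();
  return models.map((m) => m.name);
}

// Run one prompt against a local model and return the completion text.
async function runPrompt(model, prompt) {
  const { default: ollama } = await import('ollama');
  const res = await ollama.generate(buildGenerateRequest(model, prompt));
  return res.response;
}
```

Note that both functions only contact the local daemon; nothing here reads the filesystem or calls external endpoints, consistent with the scope assessment above.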
Install Mechanism
There is no explicit install spec. package.json declares a dependency on the npm package 'ollama', which is appropriate for this functionality, but package.json is written in a non-JSON style and may cause installation/runtime issues if the platform expects valid JSON or an install step. The skill does not pull code from arbitrary URLs or other high-risk sources.
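For comparison, a minimal well-formed package.json of the kind an npm-based platform would expect might look like the sketch below. The package name and version range are illustrative assumptions; only the dependency on the 'ollama' package is stated in the review.

```json
{
  "name": "ollama-integration-skill",
  "version": "1.0.0",
  "dependencies": {
    "ollama": "^0.5.0"
  }
}
```

If the shipped package.json deviates from strict JSON (comments, trailing commas, unquoted keys), `npm install` will fail, which is the installation risk flagged above.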
Credentials
No environment variables, credentials, or config paths are requested. The skill's needs (a local Ollama client library) are proportionate to its stated purpose.
Persistence & Privilege
The skill does not request always:true or other elevated persistence, and it does not modify other skills or system-wide config. Default autonomous invocation is enabled (platform default) which is expected for skills.
Assessment
This skill appears to be what it claims: a thin wrapper around the 'ollama' client for local model usage. Before installing:
1. Confirm you run an Ollama service locally and are comfortable exposing prompts to it.
2. Ensure your platform will install npm dependencies or pre-install the 'ollama' package; note that package.json appears malformed and may need fixing.
3. Review the third-party 'ollama' npm package source (and its network behavior) if you need to be extra cautious.
4. Don't provide secrets or sensitive data to models unless you trust the local environment and the model's handling of data.

Like a lobster shell, security has layers — review code before you run it.

Tags: ai · latest · llm · local · ollama

License

MIT-0
Free to use, modify, and redistribute. No attribution required.
