Llama AI
v1.0.0 · Llama AI integration. Manage Organizations. Use when the user wants to interact with Llama AI data.
by Vlad Ursul (@gora050)
MIT-0
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Benign
medium confidence

Purpose & Capability
The skill claims to integrate with Llama AI and its SKILL.md consistently describes using Membrane to discover connectors, run actions, and proxy requests to Llama AI. Using Membrane for this purpose is plausible. Minor mismatch: the short description mentions "Manage Organizations" but the instructions focus on connectors/actions/proxy and do not explicitly document organization-management steps.
Instruction Scope
All runtime instructions are limited to invoking the Membrane CLI (via npx) to search connectors, create connections, list actions, run actions, or proxy requests. The doc explicitly says credentials are stored at ~/.membrane/credentials.json; it does not instruct reading unrelated files or environment variables. This scope is appropriate, but it does grant the tool (Membrane CLI) access to store and use credentials on the host.
Install Mechanism
No install spec is provided; instead, the instructions rely on npx @membranehq/cli@latest. That means the npm package is fetched and executed at runtime (moderate risk). This is an expected pattern for CLI-first integrations, but it is less controlled than a pinned, preinstalled binary or an audited package.
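The difference between the skill's runtime invocation and a pinned one can be sketched as below. The @latest form is quoted from the skill doc; the exact version number is a placeholder, not a known release of this package.

```shell
# The skill's instructions run the CLI like this; @latest resolves to
# whatever version npm serves at the moment of invocation:
npx @membranehq/cli@latest

# Pinning an exact, audited version removes that moving target.
# 1.2.3 is a placeholder: substitute a version you have actually reviewed.
npx @membranehq/cli@1.2.3
```

With a pinned spec, re-running the command keeps executing the same reviewed code rather than silently picking up a new publish.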
Credentials
The skill does not request environment variables or unrelated credentials. However, at runtime it creates and relies on ~/.membrane/credentials.json (local credential storage) and requires a Membrane account. These are proportionate to the skill's described use but should be noted by the user.
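A quick way to audit that local credential store is to check and tighten its file permissions; the path comes from the skill's own documentation, and the commands below are standard POSIX tools.

```shell
# Check who can read the credential file Membrane creates:
ls -l ~/.membrane/credentials.json

# Restrict it to your user only (read/write for owner, nothing for group/other):
chmod 600 ~/.membrane/credentials.json
```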
Persistence & Privilege
The skill does not request always:true or other elevated installation privileges. It will cause the Membrane CLI to store credentials in the user's home directory but does not instruct modifying other skills or system-wide agent settings.
Assessment
This skill is internally consistent with a Membrane-to-Llama AI integration, but before installing:

1. Confirm you trust the Membrane project and the npm package @membranehq/cli@latest, because the instructions run npx, which downloads and executes code at runtime.
2. Be aware the CLI will store credentials in ~/.membrane/credentials.json; review that file and its permissions if you care about local secrets.
3. Consider installing or pinning a specific, audited Membrane CLI version instead of always using @latest.
4. If you operate in a headless or restricted environment, verify the headless login flow and confirm you are comfortable completing auth via a copied URL/code.
5. This skill can run commands and network requests via the CLI, so only enable it for agents/tasks you trust.

Like a lobster shell, security has layers — review code before you run it.
