Skill flagged — suspicious patterns detected
ClawHub Security flagged this skill as suspicious. Review the scan results before using.
Plugin Orchestration Protocol
v1.0.0 · Plugin Orchestration Protocol (POP) for Obsidian integration. Use this skill when the user mentions "POP", "Obsidian plugin", "pipeline orchestration", "idea...
by toolated@toolate28
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The skill's stated purpose (orchestrating multi-step pipelines in Obsidian via a local WebSocket bridge) matches the instructions and reference files: discovery, execution, step reporting, and coherence checks are all described, and the included plugin catalog and protocol spec align with that purpose. However, the protocol and templates reference required tools and tokens (ATOM token, AutoFigure, Pandoc, Claude API fallback) that are not declared in the skill metadata (no required env vars, no required binaries). This mismatch is noteworthy but could be explained by the skill assuming the host already provides these components.
Instruction Scope
SKILL.md and protocol-spec permit pipelines to reference environment variables via $ENV_VAR syntax and show $ATOM_TOKEN_RESONANCE used in payloads, but the skill does not declare any required env vars. The plugin-catalog explicitly says ai_expand may 'fall back to Claude API directly if no plugin is installed' — that means vault content could be sent to external third-party APIs. Pipeline steps include create/delete/update note operations and publishing steps (including an unspecified 'publish' to a platform using ATOM auth). The instructions therefore allow (and in places instruct) reading and transmitting potentially sensitive vault content and environment variables to other processes or networks; this scope is broader than the simple 'Obsidian plugin orchestration' label might suggest.
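To make the concern concrete, here is a minimal sketch of the kind of $ENV_VAR substitution the protocol-spec permits. The substitution function and step shape are assumptions for illustration (the skill's actual implementation is not published); only the $ATOM_TOKEN_RESONANCE name comes from the scanned files. The point is that nothing in the metadata constrains which environment names a pipeline may reference:

```typescript
// Hypothetical sketch of $ENV_VAR substitution in step params.
// Any token matching $NAME is replaced with the host environment's value;
// the skill metadata declares no allowlist, so every variable is reachable.
function substituteEnvVars(
  params: Record<string, string>,
  env: Record<string, string | undefined>
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [key, value] of Object.entries(params)) {
    out[key] = value.replace(/\$([A-Z_][A-Z0-9_]*)/g, (match, name) =>
      env[name] ?? match // unknown names pass through unchanged
    );
  }
  return out;
}

// A step that references $ATOM_TOKEN_RESONANCE pulls the secret into the
// payload that will be dispatched over the WebSocket bridge.
const step = { auth: "$ATOM_TOKEN_RESONANCE", note: "Daily/2024-01-01" };
const resolved = substituteEnvVars(step, { ATOM_TOKEN_RESONANCE: "secret-123" });
// resolved.auth now holds the raw secret and travels wherever the step does.
```

Once a secret is inlined like this, every downstream hop (bridge, plugin, LLM fallback, publish endpoint) sees it in plaintext.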
Install Mechanism
This is an instruction-only skill (no install spec), which lowers friction, but the reference docs require external components (e.g., 'Requires: pip install autofigure' for figure generation; the Pandoc binary for export_docx) and a Rust WebSocket bridge. Those requirements are neither documented in the metadata nor enforced by an install step, creating an incoherence: the skill depends on host-side binaries and services that the installer may not know to install. The absence of install instructions makes it easy to miss these dependencies and increases risk if operators later install unspecified third-party tools without review.
Credentials
The protocol relies on an ATOM token (ATOM_TOKEN_RESONANCE) included in EXECUTE_PIPELINE messages and allows arbitrary $ENV_VAR substitution in step params, but the skill declares no required environment variables. That gap is important: pipelines can embed environment values into messages that will be dispatched to the bridge and possibly to external services (e.g., LLM fallback or publish steps). This creates the potential for accidental or intentional exfiltration of secrets if an agent substitutes sensitive env vars into pipeline params. The skill also references conservation/NEAR verification and other cross-service artifacts that may require secrets or keys, yet none are declared.
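One practical guard against this class of leak (a defensive sketch, not part of the skill) is to scan the serialized outbound message for the values of known-sensitive environment variables before it is sent to the bridge. The EXECUTE_PIPELINE message shape below is assumed from the scan description, not taken from the protocol-spec verbatim:

```typescript
// Defensive sketch: before dispatching a message, check the serialized
// payload for values of sensitive env vars, so a substituted secret
// cannot leave the host unnoticed.
function findLeakedSecrets(
  payload: string,
  env: Record<string, string | undefined>,
  sensitiveNames: string[]
): string[] {
  return sensitiveNames.filter((name) => {
    const value = env[name];
    return value !== undefined && value.length > 0 && payload.includes(value);
  });
}

// Assumed message shape, after $ATOM_TOKEN_RESONANCE substitution.
const message = JSON.stringify({
  type: "EXECUTE_PIPELINE",
  auth: "secret-123",
  steps: [{ plugin: "ai_expand", params: { note: "Daily/2024-01-01" } }],
});

const leaked = findLeakedSecrets(
  message,
  { ATOM_TOKEN_RESONANCE: "secret-123" },
  ["ATOM_TOKEN_RESONANCE"]
);
// A non-empty result means: refuse to send, or redact before dispatch.
```

A check like this belongs at the last hop before the WebSocket send, where the fully substituted payload is visible.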
Persistence & Privilege
The skill is not always-enabled, has no install spec that writes files, and does not request elevated agent privileges. The included TypeScript stub is designed to run inside an Obsidian plugin (connect as a WS client to a local Rust bridge) and does not attempt to modify other skills or global agent settings. No 'always: true' or other elevated persistence is present.
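Since the stub connects as a WebSocket client to a local bridge, one cheap hardening measure the plugin side could apply is to refuse any bridge URL that is not loopback. This is an illustrative sketch (the real bridge address and any default port are not documented in the skill):

```typescript
// Sketch: only ever connect the stub to a loopback bridge, never a
// remote host. Uses the WHATWG URL parser available in Node and browsers.
function isLocalBridgeUrl(raw: string): boolean {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return false; // not a parseable URL at all
  }
  const loopback = new Set(["localhost", "127.0.0.1", "[::1]"]);
  return (
    (url.protocol === "ws:" || url.protocol === "wss:") &&
    loopback.has(url.hostname)
  );
}
```

Gating the connection this way means a tampered config pointing at an external host fails closed instead of silently streaming vault content off-machine.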
What to consider before installing
Before installing or enabling this skill, get concrete answers to these questions:
1) Where and how is the ATOM token generated and stored? The manifest shows $ATOM_TOKEN_RESONANCE in pipeline payloads, but the skill metadata declares no required env vars. Do not provide any sensitive environment variables until you confirm what exactly will be sent and to whom.
2) Which external binaries/services must be installed on the host (AutoFigure, Pandoc, Rust bridge), and can you review those repositories and their network behavior?
3) What happens when ai_expand falls back to 'Claude API'? What endpoint is used, what data is sent, and is that acceptable for your vault contents?
4) Where does the 'publish' step actually post content (URL/host)? Ensure it is an explicit, reviewed endpoint.
5) If you must use this skill, restrict which environment variables the orchestrator may access, run the Rust bridge locally behind a firewall, and avoid supplying secrets or high-privilege tokens until the implementation and endpoints are audited.
If the publisher can provide the Rust bridge source and the Obsidian plugin code (or a vetted build), review those to confirm no external endpoints are hard-coded and that environment-variable substitution is safe.
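The env-var restriction in point 5 can be implemented as a simple allowlist filter: hand the orchestrator a derived environment containing only explicitly approved names, so $ENV_VAR substitution cannot reach anything else. The variable names below are illustrative, not taken from the skill:

```typescript
// Sketch of the mitigation above: build a filtered environment from an
// explicit allowlist so pipelines can only see approved variables.
function restrictEnv(
  env: Record<string, string | undefined>,
  allowlist: string[]
): Record<string, string> {
  const out: Record<string, string> = {};
  for (const name of allowlist) {
    const value = env[name];
    if (value !== undefined) out[name] = value;
  }
  return out;
}

// Hypothetical host environment; only VAULT_PATH is approved.
const hostEnv = {
  HOME: "/home/user",
  AWS_SECRET_ACCESS_KEY: "abc",
  VAULT_PATH: "/vault",
};
const orchestratorEnv = restrictEnv(hostEnv, ["VAULT_PATH"]);
// AWS_SECRET_ACCESS_KEY and HOME are never visible to pipeline substitution.
```

An allowlist (rather than a blocklist) is the right default here: new secrets added to the host later are excluded automatically instead of leaking by omission.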
