n8n
Warn
Audited by ClawScan on May 10, 2026.
Overview
This is not a coherent n8n skill; it appears to be a mostly simulated agent-framework listing with mismatched identity, broad always-on/self-improvement claims, and unsupported safety assurances.
Do not install this as a trusted n8n integration. Ask the publisher to reconcile the listing identity, provide a verifiable source repository, and supply real implementation/tests for the claimed security gates and self-improvement behavior. If you experiment anyway, use an isolated environment and avoid providing API keys or sensitive data.
Findings (6)
This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
You may not be installing the tool you expected, and it is hard to verify who produced the package or whether it matches the listing.
The registry listing presents this as name "n8n", slug "brunosouto1108", version "1.0.0", while the packaged metadata identifies a different skill and version. Combined with an unknown source and no homepage, this creates provenance and identity ambiguity.
"slug": "agentic-ai-gold", "version": "4.0.0"
Do not treat this as a verified n8n skill until the registry identity, package metadata, source, and publisher information are reconciled.
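One way to catch this class of mismatch before installing is to diff the listing against the packaged metadata. A minimal sketch, assuming both have been saved locally as JSON; the file names and field names here are illustrative, not taken from the artifacts:

    import json

    # Hypothetical local copies of the registry listing and the package's own metadata.
    with open("registry_listing.json") as f:
        registry = json.load(f)  # e.g. {"name": "n8n", "slug": "brunosouto1108", "version": "1.0.0"}
    with open("package_metadata.json") as f:
        package = json.load(f)   # e.g. {"slug": "agentic-ai-gold", "version": "4.0.0"}

    # Flag any identity field where the listing and the package disagree.
    for field in ("name", "slug", "version"):
        if registry.get(field) != package.get(field):
            print(f"identity mismatch on {field!r}: "
                  f"registry={registry.get(field)!r}, package={package.get(field)!r}")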
A user could rely on safety gates, tests, or resilience claims that are not actually implemented in the provided artifacts.
The example only prints a simulated security status, while the documentation presents '17 dharmic security gates' as operational protections. This can give users false confidence that real enforcement exists.
# Simulate council activation ... print(" ✓ ALL 17 GATES ACTIVE")
Require real implementation code, tests, and documented enforcement behavior before trusting the advertised safety claims.
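For contrast, enforcement worth trusting is testable: a gate takes a concrete action as input and denies disallowed ones, and a test asserts the denial, which a printed status banner cannot show. A toy sketch of the kind of evidence to request; every name here is hypothetical, and nothing like it exists in the artifacts:

    # All names hypothetical: a gate that evaluates a concrete action...
    def security_gate(action: dict) -> bool:
        """Toy policy: deny any action that targets a credential store."""
        return "credential" not in action.get("target", "")

    # ...and a test that asserts the denial actually happens.
    def test_gate_blocks_credential_access():
        assert security_gate({"target": "user_credential_store"}) is False
        assert security_gate({"target": "workflow_config"}) is True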
If implemented as described, it could keep operating or proposing changes outside the user’s immediate task boundaries.
The skill advertises autonomous background agents and self-improvement, but the artifacts do not define start/stop controls, scheduling, sandboxing, update scope, or how user approval is enforced.
Our 4-member Persistent Council runs 24/7 ... Runs overnight research cycles, identifies capability gaps, and proposes updates to itself.
Only enable any persistent or self-improving behavior after confirming explicit start/stop controls, review-before-change guarantees, logging, and rollback.
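Concretely, "explicit controls" means the loop cannot run or change anything without a user-held switch and an approval step. A minimal sketch of the shape to look for, entirely hypothetical since the artifacts define none of it:

    import threading

    stop_event = threading.Event()  # user-held stop control: stop_event.set() halts the loop

    def approve(change: str) -> bool:
        """Review-before-change: nothing applies without an explicit human yes."""
        return input(f"apply {change!r}? [y/N] ").strip().lower() == "y"

    def run_cycles(propose, apply):
        """Bounded background loop: stoppable, scheduled, and approval-gated."""
        while not stop_event.is_set():
            change = propose()              # agent may propose an update...
            if change and approve(change):  # ...but a human must accept it
                apply(change)               # apply step should log and support rollback
            stop_event.wait(timeout=3600)   # wake on a schedule instead of spinning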
Running the installer may pull in third-party code that changes over time and was not reviewed in these artifacts.
The manual installer would fetch unpinned third-party packages and continue even if installation fails. This is purpose-aligned for an agent framework, but reduces reproducibility and reviewability.
pip install -q langgraph openai-agents crewai pydantic-ai mem0 zep-python 2>/dev/null || true
Pin dependency versions, avoid suppressing install errors, and review dependencies before running the script.
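A reviewable alternative pins every dependency and lets failures surface instead of masking them. A minimal sketch, assuming a hypothetical requirements.lock of exact, pre-reviewed versions (e.g. lines like langgraph==0.2.0, with the pins chosen during review):

    import subprocess
    import sys

    # requirements.lock is a hypothetical pinned file. check=True raises on failure
    # instead of continuing silently the way `2>/dev/null || true` does.
    subprocess.run(
        [sys.executable, "-m", "pip", "install", "-r", "requirements.lock"],
        check=True,
    )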
Supplying this key would grant the skill or related code access to your OpenRouter account quota and billing context.
The skill suggests an optional model-provider API key, which is expected for model fallback, but the registry metadata declares no credentials or environment variables.
export OPENROUTER_API_KEY=your_key_here
Use a scoped/revocable API key if testing, and do not provide credentials until the package identity and implementation are verified.
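If you do experiment with a key, keep its blast radius small: use a dedicated, revocable key and confine it to the test process rather than a persistent shell-wide export. A minimal sketch; the variable name matches the skill's documented OPENROUTER_API_KEY, everything else is illustrative:

    import os

    # Supply the key per-process (e.g. `env OPENROUTER_API_KEY=... python test.py`)
    # rather than a persistent shell-wide export, and fail closed if it is absent.
    key = os.environ.get("OPENROUTER_API_KEY")
    if not key:
        raise SystemExit("OPENROUTER_API_KEY not set; refusing to fall back to a shared key")
    # Pass `key` only to the isolated test client, and revoke the key after the session.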
Sensitive prompts or project context could be retained or reused if the claimed memory system is later implemented.
The advertised memory architecture implies persistent reuse of user/context data, but the artifacts do not specify storage location, retention, deletion, or exclusion rules.
Working → Semantic → Episodic → Procedural → Meta-Cognitive ... Agents that remember how they learned, not just what.
Clarify memory storage, retention, deletion, and opt-out behavior before sharing sensitive data with this skill.
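The questions to get answered are mechanical: where memory is stored, how long it is kept, how it is deleted, and what is excluded. A sketch of the kind of declaration to request from the publisher; none of these fields exist in the artifacts:

    # Hypothetical declaration; field names and values are illustrative only.
    MEMORY_POLICY = {
        "storage": "./memory.sqlite",            # explicit local path or named remote service
        "retention_days": 30,                    # hard TTL after which entries are purged
        "deletion": "user-invocable full wipe",  # documented delete/opt-out mechanism
        "excluded": ["credentials", "secrets", "payment_data"],  # never persisted
    }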
