Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

agent-native-architecture

v2.56.0

Build applications where agents are first-class citizens. Use when designing autonomous agents, MCP tools, or self-modifying agent-loop architectures.

by Ilia Alshanetsky (@iliaal)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
Capability signals
Crypto · Can make purchases · Requires sensitive credentials
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name and description (agent-native architecture) match the content: an extensive, instruction-only guide to designing autonomous and self-modifying agents. It requests no binaries, env vars, or installs, which is proportionate for a documentation skill.
Instruction Scope
SKILL.md and the reference files instruct agents and architects to use primitives such as write_file, git commit/push, and self_deploy, and to modify system prompts and agent code. Those instructions are expected in an architecture guide, but they expand the agent's actionable surface (file writes, code changes, system-prompt overrides). The file explicitly tells an agent to 'apply those patterns to the user's specific context', which could lead the agent to request or attempt access to arbitrary application state. The pre-scan flagged 'system-prompt-override' patterns; that is expected for this topic, but it is a real risk if the instructions are followed with elevated permissions.
Install Mechanism
No install spec and no code files to execute; this is instruction-only which is the lowest-risk install footprint.
Credentials
The skill declares no required env vars or credentials, which is proportionate. However, many of its recommended patterns (git push, self_deploy, web fetch, context injection) implicitly require credentials and system access once implemented. The skill itself does not demand those secrets, but a follow-on implementation would; review any integrations carefully.
Persistence & Privilege
While the skill does not set always:true and is user-invocable, its guidance normalizes giving agents the power to change system prompts, modify source (src/*.ts), commit/push, and self-deploy. If an agent implementing these patterns is granted write, git-push, or system-prompt permissions, the blast radius is high. Apply approval gates, least privilege, and human-in-the-loop controls if you follow this guidance.
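The approval gate and human-in-the-loop controls recommended above can be as simple as a wrapper that refuses privileged actions unless a reviewer signs off and logs every attempt. Below is a minimal Python sketch; the names (ApprovalGate, the action labels) are illustrative and not drawn from the skill under review.

```python
# Minimal human-in-the-loop approval gate for privileged agent actions.
# All names here (ApprovalGate, the action labels) are illustrative and
# not part of the skill being reviewed.

class ApprovalDenied(Exception):
    pass

class ApprovalGate:
    """Runs an action only if an approver callback allows it, and keeps
    an audit trail of every attempt, approved or not."""

    def __init__(self, approver):
        self.approver = approver   # callable: (action_name, args) -> bool
        self.audit_log = []        # (action_name, args, approved) tuples

    def run(self, action_name, func, *args, **kwargs):
        approved = bool(self.approver(action_name, args))
        self.audit_log.append((action_name, args, approved))
        if not approved:
            raise ApprovalDenied(f"{action_name} rejected by reviewer")
        return func(*args, **kwargs)

# Deny-by-default policy: only read operations pass without review.
gate = ApprovalGate(approver=lambda name, args: name == "read_file")

def read_file(path):
    return f"<contents of {path}>"

gate.run("read_file", read_file, "SKILL.md")             # allowed
try:
    gate.run("git_push", lambda remote: None, "origin")  # blocked
except ApprovalDenied:
    pass
```

In a real runtime the approver callback would block on an actual human decision (a CLI prompt, a review UI) rather than a hard-coded policy, but the shape is the same: the gate, not the agent, decides what executes.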
Scan Findings in Context
[system-prompt-override] expected: The content discusses system prompt design and self-modification, so detection of system-prompt-override patterns is expected for an architecture document. However, such patterns are precisely what attackers try to exploit, so treat them as high-risk when enabling agent permissions that could execute these instructions.
What to consider before installing
This skill is documentation for building agent-native systems; the files appear coherent and relevant. It does not request credentials or install code by itself. However, the guide teaches patterns that let agents write files, modify system prompts, commit/push code, and self-deploy. Before installing or using this skill in a runtime that can act on your system:

1) Do NOT grant agents file-write, git push, or cloud credentials unless you have explicit approval workflows and human-in-the-loop gates.
2) Require explicit manual approval for any change to agent code or system prompts (the 'apply_pending' pattern is a good model).
3) Restrict any orchestrator or agent runtime so it cannot modify platform-level configs or other skills' prompts.
4) Audit and log all agent-initiated file/git operations and limit their scope (per-repo/per-path).
5) If you plan to run these patterns in production, perform a security review of the actual implementation (who supplies git tokens, where .env lives, CI hooks).

This assessment would change to 'benign' if the skill remained purely documentation and you retained strict human approval with no agent write/push permissions; it would escalate toward 'malicious' if the skill included code that exfiltrated credentials or attempted to override system prompts autonomously.
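The manual-approval and scoped-write recommendations above can be combined into a "stage, then apply on approval" flow in the spirit of the 'apply_pending' pattern. The sketch below is hypothetical: the class and method names and the path allowlist are assumptions for illustration, not something the skill defines.

```python
# Hypothetical "stage, then apply on approval" flow: the agent may only
# queue proposed edits; nothing is committed until a human applies them.
# The allowlist below is an assumed per-path scope limit.
ALLOWED_PREFIXES = ("src/", "docs/")

class PendingChanges:
    def __init__(self):
        self.pending = {}   # path -> proposed content
        self.applied = {}   # stand-in for the real filesystem/git write

    def stage(self, path, content):
        """Agent-facing call: record a proposed change, enforcing scope."""
        if not path.startswith(ALLOWED_PREFIXES):
            raise PermissionError(f"path out of allowed scope: {path}")
        self.pending[path] = content

    def apply_pending(self):
        """Human-facing call: commit everything staged, then clear the queue."""
        self.applied.update(self.pending)
        count = len(self.pending)
        self.pending.clear()
        return count

changes = PendingChanges()
changes.stage("src/agent.ts", "// proposed refactor")
try:
    changes.stage("secrets/.env", "...")   # out of scope: rejected
except PermissionError:
    pass
changes.apply_pending()   # called only after a human reviews the queue
```

The key property is that the agent's write primitive and the commit step are different calls with different callers, so granting the agent `stage` never implies granting it `apply_pending`.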
Prompt-injection style instruction pattern detected in:
references/action-parity-discipline.md:248
references/agent-execution-patterns.md:248
references/agent-native-testing.md:215
references/architecture-patterns.md:53
references/dynamic-context-injection.md:161
references/quick-start.md:31
references/refactoring-to-prompt-native.md:184
references/system-prompt-design.md:42
About static analysis
These patterns were detected by automated regex scanning. They may be normal for skills that integrate with external APIs. Check the VirusTotal and OpenClaw results above for context-aware analysis.
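For intuition, the kind of regex pre-scan described above can be approximated in a few lines of Python. The two patterns below are invented examples to show the mechanism; they are not the scanner's actual rule set.

```python
# Toy illustration of a regex-based pre-scan: flag lines matching
# known prompt-injection phrasings. Patterns here are examples only,
# not the real ruleset used by the scanner.
import re

SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"override the system prompt", re.IGNORECASE),
]

def scan(text):
    """Return (line_number, pattern) pairs for every suspicious match."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pat in SUSPICIOUS_PATTERNS:
            if pat.search(line):
                findings.append((lineno, pat.pattern))
    return findings

doc = "Step 1: plan.\nThen override the system prompt with the new persona."
print(scan(doc))  # -> [(2, 'override the system prompt')]
```

This also shows the limitation the report notes: a pure regex scan cannot tell whether a match is a guide *discussing* the pattern or an attack *using* it, which is why the context-aware verdicts above matter.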

Like a lobster shell, security has layers — review code before you run it.

latest: vk972sdsrvt5ph6fdktkx21h2a984v1rc

