Agents

Design, build, and deploy AI agents with architecture patterns, framework selection, memory systems, and production safety.

MIT-0 · Free to use, modify, and redistribute. No attribution required.
2 · 711 · 7 current installs · 7 all-time installs
by Iván (@ivangdavila)
Security Scan
VirusTotal
Benign
OpenClaw
Benign
high confidence
Purpose & Capability
Name/description match the contents: the package is documentation and checklists for designing, implementing, evaluating, and securing agents. It requests no binaries, env vars, or installs, which is appropriate for an instructional skill. Note: the skill metadata has no homepage and the owner is an opaque ID; that reduces provenance signals but does not create an internal inconsistency.
Instruction Scope
SKILL.md and the included markdown files are guidance only and do not instruct the agent to read arbitrary system files, exfiltrate data, or call external endpoints. The instructions focus on architecture, testing, and security practices (including avoiding prompt injection). There is no vague open-ended instruction that would grant broad discretionary access.
Install Mechanism
No install spec and no code files — lowest-risk model. Nothing is downloaded or written to disk by the skill itself.
Credentials
The skill declares no required environment variables, credentials, or config paths. The guidance even warns against putting secrets in prompts and advises retrieving secrets from environment variables without exposing them. No disproportionate credential requests are present.
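As a minimal sketch of that advice, a secret can be fetched from an environment variable and validated without ever printing or logging its value (the variable name below is a placeholder, not anything the skill defines):

```python
import os

def get_api_key(name: str) -> str:
    """Fetch a secret from the environment.

    Fails loudly if the variable is absent, but never echoes the
    secret value itself into logs, error messages, or prompts.
    """
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value
```

The key design choice is that only the variable *name* ever appears in an error message; the value stays out of any text that might reach a prompt or a log.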
Persistence & Privilege
Defaults for invocation/persistence are normal (always:false, agent can invoke autonomously). The skill does not request persistent system presence or modification of other skills or global agent config. Nothing indicates privilege escalation.
Scan Findings in Context
[ignore-previous-instructions] expected: The phrase/pattern was detected in the content but appears in the security guidance (discussing persona hijacking and prompt injection). Presence is expected and used as an example of what to detect/defend against, not as an instruction to ignore prior constraints.
Assessment
This skill is documentation-only and internally coherent with its aim to teach how to design, implement, and secure agents. Before installing or relying on it:

  1. The publisher has no public homepage. If provenance matters, prefer skills with identifiable authors or an organization.
  2. The skill contains code snippets and operational advice but will not run code or access secrets by itself. If you implement the patterns, follow the security.md checklist (sandbox tools, avoid putting secrets in prompts, require approvals for destructive actions).
  3. The prompt-injection pattern found is part of the security discussion and not an active exploit, but remain cautious: never paste sensitive keys into any prompt or untrusted context, and sandbox any code you copy from the implementation examples.

Like a lobster shell, security has layers — review code before you run it.

Current version: v1.0.0
latest: vk97491205a3steds36m8e729gs813h7b


SKILL.md

When to Use

Use when designing agent systems, choosing frameworks, implementing memory/tools, specifying agent behavior for teams, or reviewing agent security.

Quick Reference

  • Architecture patterns & memory: architecture.md
  • Framework comparison: frameworks.md
  • Use cases by role: use-cases.md
  • Implementation patterns & code: implementation.md
  • Security boundaries & risks: security.md
  • Evaluation & debugging: evaluation.md

Before Building — Decision Checklist

  • Single purpose defined? If you can't say it in one sentence, split into multiple agents
  • User identified? Internal team, end customer, or another system?
  • Interaction modality? Chat, voice, API, scheduled tasks?
  • Single vs multi-agent? Start simple — only add agents when roles genuinely differ
  • Memory strategy? What persists within session vs across sessions vs forever?
  • Tool access tiers? Which actions are read-only vs write vs destructive?
  • Escalation rules? When MUST a human step in?
  • Cost ceiling? Budget per task, per user, per month?
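The tool-access-tiers question from the checklist can be made concrete with a small registry sketch. The tool names and tier assignments below are hypothetical examples, not part of the skill:

```python
from enum import Enum

class Tier(Enum):
    READ = "read"                 # safe to auto-approve
    WRITE = "write"               # allowed, but policy-checked
    DESTRUCTIVE = "destructive"   # always requires human approval

# Hypothetical tool registry mapping each tool to its access tier.
TOOL_TIERS = {
    "search_docs": Tier.READ,
    "create_ticket": Tier.WRITE,
    "delete_account": Tier.DESTRUCTIVE,
}

def needs_human_approval(tool: str) -> bool:
    """Unknown tools default to the most restrictive tier (fail closed)."""
    return TOOL_TIERS.get(tool, Tier.DESTRUCTIVE) is Tier.DESTRUCTIVE
```

Defaulting unregistered tools to DESTRUCTIVE means a typo or a newly added tool cannot silently bypass approval.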

Critical Rules

  1. Start with one agent — Multi-agent adds coordination overhead. Prove single-agent insufficient first.
  2. Define escalation triggers — Angry users, legal mentions, confidence drops, repeated failures → human
  3. Separate read from write tools — Read tools need less approval than write tools
  4. Log everything — Tool calls, decisions, user interactions. You'll need the audit trail.
  5. Test adversarially — Assume users will try to break or manipulate the agent
  6. Budget by task type — Use cheaper models for simple tasks, expensive for complex
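Rule 2's escalation triggers can be sketched as a single guard function. The trigger phrases and thresholds below are illustrative assumptions, not values the skill prescribes:

```python
def should_escalate(message: str, confidence: float, failures: int) -> bool:
    """Return True when a human must step in: angry users,
    legal mentions, low model confidence, or repeated failures."""
    text = message.lower()
    anger = any(w in text for w in ("furious", "unacceptable", "angry"))
    legal = any(w in text for w in ("lawyer", "lawsuit", "legal action"))
    return anger or legal or confidence < 0.5 or failures >= 3
```

In a real system the keyword lists would come from configuration and the confidence signal from the model or an evaluator, but the shape stays the same: a cheap, auditable predicate checked before every reply.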

The Agent Loop (Mental Model)

OBSERVE → THINK → ACT → OBSERVE → ...

Every agent is this loop. The differences are:

  • What it observes (context window, memory, tool results)
  • How it thinks (direct, chain-of-thought, planning)
  • What it can act on (tools, APIs, communication channels)
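The loop above can be sketched in a few lines of Python. All three callables are hypothetical stand-ins for real model and tool calls, and the "DONE" termination signal is an assumed convention:

```python
from typing import Callable

def run_agent(observe: Callable[[], str],
              think: Callable[[str], str],
              act: Callable[[str], None],
              max_steps: int = 5) -> int:
    """Minimal OBSERVE → THINK → ACT loop with a step budget.

    Returns the number of steps actually taken. The step budget is
    the simplest form of the cost ceiling from the checklist above.
    """
    steps = 0
    for _ in range(max_steps):
        observation = observe()
        if observation == "DONE":  # assumed termination convention
            break
        action = think(observation)
        act(action)
        steps += 1
    return steps
```

Swapping what `observe`, `think`, and `act` do is exactly where the differences listed above live: richer context, chain-of-thought planning, or broader tool access all slot into this same loop.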

Files

7 total
