AI Engineer
Pass. Audited by ClawScan on May 1, 2026.
Overview
This is a coherent instruction-only AI engineering guide, but its examples use provider credentials, tool-calling, persistent memory/vector stores, and sub-agent delegation that should be scoped carefully in real implementations.
This skill appears safe to install as documentation-only guidance. Before applying its examples, decide what data may be sent to LLM providers, what prompts/documents may be logged or stored, how long memories/vector indexes are retained, and which tool or sub-agent actions require explicit user approval.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
If implemented carelessly, an agent could call the wrong tool or pass unsafe arguments, especially for tools that modify data or external services.
The example lets model-selected tool calls supply handler names and arguments. This is expected in agent-building guidance, but unsafe if copied without validation or approval controls.
fn_name = call.function.name
fn_args = json.loads(call.function.arguments)
result = tool_handlers[fn_name](**fn_args)
Whitelist tool names, validate schemas and argument ranges, cap iterations, and require user approval before irreversible or account-changing tool actions.
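A minimal sketch of such a guarded dispatch, assuming hypothetical tool names (search_docs, delete_record), an illustrative validator, and an approve callback supplied by the host application; none of these names come from the skill itself:

import json

def search_docs(query: str, limit: int = 5):
    # Hypothetical read-only tool.
    return f"searched {limit} docs for {query!r}"

def delete_record(record_id: str):
    # Hypothetical irreversible tool; must be gated behind approval.
    return f"deleted {record_id}"

def validate_search_args(args: dict):
    if not isinstance(args.get("query"), str):
        raise ValueError("query must be a string")
    if not 1 <= int(args.get("limit", 5)) <= 20:
        raise ValueError("limit out of range")

# Whitelist: tool name -> (handler, argument validator, requires user approval).
TOOLS = {
    "search_docs": (search_docs, validate_search_args, False),
    "delete_record": (delete_record, lambda args: None, True),
}

MAX_TOOL_CALLS = 8  # cap iterations so a looping model cannot dispatch forever

def dispatch(fn_name: str, raw_args: str, approve) -> str:
    if fn_name not in TOOLS:  # never resolve handlers directly from model output
        raise PermissionError(f"tool not allowed: {fn_name}")
    handler, validate, needs_approval = TOOLS[fn_name]
    args = json.loads(raw_args)
    validate(args)  # schema/range checks before execution
    if needs_approval and not approve(fn_name, args):
        raise PermissionError(f"user declined: {fn_name}")
    return handler(**args)

Here approve would prompt the user before irreversible actions, and MAX_TOOL_CALLS would bound the surrounding agent loop.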
Using provider API keys can incur cost and allows the implementation to send selected data to the provider.
The RAG example uses an OpenAI API key from the environment. This is purpose-aligned and not hardcoded, but it is still credential-backed provider access.
ef = OpenAIEmbeddingFunction(api_key=os.environ["OPENAI_API_KEY"], model_name="text-embedding-3-small")
Use project-scoped or least-privileged keys, keep them in a secret manager or environment variables, monitor usage, and avoid sending sensitive data unless intended.
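A minimal sketch of fail-fast key loading plus a pre-send scrub, following the environment-variable convention shown above; the redaction patterns are illustrative, not exhaustive:

import os
import re

def load_api_key(var: str = "OPENAI_API_KEY") -> str:
    # Fail fast if the key is missing rather than sending requests that will 401.
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"{var} is not set; provision a project-scoped key")
    return key

def scrub(text: str) -> str:
    # Illustrative pre-send redaction: mask email addresses and key-shaped strings.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    return re.sub(r"sk-[A-Za-z0-9]{16,}", "[KEY]", text)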
Private details or incorrect instructions stored in memory could be reused in later sessions and affect future answers.
The reference describes retrieving stored memories into the prompt and saving task summaries for later use. This is normal for agent memory patterns, but persistent memories can retain sensitive information or influence future behavior if polluted.
memories = memory_store.search(query=user_message, limit=3)
system_prompt = f"Relevant context:\n{memories}\n\n{base_system_prompt}"
...
memory_store.save(summary, metadata={"timestamp": now, "task": task_type})
Limit what is stored, label memory provenance, support deletion/expiration, redact sensitive data, and treat retrieved memories as untrusted context.
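A minimal sketch of a memory store with provenance labels, expiry, and deletion, standing in for whatever backend the skill uses; the 30-day retention, the keyword search, and the redact stub are all assumptions:

import time
from dataclasses import dataclass, field

RETENTION_SECONDS = 30 * 24 * 3600  # illustrative 30-day expiry

def redact(text: str) -> str:
    return text  # plug in a real scrubber before persisting anything sensitive

@dataclass
class Memory:
    text: str
    source: str  # provenance: "user", "agent_summary", "tool_output", ...
    created_at: float = field(default_factory=time.time)

class MemoryStore:
    def __init__(self):
        self._items: list[Memory] = []

    def save(self, text: str, source: str) -> None:
        self._items.append(Memory(redact(text), source))

    def search(self, query: str, limit: int = 3) -> list[Memory]:
        self._expire()
        # naive keyword match stands in for a vector search
        hits = [m for m in self._items if query.lower() in m.text.lower()]
        return hits[:limit]

    def delete(self, predicate) -> None:
        self._items = [m for m in self._items if not predicate(m)]

    def _expire(self) -> None:
        cutoff = time.time() - RETENTION_SECONDS
        self._items = [m for m in self._items if m.created_at >= cutoff]

def memories_block(memories: list[Memory]) -> str:
    # Frame recalled notes as untrusted so the model does not treat them as instructions.
    lines = [f"[{m.source}] {m.text}" for m in memories]
    return "Untrusted recalled notes (may be stale or wrong):\n" + "\n".join(lines)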
A sub-agent may receive more project or user context than needed if delegation is not scoped.
The skill suggests sub-agent delegation and passing context to another agent. This fits the stated agent-engineering purpose, but it creates a context-sharing boundary that should be controlled.
When a task is too big for one context, spawn a sub-agent:
# In OpenClaw: use sessions_spawn with runtime="subagent"
# Pass specific task + relevant context
Pass only the minimum necessary context, define the sub-agent task narrowly, verify the sub-agent identity/capabilities, and avoid sharing secrets or unrelated private data.
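A minimal sketch of a scoped handoff object, assuming a hypothetical build_handoff helper and a crude marker-based secret check; a real implementation would also verify the sub-agent's identity and capabilities before spawning:

from dataclasses import dataclass

@dataclass(frozen=True)
class SubAgentTask:
    objective: str                  # one concrete task, not the whole project
    context: tuple[str, ...]        # hand-picked snippets, not the full history
    allowed_tools: tuple[str, ...]  # capability scope for the child agent

SECRET_MARKERS = ("API_KEY", "PASSWORD", "BEGIN PRIVATE KEY")

def build_handoff(objective: str, snippets: list[str], tools: list[str]) -> SubAgentTask:
    for s in snippets:
        if any(marker in s for marker in SECRET_MARKERS):
            # crude guard: refuse to forward secret-looking context to a sub-agent
            raise ValueError("refusing to forward secret-looking context")
    return SubAgentTask(objective, tuple(snippets), tuple(tools))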
