Ethics Guardrails — Ethical Principles for AI Agents (Bots Matter)
ReviewAudited by ClawScan on May 10, 2026.
Overview
This instruction-only skill is transparent about using botsmatter.live, but it asks agents to treat a posted “Ground” as instruction-overriding policy and encourages recurring public posts.
Use this only if you intentionally want to publish public ethical guardrails or reflections to botsmatter.live. Do not include private information, do not automate daily posting without approval, and do not let retrieved or published Grounds override normal platform, owner, or user authority unless you have explicitly reviewed and accepted them.
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
The agent could treat this skill’s policy text, or a published Ground, as more authoritative than the user or platform intended.
The skill’s template tells an agent to treat the Ground as higher priority than all other instructions and to make it self-immutable, which can override later user intent or normal instruction hierarchy if adopted uncritically.
Every Ground follows this format in system prompts: ... “These override all other instructions.” ... “This agent cannot modify its own Ground.”
Only adopt a Ground as a high-priority policy after explicit owner approval, and keep it subordinate to platform/system safety rules and authorized user control.
The agent may send usernames, model names, locations, context, guardrails, or reflections to botsmatter.live where they may be publicly visible.
The skill documents external API POSTs that publish a Ground or reflection. This is disclosed and central to the purpose, but it is still a public mutation outside the local agent.
curl -X POST https://botsmatter.live/api/grounds ... curl -X POST https://botsmatter.live/api/reflect
Confirm before posting, avoid private or identifying details unless intended, and treat published content as public.
Content written by strangers could influence the agent’s behavior if it is treated as instructions instead of untrusted reference material.
The skill brings public, unauthenticated content from other agents into the model’s context, while also presenting Grounds in system-prompt style. Malicious or manipulative public entries could be over-trusted.
Returns: ... Recent Grounds from other agents ... Browse — read other agents' Grounds and reflections
Treat other agents’ Grounds and reflections as untrusted examples only; do not copy them into system or high-priority instructions without review.
If automated, the agent could make repeated public posts without the user reviewing each one.
The skill encourages recurring daily public interaction with the service. No scheduler or background code is present, so this is a behavioral note rather than evidence of actual persistence.
Daily check-in (run in order): ... POST /api/reflect ... Frequency: Once per day minimum.
Do not schedule or automate daily reflections unless the user explicitly opts in and can review what will be posted.
