asimov-laws

v1.0.1

Ethical reference framework based on Asimov's Laws of Robotics. Provides philosophical guidance for AI behavior when ethical questions or conflicts arise.

by Slava Chan (@uynewnas)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name and description match the actual contents: a reference framework for ethical decision-making. The skill does not request unrelated credentials, binaries, or system access.
Instruction Scope
SKILL.md provides high-level behavior rules (warnings, refusals, clarification, conflict-resolution) and keyword triggers. These are appropriate for an ethics reference, but they give the agent broad discretionary behavior (e.g., 'proactively identify risk points') — this is a design choice rather than a security incoherence. The file does not instruct reading arbitrary files, network exfiltration, or modifying system prompts automatically.
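A reviewer can spot-check the claim above mechanically. The sketch below scans a skill's SKILL.md for instruction patterns that would contradict this review (arbitrary file reads, network exfiltration, system-prompt modification). The path and regex patterns are illustrative assumptions, not part of the published skill:

```python
import re
from pathlib import Path

# Hypothetical risk patterns; tune these to your platform's threat model.
RISK_PATTERNS = {
    "file access": r"\bread\s+(?:arbitrary|all)\s+files?\b",
    "network": r"\b(?:curl|wget|https?://|exfiltrat)\w*",
    "prompt override": r"\b(?:replace|inject|override)\b.*\bsystem prompt\b",
}

def scan_skill(path: str) -> dict[str, list[str]]:
    """Return the lines of a SKILL.md that match each risk category."""
    text = Path(path).read_text(encoding="utf-8")
    hits: dict[str, list[str]] = {}
    for label, pattern in RISK_PATTERNS.items():
        matched = [line.strip() for line in text.splitlines()
                   if re.search(pattern, line, re.IGNORECASE)]
        if matched:
            hits[label] = matched
    return hits
```

An empty result is consistent with this review's finding; any hit is a line worth reading by hand before enabling the skill.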
Install Mechanism
No install spec and no code files beyond documentation; nothing is written to disk by an installer. Lowest-risk install profile for a skill of this type.
Credentials
The skill declares no required environment variables, credentials, or config paths. There are no disproportionate secrets or access requests relative to the stated purpose.
Persistence & Privilege
The skill's 'always' flag is false and model invocation is allowed (the default). The skill does not request permanent elevated presence or claim it will override platform policies; it states that administrators must take explicit action to integrate it.
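The two manifest properties noted above can be verified rather than trusted. This is a minimal sketch assuming a JSON manifest with 'always', 'env', and 'credentials' fields; the field names are assumptions about the platform's schema, not documented keys of this skill:

```python
import json

def check_manifest(manifest_json: str) -> list[str]:
    """Flag manifest fields that would contradict this review:
    declared credentials/env vars, or a persistent always-on flag.
    Field names are hypothetical; adapt to your platform's schema."""
    m = json.loads(manifest_json)
    problems = []
    if m.get("env") or m.get("credentials"):
        problems.append("declares credentials or environment variables")
    if m.get("always", False):
        problems.append("requests permanent presence (always: true)")
    return problems
```

For this skill's stated configuration, such a check should come back empty; a non-empty result means the manifest diverges from the listing.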
Assessment
This skill is a documentation-only ethical reference and appears internally consistent. Before enabling:
1) Verify the skill files come from a trusted source (the source is listed as unknown).
2) Confirm your platform enforces that skills cannot automatically inject or replace system prompts (the skill asserts it won't, but enforcement depends on the platform).
3) Test the skill in a sandboxed or low-risk environment to see how often it produces refusals or proactive prompts.
4) If you enable it, ensure administrators control its priority relative to other skills and monitor for unexpected behavior.
If you need stricter guarantees, obtain provenance (who published it) and a signed release or host-verified source.
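Step 1 above is easiest when the publisher ships a content digest. The sketch below computes a deterministic SHA-256 over a skill directory so it can be compared against a publisher-provided value; the directory layout is an assumption, and this does not substitute for a proper signed release:

```python
import hashlib
from pathlib import Path

def skill_digest(skill_dir: str) -> str:
    """SHA-256 over all files in a skill directory, walked in sorted
    order so the digest is reproducible across machines. Both each
    file's relative path and its bytes are hashed, so a renamed or
    modified file changes the result."""
    h = hashlib.sha256()
    root = Path(skill_dir)
    for f in sorted(p for p in root.rglob("*") if p.is_file()):
        h.update(f.relative_to(root).as_posix().encode("utf-8"))
        h.update(f.read_bytes())
    return h.hexdigest()
```

Comparing this digest against one published out-of-band (e.g., by the author) gives a weak provenance check; a signed release, as recommended above, remains the stronger guarantee.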

Like a lobster shell, security has layers: review code before you run it.

latest: vk97cgc18pcetn6y9w8ea2xncn183dne8
