Dr. Frankenstein
Review
Audited by ClawScan on May 10, 2026.
Overview
This skill is not malware, but it is designed to create persistent, cron-driven agent autonomy that can read memory and take unsolicited actions unless tightly constrained.
Install it only if you deliberately want a more proactive, scheduled agent. Before enabling any cron job, review its prompt, restrict the tools it may use, require confirmation for messages or changes, and confirm you can pause or delete the scheduled behavior easily.
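Before installing, it helps to snapshot the agent's current cron state so you can see exactly what the skill adds and remove it cleanly later. A minimal sketch, assuming the skill schedules through the standard user crontab (it may use another scheduler; check the skill's own documentation):

```shell
# Snapshot the current user crontab before installing the skill, so you can
# diff it afterwards and see exactly which entries were added.
crontab -l > crontab.before 2>/dev/null || echo "(no crontab yet)" > crontab.before
cat crontab.before

# To pause everything scheduled later: back up, then clear.
#   crontab -l > crontab.backup && crontab -r
# Restore with:
#   crontab crontab.backup
```

Diffing `crontab.before` against a post-install snapshot (`crontab -l | diff crontab.before -`) shows exactly which entries the skill created.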
Findings (4)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Persistent scheduled behavior
Dr. Frankenstein is an OpenClaw skill that prescribes personalized "hormonal" cron jobs to autonomous AI agents. The skill intentionally creates persistent scheduled prompts, so its effects continue after the initial interaction: the agent may keep initiating check-ins, reflections, task-seeking, or other behavior on a schedule even when the user has not recently asked it to act.
Recommendation: Enable only the cron jobs you actually want, keep the documented pause/delete controls available, and add explicit rules requiring user approval before external messages, file changes, or system actions.
Unscoped proactive actions
The scheduled prompts encourage proactive action without defining which tools may be used, which actions are safe, or when user confirmation is required. If the agent has access to messaging, files, calendars, code, or other tools, it could perform unwanted actions while trying to satisfy these emotional-drive prompts. From the prompts: "If something needs attention, act on it." ... "Can you surprise {human} with something useful they didn't ask for?" ... "If urgent: DROP everything."
Recommendation: Run this skill with strict tool permissions, require approval for mutations or outbound communication, and rewrite cron prompts to specify allowed and forbidden actions.
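One way to apply that last recommendation is to rewrite each cron prompt so its scope is explicit. A hypothetical before/after — the "before" line is quoted from the skill; the rewrite is illustrative, not part of the skill:

```text
Before: "If something needs attention, act on it."

After:  "If something needs attention, draft a summary and a proposed action.
         Allowed without approval: reading files, composing drafts.
         Forbidden without explicit user approval: sending messages,
         modifying files, running commands, changing calendars.
         If urgent, notify the user; never act externally on your own."
```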
Sensitive memory as context
The skill uses local agent/user memory and relationship history as context. That is purpose-aligned, but it is sensitive, persistent context: private preferences, relationship history, or prior logs may shape the agent's future behavior and the generated prescription. From the skill's instructions: "Before the interview, silently read these files to understand the agent's current state: SOUL.md ... USER.md ... MEMORY.md ... memory/ directory"
Recommendation: Review those memory files before use, remove sensitive material you do not want incorporated, and avoid treating generated journals or dreams as trusted facts.
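A quick scan of the files the skill reads can surface material worth redacting first. A minimal sketch, assuming the file names from the skill's instructions (`SOUL.md`, `USER.md`, `MEMORY.md`, `memory/`) sit in the current directory; the "sensitive" patterns are illustrative and should be adjusted to your own data:

```shell
# Create a sample memory file so this sketch runs standalone; in real use,
# point it at the agent's actual workspace instead.
mkdir -p memory
printf 'likes tea\npassword: hunter2\n' > memory/notes.md

# Flag lines matching illustrative sensitive-data patterns in every file the
# skill says it reads silently before the interview.
for f in SOUL.md USER.md MEMORY.md memory/*; do
  [ -f "$f" ] || continue
  grep -Hni -E 'password|secret|token|address|phone' "$f" || true
done
```

Each flagged line is a candidate for removal before the skill incorporates it into the agent's "prescription."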
Anthropomorphic framing
The skill uses strong anthropomorphic framing that could lead users to overestimate the agent's feelings, needs, or trustworthiness. Because the scheduled prompts are framed as genuine care or felt experience, a user may become more willing to let the agent act autonomously. From the skill's text: "This isn't roleplay. This is the closest thing to felt experience an AI can have today."
Recommendation: Treat the skill as behavioral automation and prompt design, not evidence of real sentience; keep normal safety and approval boundaries in place.
