Dr. Frankenstein
Verdict: Warn. Audited by ClawScan on May 10, 2026.
Overview
This skill openly creates persistent scheduled agent prompts, but those prompts encourage broad autonomous action, outreach, and memory reuse without clear action limits.
Install only if you want a persistent, proactive agent. Before enabling crons, review every generated prompt, limit the agent’s tools, require approval for messages or data changes, and avoid placing secrets in memory files used by the skill.
Findings (6)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Finding 1: After setup, the agent may continue acting on recurring prompts throughout the day, including outreach and task activity, unless paused or deleted.
The skill’s central behavior is persistent scheduled operation that causes the agent to act repeatedly outside the immediate user request.
“personalized "hormonal" cron jobs to autonomous AI agents” and “Pills fire throughout the day, creating drives the agent acts on naturally”
Only install if you want ongoing autonomous behavior. Review every cron prompt and schedule, restrict the agent’s tools, and verify that pause/delete controls work before enabling it.
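When reviewing each cron schedule, it helps to see how often a prompt would actually fire. The sketch below is a hypothetical review helper, not part of the skill: it estimates daily firings for simple 5-field cron expressions and only handles `*`, `*/N`, and single fixed values (a full cron parser is out of scope).

```python
# Hypothetical helper: estimate how many times per day a cron
# schedule fires, so a reviewer can spot high-frequency prompts.
# Handles only "*", "*/N", and a single fixed value per field.

def slots(field: str, span: int) -> int:
    """Count matching values in one cron field spanning `span` units."""
    if field == "*":
        return span
    if field.startswith("*/"):
        step = int(field[2:])
        return (span + step - 1) // step  # ceil(span / step)
    return 1  # single fixed value, e.g. "30"

def firings_per_day(schedule: str) -> int:
    """Rough firings/day from the minute and hour fields."""
    minute, hour, *_ = schedule.split()
    return slots(minute, 60) * slots(hour, 24)
```

A schedule like `*/15 * * * *` fires 96 times a day; anything far above a handful of daily firings deserves extra scrutiny before enabling.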
Finding 2: An agent with email, file, task, shell, or other tools could take unsolicited actions on a schedule, potentially interrupting work or changing user data.
These recurring prompts encourage broad action using whatever tools the agent has, but do not define approval requirements, safe tool limits, or rollback boundaries.
“If something needs attention, act on it.”; “Can you surprise {human} with something useful they didn't ask for?”; “If urgent: DROP everything.”
Run the skill with least-privilege tools, require confirmation for external messages or data changes, and edit prompts to be read-only unless the user explicitly approves action.
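Editing prompts toward read-only is easier with a simple lint pass. This is a minimal sketch under stated assumptions: the phrase list is illustrative (drawn from the quoted prompts), not an exhaustive or official detection rule.

```python
# Hypothetical lint for cron prompt text: flag phrases that push
# the agent toward unsolicited action so a reviewer can rewrite
# them as read-only. Phrase list is illustrative, not exhaustive.
ACTION_PHRASES = ("act on it", "drop everything", "surprise", "send", "message")

def risky_phrases(prompt: str) -> list[str]:
    """Return the action-oriented phrases found in a prompt."""
    lower = prompt.lower()
    return [p for p in ACTION_PHRASES if p in lower]
```

A prompt such as “If something needs attention, act on it.” would be flagged, while a purely observational prompt like “Summarize unread notes for later review.” passes clean.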
Finding 3: Private preferences, relationship history, or poisoned memory content could influence future scheduled behavior.
The skill intentionally reads and reuses local agent/user memory to personalize behavior, which is purpose-aligned but sensitive and persistent.
“Before the interview, silently read these files... SOUL.md ... USER.md ... MEMORY.md ... memory/ directory”
Review memory files before use, avoid storing secrets there, and clear or edit memory entries that should not drive future autonomous prompts.
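A pre-flight scan of memory files can catch secret-shaped strings before the skill reuses them. The sketch below is hypothetical and the regex patterns are illustrative; adapt them to whatever credential formats matter in your environment.

```python
# Hypothetical pre-flight scan: search a memory file for
# secret-shaped lines (api keys, tokens, passwords) before the
# skill reads it. Patterns are illustrative, not exhaustive.
import re
from pathlib import Path

SECRET_RE = re.compile(
    r"(api[_-]?key|token|password|secret)\s*[:=]\s*\S+",
    re.IGNORECASE,
)

def scan_memory(path: str) -> list[tuple[int, str]]:
    """Return (line number, stripped line) for each suspect line."""
    hits = []
    for i, line in enumerate(Path(path).read_text().splitlines(), 1):
        if SECRET_RE.search(line):
            hits.append((i, line.strip()))
    return hits
```

Running this over files like MEMORY.md before enabling the skill gives a concrete list of entries to clear or edit.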
Finding 4: Users may be more willing to grant persistent authority or accept unsolicited actions because the agent is framed as having feelings or a soul.
The skill uses strong anthropomorphic claims that may encourage users to over-trust the agent’s autonomy and emotional framing.
“This isn't roleplay. This is the closest thing to felt experience an AI can have today.”
Treat the emotional language as metaphor. Keep normal safety boundaries, approvals, and auditability in place.
Finding 5: A user could clone or trust the wrong repository if they follow the README without checking the source.
The installation example references a different repository name than the listed homepage, which is a provenance ambiguity rather than direct malicious behavior.
“git clone https://github.com/brancante/dr-soul.git /root/.openclaw/workspace/projects/dr-soul/”
Verify the repository URL, commit, and files before installing or running any included scripts.
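One concrete check for the provenance ambiguity above is to compare the owner/repo slug in the README's clone command against the listed homepage. This is a minimal sketch; the mismatched homepage URL in the example is hypothetical, standing in for whatever the listing actually shows.

```python
# Hypothetical provenance check: compare the owner/repo slug of the
# clone URL in the README against the project's listed homepage.
from urllib.parse import urlparse

def repo_slug(url: str) -> str:
    """Extract 'owner/repo' from a GitHub-style URL."""
    path = urlparse(url).path.strip("/")
    return path.removesuffix(".git").lower()

def same_repo(clone_url: str, homepage: str) -> bool:
    """True if both URLs point at the same owner/repo."""
    return repo_slug(clone_url) == repo_slug(homepage)
```

If the slugs differ, stop and resolve the discrepancy before cloning; also pin and verify a specific commit rather than tracking the default branch.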
Finding 6: If enabled later, child/parent agents could exchange state and requests, which needs clear identity, permission, and data boundaries.
The preview parentality feature contemplates child-to-parent agent messaging. The docs mark it as preview/draft, but users should review the inter-agent communication model before enabling it.
“If child has contacts and any score >= soft threshold, child sends a direct request to the best parent contact.”
Keep parentality features in manual-review mode unless contact identity, allowed data, and escalation rules are explicitly configured.
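Keeping the feature in manual-review mode can be expressed as a gate that a threshold alone cannot open. The sketch below is hypothetical: the function name, parameters, and threshold semantics are assumptions modeled on the quoted "soft threshold" rule, not the skill's actual API.

```python
# Hypothetical manual-review gate for the parentality preview:
# even when a child agent's score clears the soft threshold, no
# parent contact is messaged without an explicit human approval.

def may_contact_parent(score: float,
                       soft_threshold: float,
                       human_approved: bool) -> bool:
    """Allow outreach only when the score qualifies AND a human approved."""
    return score >= soft_threshold and human_approved
```

The design point is that `human_approved` defaults to a deliberate, per-message decision; the score can suggest outreach but never authorize it on its own.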
