Safe Long-Run Mode GPT5.4
Analysis
This is a coherent, instruction-only workflow for safer long-running GPT-5.4 tasks, with standard cautions around checkpoints, subagents, and external services.
Findings (3)
This is an artifact-based, informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Checks for instructions or behavior that redirect the agent, misuse tools, execute unexpected code, cascade across systems, exploit user trust, or continue outside the intended task.
Be careful with: Azure / Microsoft Graph; ClawHub / GitHub-backed operations; Orgo runtime and VM usage; websites / browser automation; messaging providers
The workflow anticipates use of external systems where existing agent permissions could cause real changes, although it frames this as cautionary and does not add new tools or credentials.
Checks for exposed credentials, poisoned memory or context, unclear communication boundaries, or sensitive data that could leave the user's control.
Always leave artifacts that make recovery easy:
- notes
- drafts
- partial outputs
- checkpoint files
- project updates
- result summaries
The skill intentionally creates persistent recovery artifacts, which is useful for long tasks but can retain sensitive task details if users do not scope what is saved.
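The scoping concern above can be illustrated with a minimal sketch. Nothing here comes from the skill itself — `SAFE_FIELDS` and `write_checkpoint` are hypothetical names — but it shows one way a user could whitelist what a recovery checkpoint retains so sensitive task details never persist:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical sketch: persist only whitelisted progress fields in a
# recovery checkpoint, so sensitive task details are never written out.
SAFE_FIELDS = {"task_id", "step", "completed", "next_action"}

def write_checkpoint(state: dict, path: Path) -> dict:
    """Keep only scoped fields; silently drop everything else."""
    scoped = {k: v for k, v in state.items() if k in SAFE_FIELDS}
    path.write_text(json.dumps(scoped, indent=2))
    return scoped

state = {
    "task_id": "run-42",
    "step": 3,
    "completed": ["draft outline", "collect sources"],
    "next_action": "write summary",
    "api_token": "secret",  # sensitive: must not reach the artifact
}

path = Path(tempfile.mkdtemp()) / "checkpoint.json"
saved = write_checkpoint(state, path)
```

A resumed run can then reload the checkpoint with `json.loads(path.read_text())` and pick up at `next_action` without ever having stored the token.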
Use subagents as workers ... Delegate when: tasks are independent; multiple files or systems are involved; work may take a while
The skill encourages delegation to subagents for long or parallel work, which can propagate task context across agents, even though no new agent channel or credential is introduced.
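The context-sharing concern in this finding can be mitigated by scoping what each worker receives. The sketch below is an assumption, not part of the skill: `run_subagent` is a hypothetical stand-in for a real delegation call, and the point is that each independent task gets only the context slice it needs:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch of scoped delegation: each independent task is
# handed only the context keys it needs, limiting cross-agent spread.
def run_subagent(task: str, context: dict) -> str:
    # Stand-in for a real subagent call; here it just reports its inputs.
    return f"{task}: done with {sorted(context)}"

full_context = {
    "repo": "example/project",
    "draft": "draft text",
    "credentials": "secret",  # never forwarded to workers
}

# Each task declares the minimal set of context keys it depends on.
tasks = {
    "lint": {"repo"},
    "summarize": {"draft"},
}

def scoped(keys):
    return {k: full_context[k] for k in keys}

with ThreadPoolExecutor() as pool:
    results = list(pool.map(
        lambda item: run_subagent(item[0], scoped(item[1])),
        tasks.items(),
    ))
```

The parallel structure matches the skill's "tasks are independent" criterion: workers never see `credentials`, so a compromised or confused subagent cannot leak what it was never given.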
