Idle Reward Optimizer

v1.0.0

Design low-friction idle, light-interaction, and micro-progress actions for fragmented or low-energy time while protecting recovery. Use when the user wants...

by haidong (@harrylabsj)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for harrylabsj/idle-reward-optimizer.

Prompt preview: Install & Setup
Install the skill "Idle Reward Optimizer" (harrylabsj/idle-reward-optimizer) from ClawHub.
Skill page: https://clawhub.ai/harrylabsj/idle-reward-optimizer
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install idle-reward-optimizer

ClawHub CLI


npx clawhub@latest install idle-reward-optimizer
Security Scan
VirusTotal: Benign (view report)
OpenClaw: Benign (high confidence)
Purpose & Capability
The name/description (design low-friction idle actions) matches the SKILL.md and the handler code: the handler reads SKILL.md, formats a guidance card, and returns text. The only minor mismatch is the metadata's claim that the skill is 'instruction-only' while the repository includes handler.py and tests; those files implement a harmless renderer for the instruction content and are coherent with the stated purpose.
Instruction Scope
SKILL.md instructs the agent to collect only user-provided context (fragmented time, energy, tasks) and to produce textual action packs. It explicitly states it does not create reminders/automations. The handler implementation only reads its local SKILL.md and the provided user input; it does not access other files, env vars, or network endpoints.
Install Mechanism
There is no install spec and no downloaded code at runtime. The skill ships with local Python files (handler and tests) which are readable and self-contained. No external package installs, URLs, or archive extraction are present.
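For orientation, a self-contained handler of this shape usually amounts to a few lines like the sketch below; the function name, signature, and file layout are assumptions for illustration, not the actual harrylabsj code.

# sketch only - not the shipped handler.py
from pathlib import Path

def handle(user_input: str) -> str:
    """Read the local SKILL.md and return a formatted guidance card (text only)."""
    skill_md = Path(__file__).with_name("SKILL.md").read_text(encoding="utf-8")
    # No network calls, environment reads, or file writes: a local read plus string formatting.
    return f"Idle Reward Optimizer\n\nYour context: {user_input}\n\nGuidance:\n{skill_md}"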
Credentials
The skill declares no required environment variables, credentials, or config paths. The handler does not read environment variables or request secrets, which is proportionate to a purely textual guidance skill.
Persistence & Privilege
The skill does not request always:true and is user-invocable only; it does not modify other skills or system configuration. Autonomous invocation is allowed by platform default, but the skill's limited scope and lack of credential access keep its privilege low.
Assessment
This skill appears to be a harmless, text-only guidance helper: it only reads its own SKILL.md and user input and returns a formatted plan. Before installing, you may (1) review handler.py to confirm no network calls have been added, (2) run the included tests locally before executing any third-party code, and (3) note the small mismatch that the metadata says 'instruction-only' even though Python files are present; this is not dangerous but is worth being aware of. If you want to be extra cautious, run the skill in an environment without sensitive credentials available (it doesn't need any).
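If you do want to run the bundled tests first, the usual entry points are below; the exact test layout in this repository is an assumption, so adjust the path if needed.

python -m pytest                 # if pytest is installed
python -m unittest discover      # standard-library alternative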

Like a lobster shell, security has layers — review code before you run it.

latest: vk973ts5g2gkfynzpr6dwanfjrd84x256
74 downloads · 0 stars · 1 version
Updated 1w ago
v1.0.0
MIT-0

Idle Reward Optimizer

Chinese name: 挂机收益优化 (idle/AFK reward optimization)

Purpose

Help the user turn fragmented or low-energy windows into gentle progress loops without stealing recovery. This skill is descriptive only. It does not create reminders, automations, or time-tracking systems.

Use this skill when

  • The user keeps losing small pockets of time to mindless scrolling.
  • The user wants useful actions for waiting, commuting, transitions, or recovery periods.
  • The user has low energy and needs options lighter than full-focus work.
  • The user wants a repeatable “idle reward” system that feels kind instead of punishing.

Inputs to collect

  • Fragmented time windows and their usual length.
  • Low-energy periods, common locations, and interruption level.
  • Tasks or themes that benefit from tiny amounts of progress.
  • Recovery needs, boundaries, and times that should stay empty.

Workflow

  1. Map the user’s fragmented windows, low-energy zones, and common waiting scenes.
  2. Sort candidate actions into idle, light interaction, micro-progress, and maintenance buckets.
  3. Match each scene with one low-friction action pack that fits the real energy cost (see the sketch after this list).
  4. Add reuse rules so the user can repeat the pack without re-deciding every time.
  5. End with leave-blank rules for windows that should stay restful.
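
The skill itself stays text-only, but if you want to keep the resulting scene-to-pack map in a reusable form, a small sketch like the one below captures the idea; every scene name and field here is an illustrative assumption, not output the skill produces.

# illustrative scene -> action-pack map; names and fields are assumptions
ACTION_PACKS = {
    "bus commute (10-20 min, low energy)": {
        "idle": ["listen to a saved podcast segment"],
        "micro_progress": ["clear five inbox items"],
        "leave_blank": "after 9 pm this window stays restful",
    },
    "waiting room (5-10 min, interrupted)": {
        "idle": ["breathing reset"],
        "micro_progress": ["jot one line in the project log"],
        "leave_blank": "skip entirely on high-stress days",
    },
}

def pack_for(scene: str) -> dict:
    # Reuse rule: the same scene always gets the same pack, so nothing has to be re-decided.
    return ACTION_PACKS.get(scene, {"idle": ["rest"], "micro_progress": [], "leave_blank": "default to rest"})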

Output format

  • Fragmented time map with scene, energy level, and safe action intensity.
  • Idle reward actions that need almost no thought.
  • Micro-progress actions that fit inside one to five minutes.
  • Leave-blank rules that protect rest and recovery.

Quality bar

  • Protect recovery first, instead of trying to monetize every spare minute.
  • Every suggested action must be genuinely light enough for the stated context.
  • Include at least one reusable action loop that can compound over time.
  • Keep the plan realistic for family life, commuting, or interruptions.

Edge cases and limits

  • If the user sounds depleted, prioritize restorative idle options before productivity ideas.
  • If time windows are highly unpredictable, use scene-based menus rather than fixed schedules.
  • Do not present this skill as a replacement for timers, trackers, or automation tools.

Compatibility notes

  • Can pair conceptually with game-inventory-manager and boss-fight-stamina-manager.
  • Works well for family life, commuting gaps, transition time, and recovery periods.
  • Text only, with no reminder or scheduling integration.
