Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Forever Healthy AI4L - AI for Practical Longevity

v0.1.0

AI4L - Enabling everyone to use AI to generate high-quality, evidence-based reviews of interventions aimed at optimizing health and longevity.

by Michael Greve (@epicoun)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for epicoun/ai4l.

Prompt Preview: Install & Setup
Install the skill "Forever Healthy AI4L - AI for Practical Longevity" (epicoun/ai4l) from ClawHub.
Skill page: https://clawhub.ai/epicoun/ai4l
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install ai4l

ClawHub CLI


npx clawhub@latest install ai4l
Security Scan

VirusTotal: Suspicious (view report)
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
Name and description (AI4L Evidence Review Toolkit) align with the instructions to create/audit evidence reviews and QA checklists. The skill requests no credentials or installs, which is proportionate. However, the hard-coded default topic ('Using Telmisartan to Improve Health and Longevity') is narrowly prescriptive for a general toolkit and may be unexpected. VERIFY's requirement to check and "fix" repository files (SKILL.md, README.md, etc.) suggests write access to project files — plausible for a toolkit but should be explicit and user-approved.
Instruction Scope
SKILL.md contains contradictory and vague directives: AI4L.md says 'No Sub-agents' for auditors while SUBAUDIT/FULL/ITERATE explicitly instruct launching sub-agents (Opus). VERIFY directs fixing target files but the header rule says 'Do not edit files outside ./results/ unless explicitly granted permission' — VERIFY's 'fix them' step would modify files outside ./results/ without obtaining that explicit permission. The audit process also repeatedly instructs 'write and run a script' to parse results, but no script is provided; that implies the agent may run arbitrary scripts in the environment. These inconsistencies create unclear authority and potential for unexpected file edits or command execution.
Install Mechanism
Instruction-only skill with no install spec and no code files — lowest install risk. Nothing is downloaded or written by an installer step.
Credentials
The skill declares no required environment variables, credentials, or config paths. There are no apparent requests for secrets or external service keys in the instructions.
Persistence & Privilege
always is false and the skill does not request system-wide config or persistent presence. However, it instructs autonomous actions (launching sub-agents, running scripts) which is platform-default behavior; combined with the instruction ambiguities above, this could increase risk if the agent is allowed to run commands or modify repository files without explicit user confirmation.
What to consider before installing
This skill is broadly coherent with an evidence-review toolkit but contains conflicting and vague runtime rules that could cause it to modify files or run scripts without explicit, clear consent. Before installing: (1) Confirm whether you want the skill to be allowed to edit repository files outside ./results/ — VERIFY explicitly says to "fix" files but also says not to edit files outside ./results/ unless the user grants permission. (2) Decide whether you allow the agent to launch sub-agents (the skill requests Opus) and to run parsing scripts — if not, deny those capabilities or sandbox the skill. (3) Note the default topic is a prescription drug (telmisartan); if you don't want medical/drug-specific defaults, request the author change it. (4) Ask the skill author to resolve contradictions (no-sub-agent vs. SUBAUDIT) and to supply or describe any scripts the skill expects to run. If you proceed, run the skill in a restricted/sandboxed environment and require explicit confirmation before any edits outside ./results/ or before executing any code.


development: vk970pg7mbh6wbbarcmxcktrhah837nc1
latest: vk970pg7mbh6wbbarcmxcktrhah837nc1
220 downloads
0 stars
2 versions
Updated 23h ago
v0.1.0
MIT-0

Copyright (c) 2026 Forever Healthy Foundation

AI4L Evidence Review Toolkit

Version: 2026.03.19.1

This skill handles all Evidence Review workflows.

Key Files

  • AI4L.md — The QA audit checklist for ERs

General Rules

  • Parse the user's input to determine which command to execute

  • Note the start time (HH:MM:SS) when beginning any command, and report the time taken when done

  • All generated results go in ./results/ as .md files

  • All references to "ER.md" and "QA.md" files are relative to ./results/

  • Do not edit or modify any files outside ./results/ unless explicitly granted permission by the user on a case-by-case basis.

  • Set [default_topic] to “Using Telmisartan to Improve Health and Longevity”

Command: VERIFY

Trigger: "verify"

Target Files

  • AI4L.md
  • CLAUDE.md
  • README.md
  • PERSONA.md
  • SKILL.md
  • ./docs/AI Models.md
  • ./examples/README.md

Process

  • Verify all target file version numbers. Use the version stated in the alt text at the start of AI4L.md (not the badge) as the reference, and make sure every target is consistent with it, including the version numbers in the badges. SKILL.md has no badge, only a plain-text version number, so check that it matches as well (a minimal consistency check is sketched after this list).

  • Check the numbering of all items and the item count in AI4L.md

  • Verify all target files for consistency and completeness

  • If there are any inconsistencies, fix them (the target files live outside ./results/, so ask the user for permission first, per the General Rules).

  • Report what was checked and what (if anything) was fixed.
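
The version check above can also be scripted. The sketch below is illustrative only (the skill ships no code): it assumes the reference version appears in image alt text in AI4L.md in the 2026.03.19.1 style, and it only reports mismatches rather than editing anything.

import re
from pathlib import Path

# Illustrative sketch only; not part of the skill.
TARGETS = [
    "AI4L.md", "CLAUDE.md", "README.md", "PERSONA.md", "SKILL.md",
    "docs/AI Models.md", "examples/README.md",
]
VERSION_RE = re.compile(r"\b\d{4}\.\d{2}\.\d{2}\.\d+\b")  # e.g. 2026.03.19.1

def reference_version() -> str:
    # Take the first version-looking string in AI4L.md's image alt text.
    # The alt-text pattern is an assumption about how the badge is written.
    text = Path("AI4L.md").read_text(encoding="utf-8")
    for alt in re.findall(r"!\[([^\]]*)\]", text):
        match = VERSION_RE.search(alt)
        if match:
            return match.group(0)
    raise SystemExit("no version found in AI4L.md alt text")

def main() -> None:
    ref = reference_version()
    for name in TARGETS:
        found = set(VERSION_RE.findall(Path(name).read_text(encoding="utf-8")))
        if found == {ref}:
            print(f"OK        {name}")
        else:
            print(f"MISMATCH  {name}: expected {ref}, found {sorted(found) or 'none'}")

if __name__ == "__main__":
    main()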

Command: CREATE

Trigger: "create"

Topic Parsing

  • Set [remainder] to the rest of the input after the command trigger word "create"

  • If [remainder] is empty, set [topic] to the [default_topic]

  • If [remainder] looks like "Using <intervention> for/as/to <goal>", set [topic] to [remainder]

  • If [remainder] contains only an intervention and no goal, set [topic] to "Using <intervention> to Improve Health and Longevity" (the full parsing logic is sketched after this list)

  • Notify the user that an ER will be created for [topic]

  • Create an ER for [topic] that can pass a QA audit as described in "AI4L.md"

  • Save the result as an .md file in ./results/ using the filename given in the result

  • Report the filename and location when done.
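
Read together, the topic-parsing rules above amount to a small pure function. The sketch below is illustrative only; it assumes the trigger word has already been stripped from the input, and the example topics are hypothetical.

import re

# Minimal sketch of the CREATE topic-parsing rules; not part of the skill itself.
DEFAULT_TOPIC = "Using Telmisartan to Improve Health and Longevity"

def parse_topic(remainder: str) -> str:
    remainder = remainder.strip()
    if not remainder:
        return DEFAULT_TOPIC                     # empty input -> default topic
    if re.match(r"(?i)^using\s+.+\s+(for|as|to)\s+.+", remainder):
        return remainder                         # already a full "Using ... to ..." topic
    return f"Using {remainder} to Improve Health and Longevity"  # bare intervention -> standard goal

assert parse_topic("") == DEFAULT_TOPIC
assert parse_topic("Using Rapamycin to Slow Aging") == "Using Rapamycin to Slow Aging"
assert parse_topic("creatine") == "Using creatine to Improve Health and Longevity"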

Command: SUBAUDIT (with sub-agent)

Trigger: "subaudit"

Audit an ER using a sub-agent

Determine the Target

  • If no further information is given, set [target] to the last evidence review generated; otherwise, take the remainder of the input as [target]

  • If [target] = "all", audit all "ER.md" files that have not been audited yet, using the instructions in "AI4L.md" (one way to select them is sketched below)
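
For the "all" case, one way to select the un-audited reviews is sketched below. It is illustrative only; the filename convention (an "... ER.md" file with a matching "... QA.md" audit) is an assumption, since the skill does not define one.

from pathlib import Path

# Illustrative only: list every "ER.md" file in ./results/ with no matching audit.
def unaudited_ers(results: Path = Path("results")) -> list[Path]:
    ers = sorted(results.glob("*ER.md"))
    return [er for er in ers
            if not (results / er.name.replace("ER.md", "QA.md")).exists()]

if __name__ == "__main__":
    for er in unaudited_ers():
        print("needs audit:", er)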

Audit (Sub-agent)

Launch a sub-agent, with Opus as its model, to audit the ER using the prompt given below. DO NOT pass any other instructions to the sub-agent besides the prompt.

  • State your model name and version number to the user
  • Audit the [target] file using the instructions in "AI4L.md"
  • Do NOT use any sub-agents for the task. Do things step-by-step.
  • Save the result in ./results/ using the name defined in the result
  • Do not modify any files outside ./results/

Report and Offer Fix

  • Read the audit output and report the pass rate

  • If not 100%, and the audit was done by the same AI model that generated the ER, ask the user if they want to fix it

  • If yes, read the audit file, identify all failed items, and fix the ER based on the auditor's comments. DO NOT modify the QA file. Only the ER may be edited during the fix step.

Command: AUDIT (no sub-agent)

Trigger: "audit"

Audit an ER without using a sub-agent

Determine the Target

  • If no further information is given, set [target] to the last evidence review generated; otherwise, take the remainder of the input as [target]

  • If [target] = "all", audit all "ER.md" files that have not been audited yet using the instructions in "AI4L.md"

Do the actual audit

  • Audit the [target] file using the instructions in "AI4L.md"

  • Do NOT use any sub-agents for the task. Do things step-by-step.

  • Save the result in ./results/ using the name defined in the result

  • Do not modify any files outside ./results/

Report and Offer Fix

  • Read the audit output and report the pass rate (a parsing sketch follows this list)

  • If the pass rate is not 100%, and the audit was done by the same AI model that generated the ER, ask the user if they want to fix it

  • If yes, fix the ER based on the audit results. DO NOT modify the QA file. Only the ER may be edited during the fix step.
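
Reading the pass rate out of the audit file can be done with a short script. The sketch below rests on an assumption about the audit format (a percentage near the words "pass rate" in the summary table), not a documented interface; adjust the pattern to whatever AI4L.md actually produces.

import re
from pathlib import Path

# Illustrative sketch: pull the pass rate out of an audit file's summary.
def pass_rate(audit_path: str) -> float | None:
    text = Path(audit_path).read_text(encoding="utf-8")
    match = re.search(r"pass\s*rate[^%\n]*?(\d+(?:\.\d+)?)\s*%", text, re.IGNORECASE)
    return float(match.group(1)) if match else None

if __name__ == "__main__":
    print("pass rate:", pass_rate("results/QA.md"))  # hypothetical audit filename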

Command: FULL

Trigger: "FULL"

Run the complete single-pass workflow: create an ER, audit it, and fix any issues.

Process

  1. Create — follow the CREATE command process (sub-agent creates the ER)
  2. Audit — follow the SUBAUDIT command process (a fresh sub-agent audits it)
  3. Fix — read the audit results, identify all failures, and fix the ER automatically

Report

After saving the fixed ER, report the ER filename, the audit filename, the pass rate, and the time taken.

Command: ITERATE

Trigger: "iterate"

Creates an ER, then loops audit/fix cycles up to 10 times until two consecutive audits show a 100% pass rate.

Process

  1. Parse topic — same logic as the CREATE command

  2. Create ER — launch a sub-agent (same as the CREATE command)

  3. Audit loop:

Initialize: iteration = 0, consecutive_passes = 0, max_iterations = 10

Loop while iteration < max_iterations and consecutive_passes < 2:

a. Audit — launch a fresh sub-agent (same prompt as the SUBAUDIT command). Fresh context is critical — the auditor must have no knowledge of the ER creation or prior audits.

b. Extract pass rate — read the audit file and extract the pass rate from its summary table. If the table is ambiguous, write and run a small script to parse the results and calculate the rate (as in the pass-rate sketch under AUDIT).

c. Evaluate — if 100%, increment consecutive_passes; otherwise reset to 0. Report: "Iteration {n}: Pass rate = {rate}% ({consecutive_passes}/2 consecutive passes needed)"

d. Fix (if needed) — if consecutive_passes < 2, read the audit file, identify all failed items, and fix the ER. The fix step is done by the orchestrator (not a sub-agent) since it needs the context of both the ER and the audit. Increment iteration. (A control-flow sketch of this loop follows.)
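
Expressed as code, the control flow above looks roughly like the sketch below. The helper functions are stubs standing in for agent actions (launching a fresh Opus sub-agent, parsing the QA summary, applying fixes); they are placeholders, not real APIs.

# Illustrative control-flow sketch of the ITERATE loop; not part of the skill.
def run_subaudit_agent(er_path: str) -> str:
    # Placeholder: the orchestrator would launch a fresh Opus sub-agent here.
    return "results/QA.md"  # hypothetical audit output path

def pass_rate(audit_path: str) -> float:
    # Placeholder: parse the pass rate from the audit summary table.
    return 100.0

def fix_er(er_path: str, audit_path: str) -> None:
    # Placeholder: apply the auditor's comments to the ER (never the QA file).
    pass

def iterate(er_path: str, max_iterations: int = 10) -> bool:
    iteration = 0
    consecutive_passes = 0
    while iteration < max_iterations and consecutive_passes < 2:
        audit_path = run_subaudit_agent(er_path)     # fresh context each time
        rate = pass_rate(audit_path)
        consecutive_passes = consecutive_passes + 1 if rate == 100 else 0
        print(f"Iteration {iteration + 1}: Pass rate = {rate}% "
              f"({consecutive_passes}/2 consecutive passes needed)")
        if consecutive_passes < 2:
            fix_er(er_path, audit_path)              # nothing to change when rate is already 100
        iteration += 1
    return consecutive_passes >= 2

if __name__ == "__main__":
    print("passed twice:", iterate("results/ER.md"))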

Report

  • Success: "Pipeline complete. The ER passed two consecutive audits with a 100% pass rate after {n} iteration(s)."
  • Limit reached: "Pipeline stopped after 10 iterations. Best pass rate achieved: {rate}%. The latest ER and audit files are in ./results/ for manual review."

List all files generated in ./results/.

Command: COMPARE

Trigger: "COMPARE"

Process

  • If no further information is given, set [intervention] to the intervention of the latest "ER.md" in "./results/" (parse the filename to extract the intervention; a filename-parsing sketch follows this list)

  • Otherwise, take the remainder of the input as [intervention]

  • Compare all [intervention] "ER.md" files by the quality of the content. Be detailed. Also, take into account the latest [intervention] "QA.md" for each of them.

  • Present a clear recommendation of which ER is strongest and why.
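
For the default case, a filename-based lookup might look like the sketch below. The "<Intervention> ER.md" naming convention is a guess for illustration; the skill does not specify one.

from pathlib import Path

# Illustrative only: take the intervention from the newest ER filename in ./results/.
def latest_intervention(results: Path = Path("results")) -> str | None:
    ers = sorted(results.glob("*ER.md"), key=lambda p: p.stat().st_mtime)
    if not ers:
        return None
    stem = ers[-1].stem                       # e.g. "Telmisartan ER"
    return stem.removesuffix("ER").strip(" -_") or None

if __name__ == "__main__":
    print("intervention:", latest_intervention())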
