Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Agent Analytics Autoresearch

v1.0.6

Run an autoresearch-style growth loop for landing pages, onboarding, pricing, and experiment candidates. Collect or read analytics snapshots, preserve produc...

by Danny Shmueli (@dannyshmueli)
Security Scan
VirusTotal
Suspicious
OpenClaw
Benign
high confidence
Purpose & Capability
The name and description (autoresearch, analytics-driven variant generation) align with the included scripts and SKILL.md. The skill requires npx (used to run @agent-analytics/cli), which is appropriate for collecting analytics snapshots; no unrelated credentials, binaries, or config paths are requested.
Instruction Scope
Runtime instructions are scoped to reading/writing local run files (brief.md, results.tsv, final_variants.md, data snapshots) and generating variants. The SKILL.md explicitly forbids editing production code without human approval. It instructs use of analytics data sources (Agent Analytics CLI, CSV, SQL, screenshots) only for variant generation and judging—no hidden file reads or unspecified exfiltration steps are present.
Install Mechanism
There is no formal install spec (instruction-only), which keeps risk low. The included scripts call npx --yes to fetch and run @agent-analytics/cli at runtime; this will download and execute code from npm when collecting snapshots. That is expected for this purpose but carries the usual moderate risk of running third-party packages fetched at runtime—review the CLI package if you need stronger assurance.
Credentials
The skill declares no required environment variables or credentials. The only runtime dependency is the npx binary. Example shell snippets use local variables (PROJECT_SLUG, PRIMARY_EVENT, etc.) but these are inputs for the analytics commands, not hidden credential requests.
Persistence & Privilege
The always flag is false and the skill does not request system-wide changes. It reads and writes files only within the run directory it creates or uses, and it does not modify other skills or global agent settings. The SKILL.md enforces a review-before-implement policy for any outer-loop actions.
Assessment
This skill appears to do what it claims: create a local run folder, collect analytics snapshots (by running @agent-analytics/cli via npx), and produce reviewable experiment variants. Before installing or running: (1) be aware that npx --yes will download and execute a package from npm at runtime—review @agent-analytics/cli (source or package metadata) if you require assurance; (2) the scripts write files under a run directory (brief.md, results.tsv, data/…), so run them in a sandbox or repository you control; (3) be careful what analytics data you include—avoid copying PII into snapshots; (4) the skill will not change production systems unless you explicitly approve the outer-loop implementation, but always verify any follow-up commands before consenting to automated implementation. If you want more assurance, ask the author for the CLI source link or run the snapshot commands manually first.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

Platform: any
Binary: npx
Tags: ab-testing · analytics · autoresearch · experiments · growth · latest
69 downloads
0 stars
6 versions
Updated 11h ago
v1.0.6
MIT-0

Agent Analytics Autoresearch

Use this skill when the user wants a data-informed growth loop for landing pages, onboarding, pricing, CTAs, signup, checkout, activation, or other experiment candidates.

This skill is based on:

Use the regular agent-analytics skill for general setup, tracking installation, ad hoc reporting, and normal experiment operations. Use this skill for structured variant generation and judging from a project brief plus analytics data.

Core Rule

Do not edit production copy, product code, or live experiment setup while running the loop unless the user explicitly asks. Produce reviewable artifacts first.

Default mode is review-only: generate variants, log rounds, and write final_variants.md.

After explicit human approval, continue into the outer experiment loop when requested: implement the approved variant or variants, create the experiment, run it, measure it with Agent Analytics or another analytics source, save the results as the next snapshot, and start the next autoresearch run from evidence.

Inputs

The loop needs:

  • target surface
  • current control copy
  • product truth
  • audience
  • primary metric
  • proxy metric
  • guardrails
  • analytics snapshot or data brief
  • drift constraints

Agent Analytics is preferred, but not required. Accept any evidence source: Agent Analytics CLI/API, PostHog, GA4, Mixpanel, SQL, CSV exports, product logs, dashboard screenshots summarized by the user, or hand-written notes.

When Agent Analytics is the evidence source, use project context as the self-improving product memory for the loop. Read context get <project> before collecting a snapshot, fold project_context into the product truth and metric definitions, and keep activation/event meaning separate per project or domain. After a human correction, scanner result, completed experiment, or repeated measured finding, update context only with durable product truth. Save activation definitions, event meanings, stable goals, and confirmed interpretations; skip weekly numbers, temporary spikes, pasted reports, PII, and unconfirmed guesses.

Quick Start

If the user already has a repo or run folder, work there. Otherwise initialize a run:

bash <skill_dir>/scripts/init_autoresearch_run.sh homepage-signup

Then fill brief.md, collect or paste data, and run the loop:

Read brief.md and run the autoresearch growth loop. Use the latest data snapshot. Run 5 rounds. Append one row per round to results.tsv and write final_variants.md with two distinct variants for review.

When using Agent Analytics, collect a snapshot:

bash <skill_dir>/scripts/collect_agent_analytics_snapshot.sh my-site signup cta_click

If <skill_dir> is not obvious in the runtime, read the script from this skill's scripts/ folder and run an equivalent local command.
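If the bundled script is unavailable, an equivalent local init might look like the sketch below. The file names (brief.md, results.tsv, data/) come from this SKILL.md; the run-folder naming and the TSV header columns are assumptions — the authoritative header lives in references/results-header.txt.

```shell
# Hypothetical stand-in for init_autoresearch_run.sh (names are assumptions).
RUN="autoresearch/homepage-signup-$(date +%Y-%m-%d)"
mkdir -p "$RUN/data"          # snapshots land here
: > "$RUN/brief.md"           # to be filled with the project brief
printf 'round\tcandidate\tscore\tnotes\n' > "$RUN/results.tsv"  # placeholder header
echo "Run folder ready: $RUN"
```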

References

Load these files only when needed:

  • references/program.md - exact loop instructions.
  • references/brief-template.md - project brief template.
  • references/final-variants-template.md - final output template.
  • references/results-header.txt - exact results.tsv header.

Loop Shape

Inner Autoresearch Loop

  1. Define the surface, control, audience, product truth, metric, proxy, and guardrails.
  2. Collect or read a dated analytics snapshot.
  3. Summarize useful signals and data limitations.
  4. Generate candidate A.
  5. Critique A harshly for genericness, drift, unsupported claims, weak conversion intent, and competitor-sayable language.
  6. Write candidate B from the critique.
  7. Synthesize AB from the strongest parts of A and B.
  8. Blind-rank A, B, and AB with Borda scoring.
  9. Append one TSV-safe row to results.tsv.
  10. Repeat several rounds.
  11. Write final_variants.md with two distinct variants and the recommended experiment shape.
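Step 9's "TSV-safe row" can be sketched as below. The column names here are illustrative assumptions; use the exact header from references/results-header.txt in a real run.

```shell
# Append one TSV-safe row for a finished round (column layout is assumed).
round=3; winner="AB"; borda_a=1; borda_b=0; borda_ab=2
note="AB keeps A's specificity and drops B's unsupported claim"
# Flatten tabs/newlines in free text so the row stays a single TSV line.
safe_note=$(printf '%s' "$note" | tr '\t\n' '  ')
printf '%s\t%s\t%s\t%s\t%s\t%s\n' \
  "$round" "$winner" "$borda_a" "$borda_b" "$borda_ab" "$safe_note" >> results.tsv
```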

Outer Experiment Loop

Only run this phase when the user explicitly approves implementation or experiment setup.

  1. Implement the approved variant or variants in the target product surface.
  2. Create the experiment with a control and the approved candidate variants.
  3. Verify tracking for the primary metric, proxy metric, and guardrails.
  4. Let the experiment collect real behavior for the requested window.
  5. Pull experiment results, screenshots or changed-copy notes, funnel movement, guardrails, and data limitations into a new snapshot.
  6. Start the next inner autoresearch loop from that measured evidence.

The outer loop prevents the LLM panel from becoming the final judge. LLMs generate and criticize, humans approve risk, and users decide what worked.

Agent Analytics Snapshot

Use the official CLI when collecting live Agent Analytics data:

npx --yes @agent-analytics/cli@0.5.20 insights "$PROJECT_SLUG" --period 7d
npx --yes @agent-analytics/cli@0.5.20 pages "$PROJECT_SLUG" --since 7d
npx --yes @agent-analytics/cli@0.5.20 funnel "$PROJECT_SLUG" --steps "page_view,$PROXY_EVENT,$PRIMARY_EVENT" --since 7d
npx --yes @agent-analytics/cli@0.5.20 events "$PROJECT_SLUG" --event "$PROXY_EVENT" --days 7 --limit 50
npx --yes @agent-analytics/cli@0.5.20 events "$PROJECT_SLUG" --event "$PRIMARY_EVENT" --days 7 --limit 50
npx --yes @agent-analytics/cli@0.5.20 experiments list "$PROJECT_SLUG"

If login is needed, prefer the regular agent-analytics skill's browser approval or detached login guidance.

Before interpreting the snapshot, also read the compact project memory:

npx --yes @agent-analytics/cli@0.5.20 context get "$PROJECT_SLUG"

If the autoresearch run reveals durable product truth that should guide future analytics, use the regular agent-analytics skill's project context workflow to read the existing context, merge the compact update, and write it back. Do not store raw round notes or time-bound metric values as project context.

Scoring

Use Borda scoring:

  • first place: 2 points
  • second place: 1 point
  • third place: 0 points
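As a concrete check of the tally, a toy three-judge round (rankings invented for illustration) can be scored with awk:

```shell
# Each input line is one judge's blind ranking, best to worst.
# Borda points: 1st place = 2, 2nd = 1, 3rd = 0.
printf '%s\n' "A B AB" "AB A B" "AB B A" |
awk '{ pts[$1] += 2; pts[$2] += 1; pts[$3] += 0 }
     END { for (c in pts) print c, pts[c] }' | sort
# Prints: A 3, AB 4, B 2 — AB wins this round.
```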

Judge by:

  • specificity to the product
  • clarity for the target audience
  • likely primary-event intent
  • preservation of product truth
  • low competitor-sayable language
  • fit with analytics data
  • respect for guardrails

Output

final_variants.md must include:

  • candidate_1
  • candidate_2
  • exact changed copy
  • rationale
  • risks
  • recommended experiment name
  • experiment shape
  • data limitations
  • clear note that the experiment has not been wired yet
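The required sections above can be stubbed out with a heredoc; the section names mirror the list, but the exact layout is an assumption — prefer references/final-variants-template.md when available.

```shell
# Skeleton writer for final_variants.md (layout is illustrative).
cat > final_variants.md <<'EOF'
# Final Variants

> Review only: this experiment has NOT been wired yet.

## candidate_1
## candidate_2
## Exact changed copy
## Rationale
## Risks
## Recommended experiment name
## Experiment shape
## Data limitations
EOF
```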

Only create or wire an experiment after explicit human approval.
