Ads Campaign Review

v1.0.0

Run retrospective analysis for campaigns on Meta (Facebook/Instagram), Google Ads, TikTok Ads, YouTube Ads, Amazon Ads, and DSP/programmatic channels.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for danyangliu-sandwichlab/campaign-retrospective-analyst.

Prompt preview: Install & Setup
Install the skill "Ads Campaign Review" (danyangliu-sandwichlab/campaign-retrospective-analyst) from ClawHub.
Skill page: https://clawhub.ai/danyangliu-sandwichlab/campaign-retrospective-analyst
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install danyangliu-sandwichlab/campaign-retrospective-analyst

ClawHub CLI


npx clawhub@latest install campaign-retrospective-analyst
Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
The skill's name and description (retrospective analysis for multiple ad platforms) align with its instructions (query plan, trend deltas, recommendations). One minor note: SKILL.md expects access to campaign data (data_source_scope) but does not document how that data is obtained or which platform credentials are needed. User-supplied exports are plausible, but confirm this before connecting live platform APIs.
Instruction Scope
Instructions stay on-task: disambiguate metrics, build query slices, summarize findings, propose actions, and include guardrails. There are no steps that instruct reading unrelated local files, accessing system credentials, or sending data to unknown endpoints.
Install Mechanism
No install specification and no code files are present. Because this is instruction-only, nothing is written to disk or fetched during install.
Credentials
The skill declares no required environment variables, credentials, or config paths. This is proportionate to an instruction-only analysis skill that expects the user to supply data or connector-config externally. If you plan to adapt it to fetch data from ad platforms, expect to add the appropriate, least-privilege credentials separately.
Persistence & Privilege
The "always" flag is false and the skill is user-invocable. It does not request persistent presence or modify other skills or system settings. Autonomous invocation is allowed by platform default, but no other elevated privileges accompany it.
Assessment
This skill appears coherent and low-risk as-is: it contains only instructions and asks for user-provided data. Before installing or using it:

  • Confirm how you will supply campaign data; prefer exported reports or a read-only connector over high-privilege API keys.
  • If you integrate platform connectors later, apply least-privilege credentials and review where the agent will send data.
  • Test with non-sensitive or anonymized data to validate outputs and guardrails.
  • If you enable autonomous invocation, monitor automated use and restrict permissions for any integrations (billing/policy scopes) to minimize blast radius.

Like a lobster shell, security has layers: review code before you run it.

latest: vk9798kgrzygx2et3d24npma651826ktg
315 downloads
1 star
1 version
Updated 1mo ago
v1.0.0
MIT-0

Ads Campaign Review

Purpose

Core mission:

  • root-cause analysis, lesson extraction, next-cycle design

This skill is specialized for advertising workflows and should output actionable plans rather than generic advice.

When To Trigger

Use this skill when the user asks for:

  • ad execution guidance tied to business outcomes
  • growth decisions involving revenue, ROAS, CPA, or budget efficiency
  • platform-level actions for: Meta (Facebook/Instagram), Google Ads, TikTok Ads, YouTube Ads, Amazon Ads, DSP/programmatic
  • this specific capability: root-cause analysis, lesson extraction, next-cycle design

High-signal keywords:

  • ads, advertising, campaign, growth, revenue, profit
  • roas, cpa, roi, budget, bidding, traffic, conversion, funnel
  • meta, googleads, tiktokads, youtubeads, amazonads, shopifyads, dsp

Input Contract

Required:

  • question_or_report_goal
  • metric_scope: KPI, dimensions, and date range
  • data_source_scope

Optional:

  • attribution_window
  • benchmark_reference
  • dashboard_filters
  • confidence_threshold
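For readers wiring this contract into code, the fields above could be modeled as a typed structure. A minimal sketch: the field names come from the contract itself, while the concrete types (and the MetricScope breakdown) are assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MetricScope:
    kpi: str               # e.g. "roas"
    dimensions: List[str]  # e.g. ["platform", "campaign"]
    date_range: str        # e.g. "last_30d"

@dataclass
class RetrospectiveInput:
    # Required fields from the input contract
    question_or_report_goal: str
    metric_scope: MetricScope
    data_source_scope: str
    # Optional fields default to None
    attribution_window: Optional[str] = None
    benchmark_reference: Optional[str] = None
    dashboard_filters: Optional[dict] = None
    confidence_threshold: Optional[float] = None
```

Constructing an instance with only the required fields then makes the "ask for only the minimum required fields" rule straightforward to enforce.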

Output Contract

  1. Metric Definition Clarification
  2. Query Plan
  3. Result Summary
  4. Interpretation and Caveats
  5. Decision Recommendation

Workflow

  1. Disambiguate metric definitions and time window.
  2. Build query slices by platform, funnel, and audience.
  3. Compute trend deltas and variance drivers.
  4. Summarize findings with confidence level.
  5. Propose concrete next actions.
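Step 3 (trend deltas) can be sketched as a simple period-over-period comparison. The row format, the `trend_delta` helper, and the toy numbers below are illustrative assumptions, not part of the skill.

```python
def trend_delta(current: float, previous: float) -> float:
    """Period-over-period percentage change, guarded against a zero baseline."""
    if previous == 0:
        return float("inf") if current else 0.0
    return (current - previous) / previous * 100

# Toy slices: ROAS by platform for two consecutive 30-day windows
previous = {"Meta": 3.1, "Google Ads": 2.4}
current = {"Meta": 3.5, "Google Ads": 2.0}

deltas = {p: round(trend_delta(current[p], previous[p]), 1) for p in current}
# Meta improved while Google Ads declined; these deltas feed the variance-driver step
```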

Decision Rules

  • If metric definitions conflict, lock one canonical definition before analysis.
  • If sample size is small, mark the result as directional, not conclusive.
  • If attribution changes materially alter result, show both views.
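The small-sample rule above can be made concrete with a minimum-conversions cutoff. The threshold of 50 conversions is an illustrative assumption; the skill does not prescribe a number.

```python
def label_confidence(conversions: int, min_sample: int = 50) -> str:
    """Tag a result as directional when the sample is too small to be conclusive."""
    return "conclusive" if conversions >= min_sample else "directional"

label_confidence(12)   # a 12-conversion slice is directional, not conclusive
label_confidence(300)  # a 300-conversion slice clears the assumed threshold
```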

Platform Notes

Primary scope:

  • Meta (Facebook/Instagram), Google Ads, TikTok Ads, YouTube Ads, Amazon Ads, DSP/programmatic

Platform behavior guidance:

  • Keep recommendations channel-aware; do not collapse all channels into one generic plan.
  • For Meta and TikTok Ads, prioritize creative testing cadence.
  • For Google Ads and Amazon Ads, prioritize demand-capture and query/listing intent.
  • For DSP/programmatic, prioritize audience control and frequency governance.

Constraints And Guardrails

  • Never fabricate metrics or policy outcomes.
  • Separate observed facts from assumptions.
  • Use measurable language for each proposed action.
  • Include at least one rollback or stop-loss condition when spend risk exists.

Failure Handling And Escalation

  • If critical inputs are missing, ask for only the minimum required fields.
  • If platform constraints conflict, show trade-offs and a safe default.
  • If confidence is low, mark it explicitly and provide a validation checklist.
  • If high-risk issues appear (policy, billing, tracking breakage), escalate with a structured handoff payload.
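The skill does not specify a shape for the "structured handoff payload". One plausible sketch, in which every field name and the example values are assumptions:

```python
import json

# Hypothetical escalation payload for a high-risk tracking issue
handoff = {
    "severity": "high",
    "category": "tracking_breakage",  # e.g. policy | billing | tracking_breakage
    "summary": "Conversion pixel stopped firing after a site redeploy",
    "affected_scope": {"platform": "Meta", "campaigns": ["summer_sale"]},
    "observed_facts": ["Conversions dropped to 0 while clicks held steady"],
    "assumptions": ["Pixel tag removed during the redeploy"],
    "recommended_action": "Pause spend until tracking is verified",
    "rollback_condition": "Resume only after a test conversion is recorded",
}
print(json.dumps(handoff, indent=2))
```

Keeping observed_facts and assumptions as separate keys mirrors the guardrail that facts and assumptions must not be mixed.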

Code Examples

Query Spec Example

metric: roas
dimensions: [platform, campaign]
date_range: last_30d

Result Schema

{
  "platform": "Meta",
  "spend": 12000,
  "revenue": 42000,
  "roas": 3.5
}
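Against this schema, roas should equal revenue divided by spend. A minimal validation sketch, using only the field names from the schema above (the tolerance value is an assumption):

```python
def validate_result(row: dict, tolerance: float = 1e-6) -> bool:
    """Check the reported ROAS against revenue/spend from the same row."""
    if row["spend"] == 0:
        return row["roas"] == 0
    return abs(row["roas"] - row["revenue"] / row["spend"]) <= tolerance

row = {"platform": "Meta", "spend": 12000, "revenue": 42000, "roas": 3.5}
# 42000 / 12000 == 3.5, so this row is internally consistent
```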

Examples

Example 1: Daily report automation

Input:

  • Need 9AM daily summary for key campaigns
  • KPI: spend, cpa, roas

Output focus:

  • report schema
  • anomaly highlights
  • top next actions

Example 2: Attribution window comparison

Input:

  • 1d click vs 7d click disagreement
  • Decision needed for budget shift

Output focus:

  • side-by-side metric table
  • interpretation caveats
  • decision recommendation
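The side-by-side view for this example could look like the following minimal sketch; the spend and conversion numbers, and the window labels, are illustrative assumptions.

```python
# CPA under two attribution windows for the same campaign (illustrative numbers)
windows = {
    "1d_click": {"spend": 5000, "conversions": 100},
    "7d_click": {"spend": 5000, "conversions": 160},
}

for window, m in windows.items():
    cpa = m["spend"] / m["conversions"]
    print(f"{window}: CPA = {cpa:.2f}")
# The 7d window credits more conversions and so reports a lower CPA;
# per the decision rules, show both views before recommending a budget shift.
```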

Example 3: Traffic structure diagnosis

Input:

  • Revenue flat but traffic rising
  • Suspected quality decline

Output focus:

  • source mix decomposition
  • quality signal changes
  • corrective action plan

Quality Checklist

  • Required sections are complete and non-empty
  • Trigger keywords include at least 3 registry terms
  • Input and output contracts are operationally testable
  • Workflow and decision rules are capability-specific
  • Platform references are explicit and concrete
  • At least 3 practical examples are included
