Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Omni Channel Agent

v1.0.0

Omni-channel product-selection agent — aligns data from the social media, SEO, and ad-placement channels to help operations teams decide which features to launch next. Trigger words: product selection, social media trends, SEO research, competitor ads, Facebook Ads, TikTok trends.


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for lygjoey/omni-channel-agent.

Prompt preview (Install & Setup):
Install the skill "Omni Channel Agent" (lygjoey/omni-channel-agent) from ClawHub.
Skill page: https://clawhub.ai/lygjoey/omni-channel-agent
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install omni-channel-agent

ClawHub CLI


npx clawhub@latest install omni-channel-agent
Security Scan
VirusTotal: Suspicious
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The code modules (Apify client, sources for TikTok/Instagram/YouTube/Reddit/Semrush/Facebook Ads, Slack formatter) line up with the 'omni-channel' scraping/aggregation purpose. However, the registry metadata claims no required environment variables or primary credential, while SKILL.md and the code clearly require APIFY_TOKEN and SEMRUSH_API_KEY (and optionally a Notion token). That metadata mismatch is an inconsistency to be aware of.
Instruction Scope
SKILL.md tells the agent/user to run bundled Python scripts that call Apify and Semrush (expected), but it also contains operational guidance to swap SIMs, use VPNs/proxies, and create region-specific accounts to 'overcome geographic fences' — instructions that encourage evasion of platform geolocation controls and may violate third-party terms. The file also triggered a prompt-injection detector hit (unicode-control-chars), suggesting hidden control characters may be present in SKILL.md to manipulate an LLM/agent. The runtime instructions do not ask to read unrelated local secrets or files, but the evasion guidance and the hidden-character signal are concerning.
Install Mechanism
No install spec is provided (lower install risk). All code is packaged in the skill and uses only standard libraries (apify_client uses urllib). There are no remote downloads or archive extraction steps in an install script.
Credentials
The environment variables required by the runtime (APIFY_TOKEN, SEMRUSH_API_KEY, optional Notion token) are sensitive but conceptually proportional to the stated scraping/SEO tasks. The problem is that the registry metadata does not declare them, so automated permission checks, or users looking only at registry info, could miss that the skill will require and expose those credentials at runtime.
Persistence & Privilege
The skill is not marked always:true and does not request special platform privileges. It appears to write output files only under its own output/ directory. There is no evidence that it modifies other skills or system-wide configuration.
Scan Findings in Context
[unicode-control-chars] Unexpected: hidden Unicode control characters in SKILL.md are not needed for a scraping/reporting skill and may be an attempt at prompt/agent manipulation; they should be inspected and removed or explained by the author before the skill is trusted.
What to consider before installing
This skill appears to implement the scraping and reporting it advertises, but before installing or running it:

  1. Expect to provide at least APIFY_TOKEN and SEMRUSH_API_KEY (and optionally a Notion token). Do not supply production or highly privileged keys; use scoped or throwaway credentials where possible.
  2. The registry metadata fails to list the required environment variables — ask the publisher to correct this inconsistency.
  3. SKILL.md includes guidance (remove SIM, use proxies, create region accounts) that encourages bypassing platform geolocation protections; consider the legal and ToS risks, and avoid following those steps if they would violate policies.
  4. Inspect SKILL.md for hidden control characters (the scanner found unicode-control-chars) and review the code (apify_client.py, sources/*) yourself or in a sandbox before running.
  5. Run first in an isolated environment with limited API tokens and review the output files; if you rely on this for business decisions, have a human validate the trends (the skill itself states that human judgment is required).

Like a lobster shell, security has layers — review code before you run it.

latest: vk97c18bkxkfje35qftexpnazjn83tfex
104 downloads
0 stars
1 version
Updated 4w ago
v1.0.0
MIT-0

Omni-Channel Product Selection Agent

Automatically scrapes competitor and trend data from multiple sources and outputs Slack-formatted reports.

Three Channels + KOL Radar

🎯 KOL Trend Radar (core capability — from the trend-methodology document)

Based on the methodology in "Identifying Global Social Media Trends for AI Agent Development", the radar monitors 11 tiered KOL accounts, scrapes their latest content, and automatically scores trend fit.

Core principle of the methodology: human intuition > data-analysis tools

  • Data lags, cannot understand "vibe", and is full of junk
  • Niche communities (anime fans, underground dancers, SFX makeup artists) originate the best trends
  • Final judgment rests on three factors: visual distinctiveness + AI reproducibility + vanity trigger

Three-Tier Account System

Tier | Accounts | Purpose
🔴 T1 Trendsetters | @cyber0318, @hoaa.hanassii, @thybui.__ | Create trends; monitor daily
🟡 T2 Fast adopters | @angelinazhq, @lena_foxxx, @caroline_xdc, @upminaa.cos | Trend-confirmation signal
🔵 T3 Specialist radar | @cellow111, @emmawhatstwo, @voulezjj, @sawamura_kirari | Motion capture / competitors / editing

Automation Capabilities

  • Batch KOL profile scraping — one Apify call fetches the latest content for all accounts
  • Audio trend aggregation — multiple KOLs using the same audio = strong trend signal
  • AI fit scoring — automatic scoring (virality / engagement rate / AI keywords / audio / share rate)
  • Three trend-evaluation questions — visually distinctive? AI-reproducible? vanity-triggering?
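The audio-aggregation signal (the same audio reused across several KOLs) can be sketched as a small grouping function. The field names `kol` and `audio_id` here are illustrative, not the skill's actual schema:

```python
def audio_trend_signals(posts, min_kols=2):
    """Group posts by audio ID and flag audio reused by distinct KOL accounts.

    `posts` is a list of dicts with hypothetical keys 'kol' and 'audio_id';
    returns {audio_id: distinct_kol_count} for audio shared by >= min_kols KOLs.
    """
    kols_per_audio = {}
    for post in posts:
        kols_per_audio.setdefault(post["audio_id"], set()).add(post["kol"])
    return {a: len(k) for a, k in kols_per_audio.items() if len(k) >= min_kols}
```

Counting distinct accounts (a set per audio ID) rather than raw posts avoids a single KOL reposting the same sound inflating the signal.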

Five-Stage Methodology

  1. Environment setup — overcome geo-fencing (SIM/VPN/device language/time zone)
  2. Trend discovery — watchlists + audio-driven tracking + reverse-engineering follow lists
  3. Algorithm training — train the platform's recommendation algorithm (3-5 days of structured engagement)
  4. Trend evaluation — visual distinctiveness + AI reproducibility + vanity trigger
  5. Data management — monthly refresh of the core roster + trend records (video link + audio link + visual description)

For the full methodology, see TREND_METHODOLOGY.md

📱 Social Media Channel (6 data sources)

Data source | Tool | Cost
KOL radar (11 accounts) | Apify clockworks/tiktok-scraper, profile mode | Apify pay-per-use
TikTok | Apify clockworks/tiktok-scraper | Apify pay-per-use
Instagram | Apify apify/instagram-scraper | Apify pay-per-use
YouTube | Apify streamers/youtube-scraper | Apify pay-per-use
Reddit | Apify trudax/reddit-scraper-lite | Apify pay-per-use
Google Trends | pytrends | Free
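The security scan notes that the bundled apify_client.py uses only urllib. A minimal call to Apify's documented run-sync-get-dataset-items REST endpoint might look like the sketch below; it is not the skill's actual client, and the example actor input fields are assumptions:

```python
import json
import os
import urllib.request

def actor_url(actor_id, token):
    # Apify's REST path uses '~' instead of '/' in the actor slug.
    return (f"https://api.apify.com/v2/acts/{actor_id.replace('/', '~')}"
            f"/run-sync-get-dataset-items?token={token}")

def run_actor(actor_id, run_input, token=None):
    """POST the actor input, wait for the run, and return its dataset items."""
    token = token or os.environ["APIFY_TOKEN"]
    req = urllib.request.Request(
        actor_url(actor_id, token),
        data=json.dumps(run_input).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=300) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example (needs a valid APIFY_TOKEN and consumes Apify credits):
# items = run_actor("clockworks/tiktok-scraper",
#                   {"profiles": ["cyber0318"], "resultsPerPage": 5})
```

Because the token travels in the URL query string, treat logs and shell history as credential-bearing when using this pattern.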

🔍 SEO Channel

Data source | Notes
Semrush API | Competitor keywords (11 competitor domains)
Sitemap | art.myshell.ai page dedup
Notion Bot DB | Dedup against existing bots

📢 Ads Channel

Data source | Notes
Facebook Ads Library | Via Apify apify/facebook-ads-scraper, 4 query scenarios

Environment Requirements

APIFY_TOKEN=...          # Apify API token (required)
SEMRUSH_API_KEY=...      # Semrush API key (required for the SEO channel)
NOTION_IMAGE_BOT_TOKEN=... # Notion token (for dedup; optional)
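A small preflight check can surface missing credentials before any Apify credits are spent. The variable names come from the list above; the check itself is a generic sketch, not part of the skill:

```python
import os

def check_env(channel="all"):
    """Fail fast with a clear message if required credentials are missing."""
    required = {"APIFY_TOKEN"}
    if channel in ("all", "seo"):
        required.add("SEMRUSH_API_KEY")  # only the SEO channel needs Semrush
    missing = sorted(v for v in required if not os.environ.get(v))
    if missing:
        raise SystemExit("Missing required env vars: " + ", ".join(missing))
    if not os.environ.get("NOTION_IMAGE_BOT_TOKEN"):
        print("Note: NOTION_IMAGE_BOT_TOKEN unset; Notion dedup will be skipped.")
```

Per the scan's advice, prefer scoped or throwaway tokens here rather than production keys.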

Usage

SKILL_DIR=~/.openclaw/workspace/skills/omni-channel-agent

# Full pipeline (KOL radar + social + SEO + ads)
python3 $SKILL_DIR/run_pipeline.py --query "ai filter"

# Single channel
python3 $SKILL_DIR/run_pipeline.py --channel social --query "ai filter"
python3 $SKILL_DIR/run_pipeline.py --channel seo
python3 $SKILL_DIR/run_pipeline.py --channel ads --query "ai photo generator"

# Quick test (reduced data volume)
python3 $SKILL_DIR/run_pipeline.py --test

# Parameters
--query       search keyword (default "ai filter")
--channel     channel: all/social/seo/ads (default all)
--region      region: US/EU/ASIA (default US)
--max-results max results per source (default 15)

Output

  • output/full_report_YYYYMMDD_HHMM.txt — full Slack-formatted report
  • output/social_*.json — raw social media data (incl. KOL radar + audio trends + AI fit scores)
  • output/seo_*.json — SEO keyword data
  • output/ads_*.json — ad data
  • output/all_data_*.json — merged data from all sources

File Structure

omni-channel-agent/
├── SKILL.md                # this file
├── TREND_METHODOLOGY.md    # social media trend methodology (full Lark doc)
├── apify_client.py         # Apify REST API client
├── run_pipeline.py         # main pipeline (KOL radar + 3-channel orchestration)
├── run_all.py              # run the social channel standalone
├── run_multi_query.py      # multi-scenario queries
├── sources/
│   ├── kol_radar.py        # 🆕 KOL radar (11-account monitoring + audio aggregation + AI scoring)
│   ├── tiktok.py           # TikTok via Apify
│   ├── instagram.py        # Instagram via Apify
│   ├── youtube.py          # YouTube via Apify
│   ├── reddit.py           # Reddit via Apify
│   ├── google_trends.py    # Google Trends (pytrends)
│   ├── facebook_ads.py     # Facebook Ads via Apify
│   ├── seo_pipeline.py     # SEO: Semrush + Sitemap + Notion dedup
│   ├── ads_pipeline.py     # Ads: 4 scenarios
│   └── twitter.py          # Twitter (token fix pending)
├── formatters/
│   └── slack_formatter.py
└── output/                 # data output

Notes

  • The KOL radar scrapes 11 accounts per batch and consumes significant Apify credits; in non-test mode, 1-2 runs per day is recommended
  • The Instagram and Reddit Apify actors are pay-per-use
  • Google Trends is rate-limited; high-frequency calls will return 429
  • Sitemap fetches need a User-Agent header (403 protection)
  • The Twitter API requires an official Bearer token; the current OpenTwitter JWT is incompatible
  • Final trend judgment requires human intuition — AI scores are for initial screening only; visual impact, vibe, and cultural nuance can only be caught by the human eye
