Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Content Ops

v0.1.1

Social media content operations automation system with SQLite database. Manage content crawling, curation, publishing, and analytics across platforms (Xiaoho...

License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
high confidence
Purpose & Capability
The repository contents (crawl/publish scripts, DB schema, image-generation docs) are consistent with a 'content ops' system. However, the registry metadata claims no required env vars or install steps, while the documentation and code clearly require credentials and external services (OpenAI API key, Xiaohongshu cookies, Reddit API credentials, MCP binaries). That mismatch between declared and actual requirements is unexpected and reduces trust.
Instruction Scope
SKILL.md and other docs instruct the operator/agent to: download and run a third-party MCP binary, store and read cookie/secret files, write secrets.json, run database migrations, run background services (screen), create cron jobs, and perform browser automation/scraping that relies on reverse-engineered platform code. The instructions read and write many local paths and secrets outside a minimal 'skill' surface — broader than the metadata suggests, and it gives the skill access to credentials and persistent system state.
Install Mechanism
There is no formal install spec in the registry metadata despite 72+ code files and explicit install steps in the docs. The docs instruct downloading a GitHub release tarball, extracting it into ~/.openclaw/workspace/bin, and starting it in screen. The download host is GitHub releases (reasonable), but the absence of a declared install step plus many dependencies (playwright, native binaries) makes the install footprint non-trivial and under-documented. The package.json lists extra MCP packages and playwright, which may pull in heavy native components.
Credentials
Registry metadata declares no required env vars or primary credential, but the docs and code reference several sensitive credentials: OPENAI_API_KEY (image generation), XIAOHONGSHU_COOKIE (scraping), REDDIT_CLIENT_ID/REDDIT_CLIENT_SECRET (API publishing), Discord webhook URLs, and a written secrets.json. Requiring and storing these secrets is reasonable for the feature set, but their omission from the declared requirements is inconsistent and a red flag for accidental credential exposure or misconfiguration.
Persistence & Privilege
The documentation instructs starting persistent background services (xiaohongshu-mcp in screen), saving cookie/session files and adding system cron jobs to run periodic tasks. While persistence can be legitimate for automation tools, this skill explicitly tells operators to create persistent system components and store credentials on disk — increasing blast radius if the code is malicious or buggy. The skill does not set always:true, but it does ask users to grant long-term system presence manually.
Scan Findings in Context
[base64-block] unexpected: Pre-scan found a 'base64-block' pattern in SKILL.md content. The provided SKILL.md excerpt does not obviously show a base64 blob, so this could indicate hidden/obfuscated data or an attempt to inject payloads. Base64 blocks are not expected for a content-ops user guide and should be reviewed manually.
What to consider before installing
This repo looks like a working content-ops system, but several red flags mean you should not install it blindly. Before use:

  1. Review all code (especially scripts that run shell commands, download binaries, perform encryption/decryption, or write secrets) and search for hidden or encoded payloads (base64, eval, exec).
  2. Don't reuse high-privilege credentials — create dedicated, limited test accounts for Reddit/Xiaohongshu and a separate OpenAI key with tight usage limits.
  3. If you must run it, do so in an isolated environment (container or VM) and avoid putting API keys into world-readable files; prefer ephemeral env vars.
  4. Verify the integrity of the GitHub release binary (check the release author and checksums) before running it.
  5. Audit any code that claims to 'reverse' platform crypto or bypass anti-bot protections — using such code may violate platform terms and increases risk.
  6. If you're uncomfortable auditing, don't provide cookies/API keys and consider not installing; ask the author for a minimal, declarative install manifest and a list of the exact env vars the skill will use.
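Point 1 above can be partially automated before any manual read. A minimal sketch (the pattern list is illustrative, not exhaustive, and a clean scan does not mean the code is safe):

```shell
# Flag long base64-like runs and dynamic-execution primitives in a skill
# directory before installing. Review every hit manually.
scan_skill() {
  dir="$1"
  # 40+ consecutive base64-alphabet characters rarely belong in a user guide
  grep -rInE '[A-Za-z0-9+/]{40,}={0,2}' "$dir"
  # eval/exec, process spawning, and base64 decoding deserve a close read
  grep -rInE 'eval\(|exec\(|child_process|base64 (-d|--decode)' "$dir"
  return 0
}
```

Run it as `scan_skill ~/.openclaw/workspace/skills/content-ops` and read each flagged line in context.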

Like a lobster shell, security has layers — review code before you run it.



SKILL.md

Content Ops System

A social-media content operations automation system. Data is stored in SQLite via Drizzle ORM, and it supports content crawling, curation, publishing, and analytics for Xiaohongshu, Reddit, Pinterest, Discord, and other platforms.


📋 Table of Contents

  1. Initial Deployment
  2. Test Tasks
  3. Production Tasks
  4. Workflow Details
  5. Reference Documents

1. Initial Deployment

1.1 Base Environment

Node.js dependencies

cd /home/admin/.openclaw/workspace/skills/content-ops

# Install dependencies
npm install

# Generate and run database migrations
npx drizzle-kit generate
npx drizzle-kit migrate

Python dependencies (optional, for extra features)

# Required only if you use the xiaohongshutools skill
pip install aiohttp loguru pycryptodome getuseragent requests

1.2 MCP Service Deployment

Xiaohongshu MCP (xpzouying/xiaohongshu-mcp)

Download and deploy:

cd ~/.openclaw/workspace/bin

# Download the binary
wget https://github.com/xpzouying/xiaohongshu-mcp/releases/download/v2026.02.28.1720-8a7fe21/xiaohongshu-mcp-linux-amd64.tar.gz
tar -xzf xiaohongshu-mcp-linux-amd64.tar.gz

# Log in (first run only; scan the QR code)
./xiaohongshu-login

# Start the service (runs in the background)
screen -dmS xhs-mcp ./xiaohongshu-mcp -headless=true

Service details:

  • Port: 18060
  • Endpoint: http://localhost:18060
  • Cookie file: ~/.openclaw/workspace/bin/cookies.json

Verify the service:

curl http://localhost:18060/api/v1/login/status

1.3 Database Initialization

Tables created automatically:


| Table | Purpose | Key fields |
|---|---|---|
| target_accounts | Managed target accounts (Reddit, etc.) | platform, api_config, positioning |
| source_accounts | Information-source accounts (Xiaohongshu, etc.) | login_status, daily_quota |
| crawl_tasks | Crawl tasks | status, query_list, target_count |
| crawl_results | Crawl results | source_url, content, quality_score |
| publish_tasks | Publish tasks | status, content, scheduled_at |
| publish_metrics_daily | Daily metrics per published item | metric_date, reddit_score |
| target_accounts_metrics_daily | Daily metrics per account | followers_change, engagement_rate |

Database location:

~/.openclaw/workspace/content-ops-workspace/data/content-ops.db

1.4 Account Configuration

Add a Xiaohongshu source account

npx tsx scripts/add-xhs-account.ts

Add a Reddit target account

npx tsx scripts/add-reddit-account.ts

2. Test Tasks

2.1 Test Xiaohongshu Crawling (no login required)

# Test search
curl -X POST http://localhost:18060/api/v1/feeds/search \
  -H "Content-Type: application/json" \
  -d '{"keyword": "AI人工智能", "filters": {"sort_by": "最多点赞"}}'

2.2 Test MCP Service Status

# Check login status
curl http://localhost:18060/api/v1/login/status

# Expected response:
# {"success": true, "data": {"is_logged_in": true, "username": "xxx"}}
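For scripting, that response can be checked without jq. A small sketch (the JSON shape is taken from the expected response above; the helper name is ours):

```shell
# Return 0 only when the login-status body reports is_logged_in: true.
check_login() {
  printf '%s' "$1" | grep -q '"is_logged_in": *true'
}

# Usage:
# body=$(curl -s http://localhost:18060/api/v1/login/status)
# check_login "$body" || echo "login required — rerun ./xiaohongshu-login"
```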

2.3 Test the Database Connection

# Show a data overview
npx tsx scripts/show-overview.ts

2.4 Full Test Flow

# 1. Create a test crawl task
npx tsx scripts/create-crawl-task.ts --keyword "AI教程" --count 5

# 2. Run the crawl
npx tsx scripts/execute-crawl.ts --task-id <task-id>

# 3. View the results
npx tsx scripts/show-crawl-results.ts --task-id <task-id>

# 4. Review (for testing: approve everything)
npx tsx scripts/approve-all.ts --task-id <task-id>

3. Production Tasks

3.1 Content Crawling Workflow

Step 1: Create a crawl task

npx tsx scripts/create-crawl-task.ts \
  --platform xiaohongshu \
  --keywords "AI人工智能,ChatGPT,AI工具" \
  --sort-by "最多点赞" \
  --target-count 50

Step 2: View the pending-review list

npx tsx scripts/show-crawl-results.ts --task-id <task-id>

Step 3: Manual review

# Approve selected items by number
npx tsx scripts/approve-items.ts --task-id <task-id> --items 1,2,3,5

# Or approve everything
npx tsx scripts/approve-all.ts --task-id <task-id>

Step 4: Add details (optional)

# List items that still need details
npx tsx scripts/show-pending-details.ts

# Import after the user supplies the details
npx tsx scripts/import-manual-detail.ts --input /tmp/manual_details.txt

3.2 Content Publishing Workflow

Step 1: Select corpus items and create a publish task

npx tsx scripts/create-publish-task.ts \
  --source-ids <note-id-1>,<note-id-2> \
  --target-platform reddit \
  --target-account <account-id>

Step 2: Generate content (AI redesign)

npx tsx scripts/generate-content.ts --task-id <publish-task-id>

Step 3: Review the generated content

npx tsx scripts/review-publish-content.ts --task-id <publish-task-id>

Step 4: Publish

npx tsx scripts/execute-publish.ts --task-id <publish-task-id>

3.3 Data Review Workflow

# Fetch yesterday's metrics
npx tsx scripts/fetch-metrics.ts --date yesterday

# Generate a data report
npx tsx scripts/generate-report.ts --period 7d
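If the review should run unattended, the two commands above can be scheduled from cron. A sketch (the paths assume the install location from section 1.1, the log path and weekly cadence for the report are our assumptions, and the security scan's persistence concerns apply — grant this deliberately):

```shell
# Crontab entries (add via `crontab -e`):
#
# 0 2 * * * cd /home/admin/.openclaw/workspace/skills/content-ops && npx tsx scripts/fetch-metrics.ts --date yesterday >> $HOME/content-ops-cron.log 2>&1
# 0 3 * * 1 cd /home/admin/.openclaw/workspace/skills/content-ops && npx tsx scripts/generate-report.ts --period 7d >> $HOME/content-ops-cron.log 2>&1
```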

4. Workflow Details

4.1 Content Crawling Flow

User confirms the topic
    ↓
Create a crawl task (crawl_tasks)
    ↓
Call /api/v1/feeds/search to fetch the list
    ↓
Save results to crawl_results (title, engagement data)
    ↓
Notify for manual confirmation
    ↓
Approved → mark as usable (curation_status='approved')
    ↓
(Optional) manually add the full body text

⚠️ Crawling limitations:

The Xiaohongshu web endpoint has strict anti-crawling protections:

  1. Search list ✅ works

    • Available: title, author, engagement data (likes/saves/comments), cover image
    • Identifiable: content type (video/normal)
  2. Detail endpoint ❌ restricted

    • Most notes return "note not accessible" or empty data
    • Unavailable: full body text, comment list
    • Cause: Xiaohongshu's app-only content restriction

4.2 Manually Assisted Detail Import

When automated crawling cannot fetch details, they can be supplied manually:

View the list awaiting details:

npx tsx scripts/show-pending-details.ts

Format for user-supplied details:

Detail 1
[paste the body text of the first note]
---
Detail 3
[paste the body text of the third note]
---

Import into the database:

npx tsx scripts/import-manual-detail.ts --input /tmp/manual_details.txt

The data is saved to both:

  • the content field of the crawl_results table
  • JSON files under the corpus/manual/ directory
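As an illustration of what the import step has to do, the delimiter format in this section can be split into one file per note with a short awk sketch (the real logic lives in scripts/import-manual-detail.ts; the helper and output file names here are made up, and the header regex accepts both the Chinese "详情 N" and an English "Detail N" form):

```shell
# Split a manual-details file ("Detail N" header, body lines, "---" separator)
# into one detail-N.txt file per note under the output directory.
split_details() {
  input="$1"; outdir="$2"
  awk -v dir="$outdir" '
    /^(Detail|详情) [0-9]+$/ { file = dir "/detail-" $2 ".txt"; next }
    /^---$/                  { file = ""; next }
    file != ""               { print > file }
  ' "$input"
}
```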

4.3 Content Publishing Flow

Select usable corpus items (crawl_results)
    ↓
Create a publish task (publish_tasks) - status='draft'
    ↓
AI generates content from the corpus → status='pending_review'
    ↓
Manual review → status='approved'
    ↓
Scheduled publishing → status='scheduled' → 'published'
    ↓
Fetch daily metrics (publish_metrics_daily)
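The status chain above can be encoded as a guard so scripts reject out-of-order transitions. A sketch (state names come from the flow diagram; the function itself is illustrative, not part of the skill):

```shell
# Allow only the transitions shown in the publishing flow diagram.
can_transition() {
  case "$1>$2" in
    'draft>pending_review'|'pending_review>approved'|'approved>scheduled'|'scheduled>published')
      return 0 ;;
    *) return 1 ;;
  esac
}
```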

5. Reference Documents

| Document | Description | Audience |
|---|---|---|
| Operations manual | Full workflow, from installation to day-to-day operations | 👤 required reading for users |
| Quick-start guide | Up and running in 10 minutes | 👤 new users |
| Database schema | Complete table structure | 🤖 developers |
| Detailed process design | Multi-agent collaboration flow | 🤖 developers |

Common Queries

Dashboard overview data:

const stats = await queries.getOverviewStats();
// {
//   activeAccounts: 5,
//   todayScheduledTasks: 3,
//   pendingCorpus: 20,
//   availableCorpus: 150,
//   weeklyPublished: 21
// }

7-day account trend:

const trend = await queries.getAccountTrend(accountId, 7);

Top-performing content:

const topContent = await queries.getTopPerformingContent(accountId, 30, 10);

Database Backup

# A plain file copy is a complete backup
cp ~/.openclaw/workspace/content-ops-workspace/data/content-ops.db \
   ~/.openclaw/workspace/content-ops-workspace/data/backup-$(date +%Y%m%d).db
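The copy can be wrapped with a verification step so a truncated backup is caught immediately. A sketch (the function name is ours; note that copying while a writer is active can capture an inconsistent snapshot, so stop the services or use sqlite3's `.backup` command in that case):

```shell
# Copy the database to a dated backup and confirm the copy is byte-identical.
backup_db() {
  src="$1"; dest_dir="$2"
  dest="$dest_dir/backup-$(date +%Y%m%d).db"
  cp "$src" "$dest" && cmp -s "$src" "$dest" && echo "$dest"
}

# Usage:
# backup_db ~/.openclaw/workspace/content-ops-workspace/data/content-ops.db \
#           ~/.openclaw/workspace/content-ops-workspace/data
```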

Directory Structure

~/.openclaw/workspace/content-ops-workspace/
├── data/
│   └── content-ops.db          # SQLite database file
├── accounts/                    # Markdown account profiles
├── strategies/                  # Operations strategy docs
├── corpus/
│   ├── raw/                    # Raw crawled corpus
│   ├── manual/                 # Manually imported corpus
│   └── published/              # Published content
└── reports/                    # Data reports

Quick Checklist

Pre-deployment checks

  • Node.js dependencies installed (npm install)
  • Database migrations applied (npx drizzle-kit migrate)
  • Xiaohongshu MCP service running (curl http://localhost:18060/api/v1/login/status)
  • Cookie file present (~/.openclaw/workspace/bin/cookies.json)
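The checklist above can be run as one script. A sketch (check labels are ours; the endpoint and paths come from this guide, and the script must be run from the skill directory for the dependency check to make sense):

```shell
# Print OK/FAIL for each pre-deployment check; returns 0 only if all pass.
preflight() {
  ok=0
  [ -d node_modules ] && echo "OK   deps" || { echo "FAIL deps"; ok=1; }
  [ -f "$HOME/.openclaw/workspace/bin/cookies.json" ] \
    && echo "OK   cookie" || { echo "FAIL cookie"; ok=1; }
  if command -v curl >/dev/null 2>&1 \
     && curl -sf --max-time 5 http://localhost:18060/api/v1/login/status >/dev/null; then
    echo "OK   mcp"
  else
    echo "FAIL mcp"; ok=1
  fi
  return $ok
}
```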

Test-task checks

  • MCP login status OK
  • Test search returns results
  • Database accepts writes
  • Review flow works

Production-task checks

  • Source accounts added (source_accounts)
  • Target accounts added (target_accounts)
  • Crawl task creation succeeds
  • Publish tasks generate content correctly

Files

108 total