Social Media Data Collector

Audited by ClawScan on May 15, 2026.

Overview

The skill is coherent and purpose-aligned for collecting social media metrics, but it uses credentials, external APIs, browser scraping, and Feishu Bitable updates that users should scope carefully.

This skill appears safe for its stated purpose if you intend to collect social media metrics and update a Feishu Bitable. Before using it, confirm the exact URLs, table ID, and record IDs, use least-privilege TikHub and Feishu credentials, and restrict cleanup to files created by this run.

Findings (5)

Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.

Finding 1: Feishu credential handling

What this means

Using this skill with Feishu credentials can grant the script authority to update records in the specified Bitable app/table.

Why it was flagged

The script accepts a Feishu app secret, obtains a tenant access token, and uses it for Bitable access. This is purpose-aligned, but it involves sensitive credential handling.

Skill content
`parser.add_argument("--app-secret", default=os.environ.get("FEISHU_APP_SECRET") ... token = get_token(args.app_id, args.app_secret)`
Recommendation

Use a least-privilege Feishu app, store secrets securely, verify the app token and table ID before running, and rotate credentials if they are exposed.
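One way to keep the secret out of the script and its logs is to read it from the environment only at token time. A minimal sketch, assuming Feishu's standard internal-app token endpoint (the helper names here are illustrative, not taken from the skill):

```python
import json
import os
import urllib.request

# Endpoint per Feishu's open API docs; verify against current documentation.
TOKEN_URL = "https://open.feishu.cn/open-apis/auth/v3/tenant_access_token/internal"

def build_token_request(app_id: str, app_secret: str) -> urllib.request.Request:
    """Build the tenant_access_token request without ever printing the secret."""
    body = json.dumps({"app_id": app_id, "app_secret": app_secret}).encode()
    return urllib.request.Request(
        TOKEN_URL, data=body, headers={"Content-Type": "application/json"}
    )

def get_tenant_token() -> str:
    # Read credentials from the environment rather than hard-coding them.
    app_id = os.environ["FEISHU_APP_ID"]
    app_secret = os.environ["FEISHU_APP_SECRET"]
    with urllib.request.urlopen(build_token_request(app_id, app_secret)) as resp:
        payload = json.load(resp)
    if payload.get("code") != 0:  # a non-zero code means the request failed
        raise RuntimeError(f"token request failed with code {payload.get('code')}")
    return payload["tenant_access_token"]
```

Keeping the request-building step separate from the network call makes it easy to inspect exactly what leaves the machine before granting the app any Bitable scope.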

Finding 2: Mutating Bitable batch updates

What this means

Incorrect table IDs, record IDs, or extracted metrics could overwrite fields in the target Bitable records.

Why it was flagged

The script performs batch updates to Feishu Bitable records based on the provided results file. This matches the skill purpose, but it is a mutating external-account action.

Skill content
`url = f"https://open.feishu.cn/open-apis/bitable/v1/apps/{app_token}/tables/{table_id}/records/batch_update"`
Recommendation

Review the results JSON and target record IDs before writing, keep a backup/export of the table, and run updates only after confirming the target table.
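The backup-then-verify step can be scripted rather than done by hand. A sketch, under the assumption that the caller has already fetched the table's records (the function names and backup directory are illustrative):

```python
import json
import time
from pathlib import Path

def backup_records(records: list, out_dir: str = "/tmp/sm-collect") -> Path:
    """Write a timestamped snapshot of the fetched records before any write."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    path = out / f"bitable-backup-{int(time.time())}.json"
    path.write_text(json.dumps(records, ensure_ascii=False, indent=2))
    return path

def safe_updates(existing_ids: set, updates: list) -> list:
    """Refuse to write if any update targets a record ID absent from the table."""
    missing = [u["record_id"] for u in updates if u["record_id"] not in existing_ids]
    if missing:
        raise ValueError(f"unknown record IDs, refusing to write: {missing}")
    return updates
```

Calling `safe_updates` before the batch_update request turns a silent mis-targeted write into a hard failure, and the snapshot gives a restore point if a field is overwritten anyway.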

Finding 3: Data sent to the TikHub API

What this means

Selected social media URLs or content identifiers, along with the TikHub authorization token, are sent to a third-party API provider.

Why it was flagged

The API tier sends user-provided social media URLs to the TikHub API. This is disclosed and expected for API-based metric collection, but it is an external provider data flow.

Skill content
`TIKHUB_BASE = "https://api.tikhub.io" ... api_url = f"{TIKHUB_BASE}{config['path']}?{config['param']}={url}"`
Recommendation

Only submit URLs that are appropriate for third-party processing and use a scoped TikHub token where possible.
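Note that the excerpt interpolates the target URL into the query string unencoded. A safer pattern percent-encodes it first, so characters like `&` or `#` inside a submitted URL cannot alter the query sent to the provider. A sketch (the path and parameter names below are hypothetical placeholders, not TikHub's actual routes):

```python
from urllib.parse import quote

TIKHUB_BASE = "https://api.tikhub.io"

def build_api_url(path: str, param: str, target_url: str) -> str:
    # Percent-encode the submitted URL so reserved characters inside it
    # cannot inject extra query parameters.
    return f"{TIKHUB_BASE}{path}?{param}={quote(target_url, safe='')}"
```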

Finding 4: Overly broad cleanup scope

What this means

If interpreted too broadly, cleanup could remove unrelated temporary JSON files or other files the user still needs.

Why it was flagged

The cleanup guidance is intended to remove generated artifacts, but the workspace JSON wording is broader than a fixed generated-file list.

Skill content
After each collection run, delete: ... `/tmp/screenshots/` ... Any `.json` temp files in workspace
Recommendation

Limit cleanup to known generated paths such as `/tmp/sm-collect/` and explicitly confirm before deleting any workspace files.
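An allowlist check can enforce that restriction mechanically. A minimal sketch, assuming the run writes only under the two directories shown (adjust `ALLOWED_ROOTS` to whatever this run actually created):

```python
from pathlib import Path

# Assumed run directories; substitute the paths this run actually created.
ALLOWED_ROOTS = [Path("/tmp/sm-collect"), Path("/tmp/screenshots")]

def is_safe_to_delete(path: str) -> bool:
    """True only if the fully resolved path sits inside an allowed root."""
    p = Path(path).resolve()
    return any(p.is_relative_to(root.resolve()) for root in ALLOWED_ROOTS)
```

Resolving both sides before comparing means `..` segments or symlinks cannot smuggle a workspace file past the check, which addresses the "any `.json` temp files in workspace" ambiguity directly.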

Finding 5: Unpinned dependency installation

What this means

Following the setup prompt installs third-party packages and browser binaries into the local environment.

Why it was flagged

The browser tier depends on Playwright/Chromium and suggests installing them with an unpinned command if absent. This is not automatic, but it is an external dependency setup step.

Skill content
`print("ERROR: playwright not installed. Run: pip install playwright && playwright install chromium")`
Recommendation

Install dependencies from trusted package sources, consider pinning versions, and review the environment before running browser scraping.
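A pinned variant of the suggested command might look like the following; the version number is a placeholder, not a recommendation — substitute one you have vetted:

```shell
# Pin the Playwright version so repeat installs are reproducible
# (the version below is a placeholder; pick one you have reviewed).
pip install "playwright==1.49.0"
python -m playwright install chromium
```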