Social Media Data Collector
Review
Audited by ClawScan on May 15, 2026.
Overview
The skill is coherent and aligned with its stated purpose of collecting social media metrics, but it handles credentials, calls external APIs, performs browser scraping, and updates a Feishu Bitable; users should scope each of these capabilities carefully.
This skill appears safe for its stated purpose if you intend to collect social media metrics and update a Feishu Bitable. Before using it, confirm the exact URLs, table ID, and record IDs, use least-privilege TikHub and Feishu credentials, and restrict cleanup to files created by this run.
Findings (5)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Using this skill with Feishu credentials can grant the script authority to update records in the specified Bitable app/table.
The script accepts a Feishu app secret, exchanges it for a tenant access token, and uses that token for Bitable access. This is purpose-aligned, but it is sensitive credential handling.
```python
parser.add_argument("--app-secret", default=os.environ.get("FEISHU_APP_SECRET") ...
token = get_token(args.app_id, args.app_secret)
```

Use a least-privilege Feishu app, store secrets securely, verify the app token and table ID before running, and rotate credentials if they are exposed.
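The recommendation above can be sketched as a small preflight that reads the secret only from the environment (never from argv, where it would leak into process listings) and exchanges it for a tenant token. The endpoint and response fields follow Feishu's open API; `require_secret` is a hypothetical helper, not part of the skill:

```python
import os
import json
import urllib.request

def require_secret() -> str:
    """Read the app secret from the environment only; fail fast if unset."""
    secret = os.environ.get("FEISHU_APP_SECRET")
    if not secret:
        raise SystemExit("FEISHU_APP_SECRET is not set")
    return secret

def get_tenant_token(app_id: str, app_secret: str) -> str:
    """Exchange app credentials for a Feishu tenant access token."""
    req = urllib.request.Request(
        "https://open.feishu.cn/open-apis/auth/v3/tenant_access_token/internal",
        data=json.dumps({"app_id": app_id, "app_secret": app_secret}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    if body.get("code") != 0:
        raise RuntimeError(f"token request failed: {body.get('msg')}")
    return body["tenant_access_token"]
```

Keeping the secret out of command-line flags entirely (rather than offering `--app-secret` with an env fallback) removes one common exposure path.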
Incorrect table IDs, record IDs, or extracted metrics could overwrite fields in the target Bitable records.
The script performs batch updates to Feishu Bitable records based on the provided results file. This matches the skill purpose, but it is a mutating external-account action.
```python
url = f"https://open.feishu.cn/open-apis/bitable/v1/apps/{app_token}/tables/{table_id}/records/batch_update"
```

Review the results JSON and target record IDs before writing, keep a backup/export of the table, and run updates only after confirming the target table.
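A dry-run step along these lines makes the batch update reviewable before anything is written. The results-file shape (a list of entries with `record_id` and `fields`) and the `send` callback are assumptions for illustration, not the skill's actual format:

```python
import json

def plan_updates(results_path: str) -> dict:
    """Load the results file and build the batch_update payload
    without sending it, so record IDs and fields can be reviewed first."""
    with open(results_path) as f:
        results = json.load(f)
    records = [
        {"record_id": r["record_id"], "fields": r["fields"]}
        for r in results
    ]
    return {"records": records}

def confirm_and_send(payload: dict, send):
    """Print the planned writes and only call `send` after an explicit yes."""
    for rec in payload["records"]:
        print(rec["record_id"], "->", sorted(rec["fields"]))
    if input(f"Update {len(payload['records'])} records? [y/N] ").lower() == "y":
        return send(payload)
    print("aborted, nothing written")
    return None
```

Separating planning from sending also makes it easy to diff the payload against a table export before committing the write.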
Selected social media URLs or content identifiers, along with the TikHub authorization token, are sent to a third-party API provider.
The API tier sends user-provided social media URLs to the TikHub API. This is disclosed and expected for API-based metric collection, but it is an external provider data flow.
```python
TIKHUB_BASE = "https://api.tikhub.io"
...
api_url = f"{TIKHUB_BASE}{config['path']}?{config['param']}={url}"
```

Only submit URLs that are appropriate for third-party processing, and use a scoped TikHub token where possible.
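One concrete hardening for the interpolated query string above is to percent-encode the target URL, since a raw `&` or `?` inside a social media URL would otherwise alter the request. `build_api_url` is a hypothetical helper mirroring the snippet, not the skill's code:

```python
from urllib.parse import urlencode

TIKHUB_BASE = "https://api.tikhub.io"

def build_api_url(path: str, param: str, target_url: str) -> str:
    """Build the TikHub request URL with the target percent-encoded,
    so separators inside the social media URL can't change the query."""
    return f"{TIKHUB_BASE}{path}?{urlencode({param: target_url})}"
```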
If interpreted too broadly, cleanup could remove unrelated temporary JSON files or other files the user still needs.
The cleanup guidance is intended to remove generated artifacts, but the workspace JSON wording is broader than a fixed generated-file list.
```
After each collection run, delete: ... `/tmp/screenshots/` ... Any `.json` temp files in workspace
```
Limit cleanup to known generated paths such as `/tmp/sm-collect/` and explicitly confirm before deleting any workspace files.
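A minimal sketch of manifest-based cleanup, assuming the run records every file it creates and that a directory such as `/tmp/sm-collect/` is the generated-file root (both assumptions, not the skill's actual behavior):

```python
import os

def cleanup(manifest: list[str], allowed_root: str = "/tmp/sm-collect") -> list[str]:
    """Delete only files this run recorded in its manifest and that live
    under the allowed root; return the paths actually removed."""
    removed = []
    root = os.path.realpath(allowed_root)
    for path in manifest:
        real = os.path.realpath(path)
        if not real.startswith(root + os.sep):
            continue  # refuse anything outside the run's own directory
        if os.path.isfile(real):
            os.remove(real)
            removed.append(real)
    return removed
```

Resolving paths with `realpath` before the prefix check also blocks symlinks from pointing the deletion outside the allowed root.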
Following the setup prompt installs third-party packages and browser binaries into the local environment.
The browser tier depends on Playwright/Chromium and suggests installing them with an unpinned command if absent. This is not automatic, but it is an external dependency setup step.
```python
print("ERROR: playwright not installed. Run: pip install playwright && playwright install chromium")
```

Install dependencies from trusted package sources, consider pinning versions, and review the environment before running browser scraping.
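A small preflight check along these lines can enforce a version pin before scraping; the `1.44.0` pin is purely illustrative, not a vetted release:

```python
import importlib.metadata

# Illustrative pin only; replace with a release you have vetted.
PLAYWRIGHT_PIN = "1.44.0"

def check_pin(package: str, pinned: str) -> str:
    """Report whether the installed package matches the pinned version."""
    try:
        installed = importlib.metadata.version(package)
    except importlib.metadata.PackageNotFoundError:
        return f"missing: pip install {package}=={pinned}"
    if installed != pinned:
        return f"mismatch: have {installed}, expected {pinned}"
    return "ok"
```

Running `check_pin("playwright", PLAYWRIGHT_PIN)` before launching the browser tier turns the unpinned install suggestion into a reproducible, reviewable setup step.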
