X Tweet Fetcher
Advisory. Audited by static analysis on Apr 30, 2026.
Overview
No suspicious patterns detected.
Findings (0)
This is an artifact-based, informational review of SKILL.md, metadata, install specs, static-scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
If invoked, the agent may scrape sites through mechanisms intended to evade bot or fingerprinting controls, which can create account, legal, or service-abuse risk for the user.
The skill’s advanced workflow is explicitly built around anti-detection browser automation and bypassing site protections, which is materially riskier than normal public API or webpage fetching.
Camofox is an anti-detection browser service ... It bypasses:
- Cloudflare bot detection
- Browser fingerprinting
- JavaScript challenges
Use Camofox-backed features only with explicit approval and only where automated access is permitted; keep basic zero-dependency tweet fetching separate from bypass workflows.
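The "explicit approval" separation above can be sketched as a simple opt-in gate. This is a hypothetical mitigation, not code from the skill; `choose_fetcher` and the `CAMOFOX_APPROVED` flag are assumed names.

```python
import os

def choose_fetcher(use_camofox: bool) -> str:
    """Pick a fetch mode; the Camofox bypass path requires explicit opt-in.

    CAMOFOX_APPROVED is a hypothetical approval flag, not defined by the skill.
    """
    if use_camofox and os.environ.get("CAMOFOX_APPROVED") == "1":
        return "camofox"  # anti-detection browser path, explicitly approved
    return "basic"        # zero-dependency tweet fetching, the default
```

Keeping the default path free of any bypass machinery means a misconfiguration fails closed rather than open.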
If used, the skill can operate through the user’s SSH credentials and run code on another machine, which is a significant delegated privilege beyond simple web fetching.
The optional Sogou/WeChat search path can use the user’s SSH authority to copy and execute a Python script on a remote host.
host = ssh_host or os.environ.get("SOGOU_SSH_HOST")
...
subprocess.run(["scp", ... local_path, f"{host}:{remote_path}"])
...
subprocess.run(["ssh", "-o", "ConnectTimeout=5", host, "python3", remote_path])

Only use the SSH mode with a trusted, isolated host; require explicit user confirmation before invoking it and document exactly what remote code is copied and executed.
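The confirmation requirement can be sketched as a thin wrapper around the scp/ssh calls. `run_remote_script` and the `confirm` callback are hypothetical names, not part of the skill.

```python
import subprocess

def run_remote_script(host: str, local_path: str, remote_path: str, confirm) -> bool:
    """Copy and run a script over SSH only after explicit user confirmation.

    `confirm` is a hypothetical callback (e.g. an interactive prompt) that
    must return True before any remote code is copied or executed.
    """
    prompt = f"Copy {local_path} to {host}:{remote_path} and run it with python3?"
    if not confirm(prompt):
        return False  # refuse silently delegated remote execution
    subprocess.run(["scp", local_path, f"{host}:{remote_path}"], check=True)
    subprocess.run(["ssh", "-o", "ConnectTimeout=5", host,
                    "python3", remote_path], check=True)
    return True
```

The wrapper also gives the user a single auditable point where the exact copied file and remote command are visible.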
Using imported cookies may grant the agent access to a logged-in account session, and the visible instructions do not clearly bound cookie source, scope, retention, or outputs.
The skill advertises optional login-cookie use for some platforms, while the top-level framing emphasizes fetching without login or API keys, and the provided registry requirements declare no credentials.
Zhihu / Xiaohongshu | ⚠️ | Needs cookie import for login
Avoid cookie-import features unless necessary; use narrowly scoped disposable sessions where possible and require explicit approval before any cookie-backed request.
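One way to bound cookie scope, as the mitigation suggests, is to import only session-scoped cookies for an explicit domain allowlist. This is a hypothetical sketch using the standard library; `ALLOWED_DOMAINS` and `import_cookies` are assumed names, not part of the skill.

```python
from http.cookiejar import Cookie, CookieJar

ALLOWED_DOMAINS = {".zhihu.com"}  # hypothetical allowlist, not defined by the skill

def import_cookies(raw: list[dict]) -> CookieJar:
    """Load only cookies whose domain is explicitly approved; drop the rest."""
    jar = CookieJar()
    for c in raw:
        if c["domain"] not in ALLOWED_DOMAINS:
            continue  # never import session cookies for unrelated sites
        jar.set_cookie(Cookie(
            version=0, name=c["name"], value=c["value"], port=None,
            port_specified=False, domain=c["domain"], domain_specified=True,
            domain_initial_dot=c["domain"].startswith("."), path="/",
            path_specified=True, secure=True, expires=None, discard=True,
            comment=None, comment_url=None, rest={}, rfc2109=False))
    return jar
```

Setting `discard=True` makes every imported cookie session-only, so nothing persists past the run, matching the "disposable sessions" advice.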
A separate router or home-IP agent could execute queued commands or expose search terms outside the local OpenClaw session if this mode is enabled or misconfigured.
The skill can write commands to a router/VPS command-queue workflow, but the artifacts do not define authentication, authorization, or containment for that external agent.
Router polls VPS every minute, executes queued commands, pushes results back ...
queue_file = os.environ.get("ROUTER_CMD_QUEUE", "/root/router-agent/cmd-queue")
...
f.write(cmd)

Use router-queue mode only in a trusted, isolated setup with clear authentication and auditing; do not allow autonomous use of this path.
Installing the optional Camofox component adds third-party code and a local browser service that can visit sites on the user’s behalf.
Advanced features depend on an external browser service/plugin that is not installed by this skill and is not part of the reviewed code bundle.
openclaw plugins install @askjo/camofox-browser
...
git clone https://github.com/jo-inc/camofox-browser
npm install && npm start
Inspect and trust the Camofox package separately before installing it; keep it stopped when not needed.
If the user adds these cron jobs, the skill can continue sampling tracked tweets on a schedule after the initial interaction.
The documentation includes cron examples for recurring monitoring; this is user-directed, but it creates persistent periodic activity if installed.
*/15 * * * * python3 tweet_growth_cli.py --run --fast
...
0 * * * * python3 tweet_growth_cli.py --run --normal
Only add cron jobs intentionally, review their frequency, and remove them when monitoring is no longer needed.
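To audit for leftover jobs, the output of `crontab -l` can be run through a small filter like the one below. This is a hypothetical helper, not part of the skill.

```python
def find_monitoring_jobs(crontab_text: str,
                         marker: str = "tweet_growth_cli.py") -> list[str]:
    """Return active crontab lines that run the growth tracker, for review/removal."""
    return [line for line in crontab_text.splitlines()
            if marker in line and not line.lstrip().startswith("#")]
```

Any lines it returns can then be removed with `crontab -e` once monitoring is no longer needed.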
Tracked tweet IDs, labels, samples, or discovery data may remain on disk between sessions.
Growth tracking stores persistent local data and discovery cache files under the user’s home directory by default.
DATA_FILE = Path(os.environ.get("TWEET_GROWTH_DATA", Path.home() / ".tweet-growth" / "data.json"))
...
DISCOVER_CACHE = ... Path.home() / ".tweet-growth" / "discover_cache.json"

Review or delete the .tweet-growth data directory when no longer needed, and avoid storing sensitive labels or private monitoring targets there.
