Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

lastXdays

v1.0.2

Research and summarize what happened in the last N days (or a date range) about a topic, optionally using Reddit API and X ingestion via x-cli/API/archive wi...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for levineam/lastxdays.

Prompt preview: Install & Setup
Install the skill "lastXdays" (levineam/lastxdays) from ClawHub.
Skill page: https://clawhub.ai/levineam/lastxdays
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install lastxdays

ClawHub CLI


npx clawhub@latest install lastxdays
Security Scan

  • VirusTotal: Suspicious
  • OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name/description (summarize the last N days about a topic) aligns with the instructions (web search + optional Reddit/X ingestion). However, the SKILL.md repeatedly instructs running local Node scripts (scripts/lastxdays_ingest.js, scripts/lastxdays_range.js) and using x-cli/local archives without shipping those scripts or binaries. That mismatch (an instruction-only skill that expects non-shipped helper scripts) is an ownership/coherence problem: either the skill should include the scripts, or it should make clear they are optional/external.
Instruction Scope
Instructions explicitly tell the agent to read local credential/config files (~/.config/last30days/.env, ~/.config/x-cli/.env) and to examine a local archive path (~/clawd/data/x-archive/). They also tell the agent to run node scripts and x-cli if present. Reading those files and archives is sensitive and goes beyond pure web searching; the skill's metadata declared no required config paths, so the SKILL.md is instructing access to user-local files that were not declared up-front.
Install Mechanism
There is no install spec (instruction-only), so nothing will be written to disk by an installer. However the SKILL.md expects external tools (node, x-cli, optional uv tool install) and non-packaged scripts. That increases runtime fragility and the potential for the agent to attempt remote installs or to ask the user to install tools — behavior users should be aware of.
Credentials
The SKILL.md describes optional but sensitive environment variables for Reddit (client id/secret, refresh token or username/password) and X (bearer token). Requesting such credentials is proportionate if the user explicitly chooses Reddit/X ingestion, but the skill also instructs reading ~/.config/last30days/.env and other local credential files which were not declared in the skill metadata. That implicit file access and the inclusion of username/password as an allowed credential is a privacy risk that should be justified explicitly before providing secrets.
Persistence & Privilege
always:false and no install spec mean the skill does not force permanent presence or system-wide changes. It also does not claim to modify other skills. Still, runtime behavior may read local config/archives and attempt to invoke external CLIs (x-cli) or node scripts — so while it does not request elevated installation privileges, it does request potentially sensitive local reads at runtime.
What to consider before installing
Before installing or invoking this skill:

  1. Be aware it expects helper Node scripts (scripts/lastxdays_ingest.js, lastxdays_range.js) and CLI tooling (node, x-cli) that are NOT included; ask the skill author to provide those scripts or confirm they are optional.
  2. The skill may read local files (~/.config/last30days/.env, ~/.config/x-cli/.env) and a local X archive (~/clawd/data/x-archive/); do not expose credentials or sensitive files unless you trust the skill and have reviewed the code that will use them.
  3. If you want Reddit/X ingestion, prefer scoped read-only tokens (OAuth refresh tokens or API tokens) rather than username/password in environment variables.
  4. If you are uncomfortable granting local-file access or secrets, use this skill in web-only mode, or ask for an explicit, minimal list of required credentials and for the missing scripts to be bundled.
  5. Consider running the skill in a sandboxed environment, or ask the author to include and review the helper scripts before granting access.

Like a lobster shell, security has layers — review code before you run it.

latest: vk9735bj194pn1rjap7wjq3t7ex81k1qf
735 downloads · 0 stars · 3 versions · Updated 17h ago
v1.0.2 · MIT-0

lastXdays Skill

Summarize what happened in the last N days (or a specific YYYY-MM-DD → YYYY-MM-DD range) about a user-provided topic.

Default behavior is web-first (web_search + selective web_fetch). If optional credentials/data are available, you may ingest Reddit via API and X via x-cli (preferred), API, or local archive, falling back to web search if unavailable.

Trigger Patterns

Activate when the user message contains any of:

  • lastXdays / lastxdays
  • last x days
  • A question like: "what happened in the last N days" (optionally followed by a topic)

Default Model

Default to sonnet (anthropic/claude-sonnet-4-6) when spawning as a subagent.

Use flash (openrouter/google/gemini-2.0-flash-001) only for simple single-source lookups (one topic, one platform, straightforward synthesis with no file reading required). Flash is unreliable for multi-step agentic work requiring tool chaining (search → fetch → read files → synthesize → report). When in doubt, use sonnet.

Input Parsing

Parse from the user message:

1) Date range (preferred if explicit)

If the user supplies a range like:

  • from 2026-01-10 to 2026-02-08
  • 2026-01-10 → 2026-02-08

Then:

  • start = YYYY-MM-DD
  • end = YYYY-MM-DD
  • Ignore N if both are present.

2) Days (N)

Otherwise, infer N:

  • Look for an integer N associated with the request, e.g.:
    • lastxdays 7 <topic>
    • 7 lastxdays <topic>
    • what happened in the last 14 days (about|re:) <topic>
  • Default: N = 30
  • Clamp: N = min(max(N, 1), 365)
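
The default-and-clamp rule above can be sketched as a small helper. The function name and regex are illustrative, not part of the skill, and the pattern only covers the example phrasings listed here:

```javascript
// Hypothetical sketch of the N-parsing rule (not shipped with the skill).
function parseDays(message) {
  // Match "lastxdays 7 <topic>", "7 lastxdays <topic>", or "... last 14 days ..."
  const match = message.match(/(?:lastxdays\s+(\d+)|(\d+)\s+lastxdays|last\s+(\d+)\s+days)/i);
  const raw = match ? parseInt(match[1] || match[2] || match[3], 10) : 30; // default N = 30
  return Math.min(Math.max(raw, 1), 365); // clamp to [1, 365]
}
```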

3) Sources (optional)

Supported: web|reddit|x|all.

Accept any of:

  • for web / sources web
  • for reddit / sources reddit
  • for x / sources x
  • for all / sources all

If unspecified: sources = all.

4) Topic

  • The remaining text (after removing trigger words, N/range, and source phrases) is the topic.
  • If topic is empty/unclear, ask exactly one clarifying question and stop.

Date Range (freshness)

Use an inclusive range in local time:

  • freshness = start + "to" + end (e.g., 2026-01-10to2026-02-08)

Helper for “last N days”:

  • node scripts/lastxdays_range.js <N>
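
The helper script is not bundled with this skill (as the scan notes), but a minimal sketch of what scripts/lastxdays_range.js might compute, assuming an inclusive range of N calendar days in local time ending today, is:

```javascript
// Sketch only: the real scripts/lastxdays_range.js is not included with the skill.
function lastXDaysRange(n, today = new Date()) {
  const fmt = (d) =>
    `${d.getFullYear()}-${String(d.getMonth() + 1).padStart(2, '0')}-${String(d.getDate()).padStart(2, '0')}`;
  const start = new Date(today);
  start.setDate(start.getDate() - (n - 1)); // inclusive: N days ending today
  const end = fmt(today);
  return { start: fmt(start), end, freshness: `${fmt(start)}to${end}` };
}
```

Under this reading, N = 30 ending on 2026-02-08 yields the 2026-01-10 → 2026-02-08 range used as the example above.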

Optional non-web ingestion (Reddit/X)

Use this helper to ingest Reddit/X when possible:

  • node scripts/lastxdays_ingest.js --source=reddit|x --topic "..." --start YYYY-MM-DD --end YYYY-MM-DD --limit 40

The script attempts:

  • Reddit: official API via OAuth (if credentials exist), else returns fallback:true
  • X: x-cli search first (if installed/configured), then Twitter API v2 recent search (if bearer token + range <= ~7 days), then local archive at ~/clawd/data/x-archive/, else returns fallback:true
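
A simplified, synchronous sketch of that tiered fallback (the real script is not included, so the provider shape and names here are assumptions):

```javascript
// Try each source in order (e.g. x-cli → API v2 → local archive); first one
// that is available and returns items wins, else signal fallback to web search.
function ingestX(opts, providers) {
  for (const p of providers) {
    if (!p.available(opts)) continue; // e.g. x-cli not installed, no bearer token
    const items = p.search(opts);
    if (items.length > 0) return { fallback: false, mode: p.mode, items };
  }
  return { fallback: true, items: [] }; // caller falls back to web_search
}
```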

Required environment variables (if you want API mode):

  • Reddit:

    • REDDIT_CLIENT_ID
    • REDDIT_CLIENT_SECRET
    • either REDDIT_REFRESH_TOKEN (recommended) or REDDIT_USERNAME + REDDIT_PASSWORD
    • optional: REDDIT_USER_AGENT
  • X API (optional; only works for recent ranges on most tiers):

    • X_BEARER_TOKEN (also accepts TWITTER_BEARER_TOKEN)
  • x-cli (optional, preferred for agent use):

    • Install: uv tool install x-cli (or from source)
    • Configure credentials in ~/.config/x-cli/.env (supports shared setup with x-mcp)
    • If present, lastxdays_ingest.js uses it before raw API/archive for X search

Credentials loader:

  • Reads ~/.config/last30days/.env if present (does not hard-fail if missing)
  • Environment variables override .env values (file only fills blanks)

Research Procedure

  1. Compute start/end/freshness.

  2. For each requested source:

Web

  • Query: <topic>
  • Run web_search with freshness (count 5–8)
  • Optionally web_fetch 2–6 best links

Reddit

Preferred:

  • Run node scripts/lastxdays_ingest.js --source=reddit ...
  • If it returns fallback:false, treat returned items[] as “Notable links” (each has a Reddit permalink URL).
  • If items[] is empty / too small to be useful (e.g., <3), you may also run the web fallback to broaden coverage.

Fallback (if fallback:true):

  • Run web_search with query site:reddit.com/r <topic> and the same freshness

X

Preferred:

  • Run node scripts/lastxdays_ingest.js --source=x ...
  • If mode=x-cli, mode=api, or mode=archive, treat returned items[] as “Notable links” (each has a URL)
    • If mode=x-cli, note that X results came from local x-cli execution
    • If mode=archive, note that links come from the local X archive
  • If items[] is empty / too small to be useful (e.g., <3), you may also run the web fallback to broaden coverage.

Fallback (if fallback:true):

  • Run web_search with query site:x.com <topic> and the same freshness
  • Expect web_fetch to fail often on x.com; rely on snippets when needed
  3. Select and deduplicate links/items:
  • Prefer authoritative sources for Web
  • Prefer high-engagement or highly-informative posts for Reddit/X
  • Keep total links/items shown to ~10–20 max
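
One way to implement the deduplication and cap, with an illustrative URL-normalization rule (the skill does not specify one):

```javascript
// Keep the first occurrence of each normalized URL, up to `max` items.
function dedupeLinks(items, max = 20) {
  const seen = new Set();
  const out = [];
  for (const item of items) {
    const key = item.url.replace(/[?#].*$/, '').replace(/\/$/, ''); // strip query/fragment, trailing slash
    if (seen.has(key)) continue;
    seen.add(key);
    out.push(item);
    if (out.length === max) break;
  }
  return out;
}
```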

Output Format (Markdown)

Title:

  • ## lastXdays — <N> days — <topic>
    • If an explicit range was used, you may replace <N> days with YYYY-MM-DD → YYYY-MM-DD.

Then include sections in this order:

  1. Date range used
  • YYYY-MM-DD → YYYY-MM-DD (and optionally the freshness string)
  2. Top themes
  • 3–7 bullets summarizing the dominant storylines/trends
  3. Notable links
  Group by platform in this order, including only platforms actually searched:
  • Web
  • Reddit
  • X

For each link/item:

  • Markdown link
  • One line: why it matters
  • If snippet-only (fetch failed/unavailable), say so
  4. What to follow up on
  • 3 copy/pasteable next searches

Smoke tests (local)

Date range helper:

  • node scripts/lastxdays_range.js 7

Reddit ingest (requires creds or it will return fallback=true):

  • node scripts/lastxdays_ingest.js --source=reddit --topic "OpenClaw security vulnerability CVE" --start 2026-02-07 --end 2026-02-08 --limit 20 --pretty

X ingest (x-cli if installed; else API if bearer token + <=7 days; else local archive; else fallback=true):

  • node scripts/lastxdays_ingest.js --source=x --topic "OpenClaw" --start 2026-02-07 --end 2026-02-08 --limit 20 --pretty

Optional x-cli direct smoke test:

  • x-cli -v -j tweet search "OpenClaw since:2026-02-07 until:2026-02-09" --max 20

Examples

  • lastxdays AI agents for web
  • last x days 10 bitcoin ETF flows
  • what happened in the last 7 days about OpenAI for reddit
  • 14 lastXdays Apple Vision Pro for web
  • lastxdays 30 OpenAI sources all
  • lastxdays from 2026-01-01 to 2026-01-15 about Anthropic sources reddit
