Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Keyapi Reddit Content Analytics

v1.0.0

Explore and analyze Reddit content at scale — retrieve post details (single, batch, or large batch), comments, sub-comment threads, user activity, and curate...

Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description (Reddit content analytics) aligns with the code and SKILL.md: the tool calls a KeyAPI MCP server to fetch Reddit posts, comments, user activity and feeds. Node and a KeyAPI token are reasonable requirements.
Instruction Scope
The SKILL.md runtime instructions are focused on calling KeyAPI MCP tools and match the included runner script. Minor inconsistencies: the runner's default --platform is set to 'tiktok' (code) while the skill is Reddit-focused (documentation) and SKILL.md documents an optional KEYAPI_SERVER_URL override that is not listed in registry requires.env.
Install Mechanism
No packaged install; runtime requires running npm install which pulls a single, named npm dependency (@modelcontextprotocol/sdk). This is a moderate but expected install step for a Node-based tool; there are no suspicious external downloads or URL-based installers.
Credentials
The skill declares a single primary credential (KEYAPI_TOKEN) which fits the purpose. Note: the code also supports/reads KEYAPI_SERVER_URL as an optional override (documented in SKILL.md) but that variable was not declared in the minimal registry env list. The runner will persist the token into a local .env file if entered interactively — this local persistence may be surprising and should be considered before use.
Persistence & Privilege
always:false (normal). The script writes local files within the skill directory: .env (when prompting for token) and .keyapi-cache for cached API results. It does not request system-wide privileges or modify other skills' config, but it does create and read local files which may contain API responses and the token.
Assessment
This skill appears to do what it says: call KeyAPI's MCP Reddit tools using a KEYAPI_TOKEN. Before installing, consider:

  1. The tool will prompt to save your KEYAPI_TOKEN into a .env file in the skill directory (and will create a .keyapi-cache directory). If you store sensitive tokens there, ensure the directory permissions are appropriate or avoid saving the token to disk.
  2. The package installs one npm dependency (@modelcontextprotocol/sdk); review that dependency if you require stricter supply-chain assurance.
  3. There are small inconsistencies (runner default platform = 'tiktok', and SKILL.md references KEYAPI_SERVER_URL, which isn't declared in the registry metadata); not malicious, but worth noting.
  4. If you don't trust the publisher, inspect scripts/run.js (included) or run it in a sandbox/container with a limited token (or a token that can be revoked) before using it with production credentials.
Patterns worth reviewing

These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

  • scripts/run.js:52: environment variable access combined with network send.
  • scripts/run.js:37: file read combined with network send (possible exfiltration).

Like a lobster shell, security has layers — review code before you run it.

Version: latest · vk976tpw30axv5qnb15nw0h52qn8434sd

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

📰 Clawdis
Bins: node
Env: KEYAPI_TOKEN
Primary env: KEYAPI_TOKEN

SKILL.md

keyapi-reddit-content-analytics

Explore and analyze Reddit content at scale — from individual post deep-dives and threaded comment traversal to curated feed monitoring and user activity research.

This skill provides comprehensive Reddit content intelligence using the KeyAPI MCP service. It enables retrieval of post details (single or batch), comment threads with sub-comment traversal, user-published posts and comments, and curated feed content across home, popular, games, news, subreddit, and community highlight feeds — all through a unified, cache-first workflow.

Use this skill when you need to:

  • Retrieve full details for one or more Reddit posts by ID
  • Traverse comment threads including nested sub-comments
  • Analyze a specific user's post and comment history
  • Monitor trending content across Reddit's home, popular, games, and news feeds
  • Explore subreddit-specific content streams
  • Surface community highlights and pinned content from specific subreddits

author: KeyAPI
license: MIT
repository: https://github.com/EchoSell/keyapi-skills

Prerequisites

  • KEYAPI_TOKEN: a valid API token from keyapi.ai. Register at the site to obtain your free token, then set it as an environment variable: export KEYAPI_TOKEN=your_token_here
  • Node.js: v18 or higher
  • Dependencies: run npm install in the skill directory to install @modelcontextprotocol/sdk


MCP Server Configuration

All tool calls in this skill target the KeyAPI Reddit MCP server:

Server URL : https://mcp.keyapi.ai/reddit/mcp
Auth Header: Authorization: Bearer $KEYAPI_TOKEN

Setup (one-time):

# 1. Install dependencies
npm install

# 2. Set your API token (get one free at https://keyapi.ai/)
export KEYAPI_TOKEN=your_token_here

# 3. List all available tools to verify the connection
node scripts/run.js --platform reddit --list-tools


Analysis Scenarios

  • Full details for a single post → fetch_single_reddit_post_details (post audit, content analysis, comment context)
  • Batch details for up to 5 posts → fetch_reddit_post_details_in_batch_max_5 (small-scale comparative analysis)
  • Batch details for up to 30 posts → fetch_reddit_post_details_in_large_batch_max_30 (large-scale content harvesting)
  • Top-level comments on a post → fetch_reddit_app_post_comments (sentiment analysis, community reaction)
  • Sub-comments / nested replies → fetch_reddit_app_comment_replies_sub-comments (deep thread traversal, reply chain analysis)
  • Posts published by a user → fetch_user_posts (user content audit, posting behavior)
  • Comments published by a user → fetch_user_comments (user engagement patterns, opinion tracking)
  • Reddit home feed (personalized) → fetch_reddit_app_home_feed (trending content monitoring)
  • Popular / trending feed → fetch_reddit_app_popular_feed (viral content discovery, trend detection)
  • Gaming community feed → fetch_reddit_app_games_feed (gaming trend monitoring)
  • News feed → fetch_reddit_app_news_feed (current events, news discussion tracking)
  • Subreddit content stream → fetch_reddit_app_subreddit_feed (community-specific content monitoring)
  • Community highlights and pinned content → fetch_reddit_app_community_highlights (featured posts, important announcements)


Workflow

Step 1 — Identify Analysis Targets and Select Nodes

Clarify the research objective and map it to one or more nodes. Typical entry points:

  • Post research: Use fetch_single_reddit_post_details for one post, or batch endpoints for multiple.
  • Comment thread traversal: Use fetch_reddit_app_post_comments for top-level comments, then fetch_reddit_app_comment_replies_sub-comments for nested replies when more.cursor is present.
  • User activity audit: Combine fetch_user_posts + fetch_user_comments for a full activity profile.
  • Feed monitoring: Choose the appropriate feed endpoint based on content category.
  • Subreddit deep-dive: Use fetch_reddit_app_subreddit_feed + fetch_reddit_app_community_highlights.

Critical: Reddit ID Type Prefixes

The Reddit APP API requires type prefixes on all IDs — this is mandatory:

  • Post IDs: must use t3_ prefix (e.g., t3_1ojnh50)
  • Comment IDs: must use t1_ prefix (e.g., t1_abcd123)

Passing a bare ID without the prefix will result in an error. Always include the prefix.
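
As an illustration only (this helper is not part of scripts/run.js), normalizing IDs before a call might look like:

```javascript
// Hypothetical helper: normalize a bare Reddit ID to the prefixed form
// the API requires. kind "post" → t3_, kind "comment" → t1_.
function prefixRedditId(id, kind) {
  const prefix = kind === "comment" ? "t1_" : "t3_";
  return id.startsWith(prefix) ? id : prefix + id;
}

// prefixRedditId("1ojnh50", "post")        → "t3_1ojnh50"
// prefixRedditId("t1_abcd123", "comment")  → "t1_abcd123" (already prefixed)
```

Running IDs through a check like this before building --params avoids the bare-ID error entirely.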

Sub-comment Traversal

When a comment node in a fetch_reddit_app_post_comments response contains a more.cursor field, nested replies exist. Use fetch_reddit_app_comment_replies_sub-comments with:

  • post_id: the parent post ID (with t3_ prefix)
  • cursor: the value from more.cursor (format: commenttree:ex:(xxx))

Path to cursor: $.data.postInfoById.commentForest.trees[*].more.cursor
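
Following that documented path, a small sketch that collects all pending sub-comment cursors from a response object (the response shape beyond the path above is an assumption):

```javascript
// Sketch: collect pending sub-comment cursors from a
// fetch_reddit_app_post_comments response, following the documented path
// $.data.postInfoById.commentForest.trees[*].more.cursor
function collectSubCommentCursors(response) {
  const trees = response?.data?.postInfoById?.commentForest?.trees ?? [];
  return trees
    .map((tree) => tree?.more?.cursor)          // undefined when no nested replies
    .filter((cursor) => typeof cursor === "string");
}
```

Each returned cursor can then be fed to fetch_reddit_app_comment_replies_sub-comments along with the parent post_id.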

need_format Parameter

Most endpoints accept an optional need_format boolean. Set to true to receive sanitized/cleaned response data. Defaults to false (raw data). Use true when downstream processing requires cleaner output.

Step 2 — Retrieve API Schema

Before calling any node, inspect its input schema to confirm required parameters and available options:

node scripts/run.js --platform reddit --schema <tool_name>

# Examples
node scripts/run.js --platform reddit --schema fetch_single_reddit_post_details
node scripts/run.js --platform reddit --schema fetch_reddit_app_post_comments

Step 3 — Call APIs and Cache Results Locally

Execute tool calls and persist responses to the local cache to avoid redundant API calls.

Calling a tool:

# Single call with pretty output
node scripts/run.js --platform reddit --tool <tool_name> \
  --params '<json_args>' --pretty

# Force fresh data, skip cache
node scripts/run.js --platform reddit --tool <tool_name> \
  --params '<json_args>' --no-cache --pretty

Example — get single post details:

node scripts/run.js --platform reddit --tool fetch_single_reddit_post_details \
  --params '{"post_id":"t3_1ojnh50"}' --pretty

Example — batch fetch up to 5 posts:

node scripts/run.js --platform reddit --tool fetch_reddit_post_details_in_batch_max_5 \
  --params '{"post_ids":"t3_1ojnh50,t3_1ok432f,t3_1nwil8j"}' --pretty

Example — get post comments:

node scripts/run.js --platform reddit --tool fetch_reddit_app_post_comments \
  --params '{"post_id":"t3_1ojnvca","sort_type":"TOP"}' --pretty

Example — get sub-comments using cursor:

node scripts/run.js --platform reddit --tool "fetch_reddit_app_comment_replies_sub-comments" \
  --params '{"post_id":"t3_1qmup73","cursor":"commenttree:ex:(RjiJd","sort_type":"CONFIDENCE"}' --pretty

Example — get popular feed:

node scripts/run.js --platform reddit --tool fetch_reddit_app_popular_feed \
  --params '{"sort":"HOT","time":"WEEK"}' --pretty

Example — get subreddit feed:

node scripts/run.js --platform reddit --tool fetch_reddit_app_subreddit_feed \
  --params '{"subreddit_name":"technology","sort":"HOT"}' --pretty

Example — get user posts:

node scripts/run.js --platform reddit --tool fetch_user_posts \
  --params '{"username":"spez","sort":"NEW"}' --pretty

Pagination:

  • Feed endpoints (home, popular, games, news, subreddit): after (cursor string); pass the after value from the previous response.
  • fetch_reddit_app_post_comments: after (cursor string), found in the last comment node.
  • fetch_user_posts, fetch_user_comments: after (cursor string); pass the cursor from the previous response.
  • fetch_reddit_app_comment_replies_sub-comments: cursor (taken from more.cursor); use the cursor from the comment node, not after.
  • Batch post endpoints: no pagination; pass all IDs in one call.
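
The after-cursor pattern above can be sketched as a generic drain loop. callTool below is a hypothetical stand-in for whatever invokes the MCP tool and returns a parsed response; the data.items and data.after field names are assumptions, not the documented schema:

```javascript
// Sketch: drain a paginated endpoint using the `after` cursor.
// `callTool` is a hypothetical async (toolName, params) => response;
// the data.items / data.after field names are assumptions.
async function drainPages(callTool, toolName, params, maxPages = 5) {
  const items = [];
  let after;
  for (let page = 0; page < maxPages; page++) {
    const resp = await callTool(toolName, after ? { ...params, after } : params);
    if (resp.code !== 0) break;               // non-zero code = failure
    items.push(...(resp.data?.items ?? []));
    after = resp.data?.after;                 // cursor for the next page
    if (!after) break;                        // no cursor = last page
  }
  return items;
}
```

Capping maxPages keeps a runaway feed from exhausting your API quota.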

Batch endpoint limits:

  • fetch_reddit_post_details_in_batch_max_5: max 5 posts per call
  • fetch_reddit_post_details_in_large_batch_max_30: max 30 posts per call

Exceeding these limits returns an error. Split larger sets across multiple calls.
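
Splitting a larger ID set can be done with a simple chunking helper (illustrative only, not part of the skill):

```javascript
// Sketch: split an ID list into chunks that respect a batch limit,
// e.g. 30 for fetch_reddit_post_details_in_large_batch_max_30.
function chunkIds(ids, maxPerCall = 30) {
  const chunks = [];
  for (let i = 0; i < ids.length; i += maxPerCall) {
    chunks.push(ids.slice(i, i + maxPerCall));
  }
  return chunks;
}

// 65 IDs with a limit of 30 → three calls of 30, 30, and 5 IDs.
```

Each chunk can then be joined with commas for the post_ids parameter.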

Cache directory structure:

.keyapi-cache/
└── YYYY-MM-DD/
    ├── fetch_single_reddit_post_details/
    │   └── {params_hash}.json
    ├── fetch_reddit_post_details_in_batch_max_5/
    │   └── {params_hash}.json
    ├── fetch_reddit_post_details_in_large_batch_max_30/
    │   └── {params_hash}.json
    ├── fetch_reddit_app_post_comments/
    │   └── {params_hash}.json
    ├── fetch_reddit_app_comment_replies_sub-comments/
    │   └── {params_hash}.json
    ├── fetch_user_posts/
    │   └── {params_hash}.json
    ├── fetch_user_comments/
    │   └── {params_hash}.json
    ├── fetch_reddit_app_home_feed/
    │   └── {params_hash}.json
    ├── fetch_reddit_app_popular_feed/
    │   └── {params_hash}.json
    ├── fetch_reddit_app_games_feed/
    │   └── {params_hash}.json
    ├── fetch_reddit_app_news_feed/
    │   └── {params_hash}.json
    ├── fetch_reddit_app_subreddit_feed/
    │   └── {params_hash}.json
    └── fetch_reddit_app_community_highlights/
        └── {params_hash}.json

Cache-first policy:

Before every API call, check whether a cached result already exists for the given parameters. If a valid cache file exists, load from disk and skip the API call.

Step 4 — Synthesize and Report Findings

After collecting all API responses, produce a structured content intelligence report:

For post analysis:

  1. Post Overview — Title, subreddit, author, score, upvote ratio, comment count, post date, flair.
  2. Engagement Metrics — Score distribution, comment volume, award count, cross-post activity.
  3. Comment Sentiment — Top comment themes, sentiment distribution, notable replies.
  4. Thread Structure — Depth of discussion, sub-comment density, most-engaged branches.

For feed monitoring:

  1. Trending Topics — Top posts by score, common themes, subreddit distribution.
  2. Content Patterns — Media type breakdown (text, image, video, link), posting cadence.
  3. Community Signals — Subreddits driving the most engagement, emerging discussion topics.

For user activity:

  1. Activity Profile — Post frequency, comment frequency, active subreddits.
  2. Content Themes — Recurring topics, subreddit focus areas.
  3. Engagement Style — Comment length patterns, upvote/downvote behavior where available.


Common Rules

  • ID prefixes: always include type prefixes (t3_ for post IDs, t1_ for comment IDs); bare IDs will fail.
  • Batch limits: fetch_reddit_post_details_in_batch_max_5 accepts at most 5 IDs; fetch_reddit_post_details_in_large_batch_max_30 accepts at most 30. Split larger sets across multiple calls.
  • Sub-comment traversal: when a comment node has more.cursor, call fetch_reddit_app_comment_replies_sub-comments with that cursor to retrieve nested replies.
  • need_format: set to true for sanitized output; defaults to false. Use true when cleaner data is needed for downstream processing.
  • Feed pagination: use the after cursor from the previous response to fetch the next page of feed content.
  • Success check: code = 0 → success; any other value → failure. Always check the response code before processing data.
  • Retry on 500: if code = 500, retry the identical request up to 3 times with a 2–3 second pause between attempts before reporting the error.
  • Cache first: always check the local .keyapi-cache/ directory before issuing a live API call.


Error Handling

  • 0: success. Continue the workflow normally.
  • 400: bad request (invalid or missing parameters). Validate the ID format (check t3_/t1_ prefixes) and verify batch size limits.
  • 401: unauthorized (token missing or expired). Confirm KEYAPI_TOKEN is set correctly; visit keyapi.ai to renew.
  • 403: forbidden (plan quota exceeded or feature restricted). Review plan limits at keyapi.ai.
  • 404: resource not found (the post or comment may have been deleted). Verify the post ID; deleted or removed content may return empty results.
  • 429: rate limit exceeded. Wait 60 seconds, then retry.
  • 500: internal server error. Retry up to 3 times with a 2–3 second pause; if it persists, log the full request and response and skip this node.
  • Any other non-zero code: unexpected error. Log the full response body and surface the error message to the user.
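
The retry-on-500 policy above can be sketched as a small wrapper. invoke is a hypothetical async function that performs one API call and returns the parsed response object:

```javascript
// Sketch: retry-on-500 policy. `invoke` is a hypothetical async function
// performing one API call and returning a parsed response ({ code, ... }).
async function callWithRetry(invoke, retries = 3, pauseMs = 2500) {
  let resp;
  for (let attempt = 0; attempt <= retries; attempt++) {
    resp = await invoke();
    if (resp.code !== 500) return resp;       // only 500 is retried here
    if (attempt < retries) {
      await new Promise((resolve) => setTimeout(resolve, pauseMs));
    }
  }
  return resp;                                // still 500 after all retries
}
```

Other codes (400, 401, 403, 404, 429) pass straight through so the caller can apply the code-specific actions from the table.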

Files

3 total
