Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Keyapi Tiktok Influencer Discovery

v1.0.0

Discover, profile, and deeply analyze TikTok influencers — from keyword-based search to multi-dimensional performance intelligence covering follower trends,...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for lycici/keyapi-tiktok-influencer-discovery.

Prompt preview: Install & Setup
Install the skill "Keyapi Tiktok Influencer Discovery" (lycici/keyapi-tiktok-influencer-discovery) from ClawHub.
Skill page: https://clawhub.ai/lycici/keyapi-tiktok-influencer-discovery
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Required env vars: KEYAPI_TOKEN
Required binaries: node
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install keyapi-tiktok-influencer-discovery

ClawHub CLI


npx clawhub@latest install keyapi-tiktok-influencer-discovery

Security Scan

VirusTotal: Benign
OpenClaw: Benign (high confidence)
Purpose & Capability
Name/description match the observed behavior: the skill calls KeyAPI's MCP server, lists/inspects tools and invokes tool endpoints. Requested artifacts (node, KEYAPI_TOKEN) are appropriate and proportional.
Instruction Scope
SKILL.md instructs running npm install and node scripts/run.js to call MCP tools. The runtime reads/writes a local .env and a cache directory (.keyapi-cache) and can save outputs to files; these filesystem actions are limited to the skill directory and are consistent with the tool runner's functionality. Note: the tool will prompt for and persist KEYAPI_TOKEN to a .env file if not set, which is expected but worth knowing.
Install Mechanism
No arbitrary download/install is used. Dependencies come from npm (package.json lists @modelcontextprotocol/sdk) and the SKILL.md advises running npm install. This is a common, expected install mechanism.
Credentials
Only KEYAPI_TOKEN (primaryEnv) is required. No unrelated credentials or system secrets are requested. The code optionally accepts KEYAPI_SERVER_URL for server override — reasonable for testing/debugging.
Persistence & Privilege
The always flag is false, and the skill does not attempt to modify other skills or global agent settings. It persists the token and cache in the skill directory only, which is within its expected scope.
Assessment
This skill appears to do what it claims: it calls the KeyAPI MCP service and needs a KEYAPI_TOKEN and Node.js. Before installing: (1) be aware the runner will read/write a .env file in the skill directory and create a .keyapi-cache — if you prefer not to store the token on disk, set KEYAPI_TOKEN in your environment instead of letting the tool persist it; (2) npm install will fetch packages from the public registry — review dependencies if you have strong supply-chain concerns; (3) the tool communicates with https://mcp.keyapi.ai (and may convert image URLs served via an EchoSell CDN host) — only provide a token scoped to the minimum privileges you need and avoid reusing highly privileged tokens. If you want extra assurance, inspect scripts/run.js locally (already included) and create a KeyAPI token with restricted scope for this skill.
Patterns worth reviewing

These patterns may indicate risky behavior. Check the VirusTotal and OpenClaw results above for context-aware analysis before installing.

  • scripts/run.js:37 – File read combined with network send (possible exfiltration).
  • scripts/run.js:52 – Environment variable access combined with network send.

Like a lobster shell, security has layers — review code before you run it.

Runtime requirements

🔍 Clawdis
Bins: node
Env: KEYAPI_TOKEN
Primary env: KEYAPI_TOKEN
Latest: vk9745x5z89qf97y4stff8prrj9843phg
89 downloads · 1 star · 1 version · Updated 3w ago
v1.0.0
MIT-0

keyapi-tiktok-influencer-discovery

Discover, profile, and deeply analyze TikTok influencers — from keyword-based search to multi-dimensional performance intelligence.

This skill powers end-to-end TikTok influencer research using the KeyAPI MCP service. It enables you to find creators by keyword or region, retrieve their profile and performance metrics, analyze historical growth trajectories, and benchmark them against ranking data — all through a single, orchestrated workflow.

Use this skill when you need to:

  • Identify high-performing influencers for brand collaborations or affiliate campaigns
  • Audit a creator's follower growth, engagement rate, and live-stream GMV history
  • Build ranked shortlists and compare multiple creators across key performance dimensions
  • Track historical trends for competitive intelligence and market positioning

author: KeyAPI license: MIT repository: https://github.com/EchoSell/keyapi-skills

Prerequisites

Requirement | Details
KEYAPI_TOKEN | A valid API token from keyapi.ai. If you don't have one, register at the site to obtain your free token. Set it as an environment variable: export KEYAPI_TOKEN=your_token_here
Node.js | v18 or higher
Dependencies | Run npm install in the skill directory to install @modelcontextprotocol/sdk


MCP Server Configuration

All tool calls in this skill target the KeyAPI MCP server:

Server URL : https://mcp.keyapi.ai
Auth Header: Authorization: Bearer $KEYAPI_TOKEN

Setup (one-time):

# 1. Install dependencies
npm install

# 2. Set your API token (get one free at https://keyapi.ai/)
export KEYAPI_TOKEN=your_token_here

# 3. List all available tools to verify the connection
node scripts/run.js --list-tools


Analysis Scenarios

Select one or more nodes based on the research objective. Multiple nodes can be combined for cross-dimensional analysis.

User Need | Node(s) | Best For
Find influencers by keyword, category, or region | search_influencers | Initial discovery, broad prospecting
Verify an influencer's identity and resolve IDs | get_influencer_detail | ID resolution (user_id + unique_id), profile snapshot
Filter influencers with analytics (ER, GMV, followers, sales) | influencer_list_analytics | Data-driven shortlisting from large datasets
Full multi-dimensional performance audit | influencer_detail_analytics | Deep-dive due diligence on one or more creators
Analyze historical growth trends over time | influencer_trends_analytics | Growth velocity, follower trajectory, trend analysis
Review video content performance history | influencer_videos_analytics | Content strategy benchmarking, top-video analysis
Evaluate live-stream commerce history (GMV, viewers) | influencer_livestreams_analytics | Live commerce capability assessment
Examine promoted product portfolio and sales | influencer_products_analytics | Brand-fit assessment, niche/category alignment
Competitive ranking by followers, GMV, or ER | influencer_ranking_analytics | Leaderboard analysis, category benchmarks
Retrieve latest published videos with engagement stats | get_influencer_videos | Recent content monitoring, freshness check
Sample an influencer's follower list | get_influencer_followers | Audience quality sampling
Explore the accounts an influencer follows | get_influencer_following | Network and affinity analysis
Geographic breakdown of audience distribution | get_influencer_region | Geo-targeting fit for regional campaigns
Generate a shareable profile QR code | get_influencer_qr_code | Marketing material assets
Key milestone and achievement history | get_influencer_milestones | Growth storytelling, historical highlights


Workflow

Step 1 — Identify Analysis Targets and Select Nodes

Clarify the user's objective and map it to one or more nodes from the table above. Typical entry points:

  • Keyword discovery: Start with search_influencers, then optionally deepen with influencer_list_analytics for richer filtering.
  • Direct profile lookup: Use get_influencer_detail with a known unique_id (@handle).
  • Performance deep-dive: Combine influencer_detail_analytics + influencer_trends_analytics + influencer_videos_analytics.
  • Live commerce evaluation: Use influencer_livestreams_analytics + influencer_products_analytics.
  • Competitive ranking: Use influencer_ranking_analytics with appropriate category/region filters.

⚠️ Critical: Resolving user_id vs. unique_id

Two distinct identifier types are used across endpoints:

  • unique_id — the user's public @handle (e.g., charlidamelio). User-visible, mutable.
  • user_id — TikTok's permanent, immutable numeric UID assigned to each account.

When a workflow requires nodes that accept different identifier types, always call get_influencer_detail first using the unique_id to obtain both identifiers before proceeding.
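The resolution step can be sketched as follows. Here callTool is a hypothetical async wrapper around node scripts/run.js --tool, and the response shape ({ code, data: { user_id, unique_id } }) is an assumption; confirm the real shape with --schema get_influencer_detail before relying on it.

```javascript
// Hedged sketch: resolve both identifier types before a mixed-node workflow.
// `callTool` is a hypothetical wrapper; the actual runner interface may differ.
async function resolveIds(callTool, uniqueId) {
  const detail = await callTool("get_influencer_detail", { unique_id: uniqueId });
  // Assumed response shape: { code: 0, data: { user_id, unique_id, ... } }
  if (detail.code !== 0) throw new Error(`lookup failed: code ${detail.code}`);
  return { user_id: detail.data.user_id, unique_id: detail.data.unique_id };
}
```

With both identifiers in hand, downstream nodes can be called with whichever type their schema requires.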

Step 2 — Retrieve API Schema

Before calling any node, inspect its input schema to confirm required parameters, data types, and valid enumeration values:

node scripts/run.js --schema <tool_name>

# Example
node scripts/run.js --schema influencer_list_analytics

For analytics nodes, pay particular attention to filter parameters (region, category, date range, follower range, etc.) and confirm the expected page_num/page_size fields.

Step 3 — Call APIs and Cache Results Locally

Execute the required tool calls and persist all responses to the local cache to enable result reuse across sessions and avoid redundant API calls.

Calling a tool (using scripts/run.js):

# Single page call — result is cached automatically
node scripts/run.js --tool <tool_name> --params '<json_args>' --pretty

# Fetch all pages at once (auto-pagination)
node scripts/run.js --tool <tool_name> --params '<json_args>' --all-pages --page-size 50

# Force a fresh call, skip cache
node scripts/run.js --tool <tool_name> --params '<json_args>' --no-cache

Example — search influencers:

node scripts/run.js --tool search_influencers \
  --params '{"keyword":"fitness","region":"US"}' --pretty

Example — filter influencers with analytics (all pages):

node scripts/run.js --tool influencer_list_analytics \
  --params '{"region":"US","influencer_category_name":"Fitness"}' --all-pages

Example — get influencer's latest videos (cursor-based):

# First page: offset=0
node scripts/run.js --tool get_influencer_videos \
  --params '{"unique_id":"charlidamelio","offset":"0"}' --pretty
# Next page: use max_cursor value from previous response as offset

Pagination for analytics endpoints:

All *_analytics endpoints use page_num (1-indexed) and page_size (max 10). run.js injects these automatically if not specified. Use --all-pages to let run.js iterate all pages and merge the results.

--page-num 1  --page-size 10   → first page (default)
--all-pages                    → all pages merged into one result

Note: get_influencer_videos, get_influencer_followers, get_influencer_following use cursor-based pagination via an offset parameter — not page_num/page_size. Pass "offset":"0" to start, then use the max_cursor (or min_time) value from the response as the next offset.
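The cursor loop described above can be sketched like this. callTool is a hypothetical async wrapper around the runner; the doc specifies max_cursor as the next offset, while the videos and has_more field names are assumptions about the payload; verify them against a real response first.

```javascript
// Sketch of cursor-based pagination for get_influencer_videos.
// Assumed response shape: { code, data: { videos: [...], has_more, max_cursor } }.
async function fetchAllVideos(callTool, uniqueId) {
  const videos = [];
  let offset = "0"; // first page always starts at offset "0"
  for (;;) {
    const res = await callTool("get_influencer_videos", { unique_id: uniqueId, offset });
    if (res.code !== 0) throw new Error(`code ${res.code}`);
    videos.push(...(res.data.videos ?? []));
    if (!res.data.has_more) break;
    offset = String(res.data.max_cursor); // next page starts at previous max_cursor
  }
  return videos;
}
```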

Cache directory structure:

.keyapi-cache/
├── influencers/
│   └── {unique_id}/
│       ├── detail.json                  # get_influencer_detail
│       ├── analytics.json               # influencer_detail_analytics
│       ├── trends.json                  # influencer_trends_analytics
│       ├── videos_analytics.json        # influencer_videos_analytics
│       ├── livestreams_analytics.json   # influencer_livestreams_analytics
│       ├── products_analytics.json      # influencer_products_analytics
│       ├── latest_videos.json           # get_influencer_videos
│       ├── followers.json               # get_influencer_followers
│       ├── following.json               # get_influencer_following
│       ├── region.json                  # get_influencer_region
│       ├── qr_code.json                 # get_influencer_qr_code
│       └── milestones.json              # get_influencer_milestones
├── searches/
│   └── influencers/
│       └── {md5_of_query_params}.json   # search_influencers, influencer_list_analytics
└── rankings/
    └── influencers_{params_hash}.json   # influencer_ranking_analytics

Cache-first policy:

Before every API call, check whether a cached result already exists for the given entity and node. If a valid cache file exists, load from disk and skip the API call.

Cover image processing:

After each API call, scan all response image URLs. If any URL's host matches echosell-images.tos-ap-southeast-1.volces.com, collect those URLs and call batch_download_cover_images in a single batch request. Replace the original URLs in your working dataset with the converted URLs returned by this node.
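The URL-collection half of that rule can be sketched as below; collectCdnUrls is a hypothetical helper (not part of the skill), and the subsequent batch_download_cover_images call and its response shape are left out because they are not documented here.

```javascript
// Walk an API response and gather every image URL hosted on the EchoSell CDN,
// so they can be converted in a single batch_download_cover_images request.
const CDN_HOST = "echosell-images.tos-ap-southeast-1.volces.com";

function collectCdnUrls(obj, found = []) {
  if (typeof obj === "string") {
    // Non-URL strings throw in the URL constructor and are simply skipped.
    try { if (new URL(obj).host === CDN_HOST) found.push(obj); } catch {}
  } else if (obj && typeof obj === "object") {
    for (const v of Object.values(obj)) collectCdnUrls(v, found);
  }
  return found;
}
```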

Step 4 — Synthesize and Report Findings

After collecting all API responses (from cache or live calls), produce a structured research report:

  1. Creator Profile Summary — Name, @handle, follower count, engagement rate, primary niche, and operating region.
  2. Performance Analysis — Follower growth curve, average video views, engagement benchmarks, and live-stream GMV history.
  3. Content Strategy Insights — Top-performing video themes, posting cadence, product promotion patterns, and audience interaction quality.
  4. Competitive Positioning — Ranking within category/region, peer comparisons when analyzing multiple creators.
  5. Actionable Recommendations — Best fit use cases (brand sponsorship, affiliate, live commerce), audience-campaign alignment, risk signals (follower authenticity, trend consistency).

Cross-reference multiple data sources where available — for example, correlate influencer_trends_analytics with influencer_livestreams_analytics to identify whether GMV peaks align with follower growth events.
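One way to do that correlation, purely as an illustration: the {date, value} series shapes and the thresholds below are assumptions, not the actual analytics payload format.

```javascript
// Dates where a series meets or exceeds a threshold (e.g. a follower-growth spike).
function peakDates(series, threshold) {
  return new Set(series.filter(p => p.value >= threshold).map(p => p.date));
}

// Dates where a GMV peak coincides with a follower-growth event.
function alignedPeaks(gmvSeries, followerDeltaSeries, gmvMin, deltaMin) {
  const growthDays = peakDates(followerDeltaSeries, deltaMin);
  return gmvSeries
    .filter(p => p.value >= gmvMin && growthDays.has(p.date))
    .map(p => p.date);
}
```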


Common Rules

Rule | Detail
Pagination | All *_analytics endpoints use page_num (starts at 1) and page_size. Never use page 0.
Cover images | Batch-convert all image URLs from echosell-images.tos-ap-southeast-1.volces.com via batch_download_cover_images before storing or displaying.
Success check | code = 0 → success. Any other value → failure. Always check the response code before processing data.
Retry on 500 | If code = 500, retry the identical request once after a brief pause before reporting the error.
Cache first | Always check the local .keyapi-cache/ directory before issuing a live API call.
ID resolution | When a workflow requires both user_id and unique_id, call get_influencer_detail first with the unique_id to resolve both.


Error Handling

Code | Meaning | Action
0 | Success | Continue workflow normally
400 | Bad request: invalid or missing parameters | Validate input against the tool schema; correct and retry
401 | Unauthorized: token missing or expired | Confirm KEYAPI_TOKEN is set correctly; visit keyapi.ai to renew
403 | Forbidden: plan quota exceeded or feature restricted | Review plan limits at keyapi.ai
404 | Resource not found: influencer not indexed or ID incorrect | Verify unique_id / user_id; try search_influencers to locate the creator
429 | Rate limit exceeded | Wait 60 seconds, then retry
500 | Internal server error | Retry once after 2–3 seconds; if it persists, log the full request and response and skip this node
Other non-0 | Unexpected error | Log the full response body and surface the error message to the user
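The retry and rate-limit rows of this table can be expressed as a small wrapper. callTool is hypothetical, and the wait times are made configurable here only so the logic is easy to exercise; by default they follow the table (60 s for 429, 2.5 s for 500).

```javascript
const sleep = ms => new Promise(r => setTimeout(r, ms));

// Sketch of the error-handling table as code. Each retryable code gets exactly
// one retry, per the table; any other non-zero code is surfaced as an error.
async function callWithPolicy(callTool, name, params, waits = { 429: 60_000, 500: 2_500 }) {
  let res = await callTool(name, params);
  if (res.code === 429 || res.code === 500) {
    await sleep(waits[res.code]); // single retry after the prescribed pause
    res = await callTool(name, params);
  }
  if (res.code !== 0) throw new Error(`${name} failed with code ${res.code}`);
  return res.data;
}
```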
