Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Sniplink

v3.0.0

OpenClaw skill. One-shot URL saver for tools and services discovered on X, GitHub, or anywhere. Drop a link, get it categorized, tagged, and stored — no friction.

by Jahfali-dev (@almohalhel1408)

Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for almohalhel1408/sniplink.

Prompt Preview: Install & Setup
Install the skill "Sniplink" (almohalhel1408/sniplink) from ClawHub.
Skill page: https://clawhub.ai/almohalhel1408/sniplink
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Canonical install target

openclaw skills install almohalhel1408/sniplink

ClawHub CLI


npx clawhub@latest install sniplink
Security Scan
Capability signals
Crypto · Can make purchases · Requires OAuth token
These labels describe what authority the skill may exercise. They are separate from suspicious or malicious moderation verdicts.
VirusTotal
Suspicious
View report →
OpenClaw
Suspicious
medium confidence
Purpose & Capability
The declared purpose (one-shot URL saver) matches the instructions: extracting metadata from GitHub, tweets, and web pages and presenting it for user approval. However, the SKILL.md repeatedly instructs use of platform tools and CLIs (e.g., 'gh api', 'curl' to fxtwitter, 'web_fetch', 'browser_navigate', 'web_search') while the registry metadata lists no required binaries or credentials — a mild mismatch. It is plausible these are built-in agent tools, but the skill does not declare that dependency explicitly.
Instruction Scope
Instructions stay largely within the stated purpose and explicitly require user confirmation before saving. Positive boundaries are stated (do not scrape behind logins/paywalls, respect robots.txt). Two things to flag: (1) the skill directs network calls to a third-party proxy API (https://api.fxtwitter.com) which will receive tweet URLs/text — this is expected for tweet extraction but is a privacy/exfiltration surface the user should know about; (2) the instructions say 'save to database' but do not specify where that database lives or who can read it.
Install Mechanism
No install spec or code files — lowest technical risk. Nothing is downloaded or written by an installer step according to the registry data.
Credentials
The skill requests no env vars or credentials, which is consistent with the absence of declared secrets. But it references GitHub API usage via 'gh api' (which in practice can use GH tokens/config) and external HTTP calls; the SKILL.md does not explain whether it will rely on any existing agent credentials or require the user to supply GH/Twitter credentials. This ambiguity should be clarified.
Persistence & Privilege
always:false and user-invocable are appropriate. The main open question is persistence: the skill saves data to 'database' on user approval but does not say where (agent-local storage, cloud service owned by skill author, or user-owned storage). That gap affects who can access saved links and how long they're retained.
What to consider before installing
This skill generally does what it says (extracts and saves link metadata), but before installing, ask the author/platform these questions:

  1. Which runtime tools does it expect to exist (gh, curl, browser_navigate, web_fetch, web_search)?
  2. Where are saved links stored (local agent storage, OpenClaw cloud, a third-party DB)? Who can read them, and what is the retention policy?
  3. The skill calls the third-party fxtwitter API for tweets — are you comfortable that tweet text/URLs will be sent to that service?
  4. Will it ever require GitHub or other credentials from you (GH_TOKEN, etc.)?

If answers are unclear or you cannot verify the storage or third-party endpoints, treat this as higher risk and avoid installing until clarified.

Like a lobster shell, security has layers — review code before you run it.

latest: vk975shp4zz8jwxncgc074yzq5984e5q1
134 downloads
0 stars
5 versions
Updated 2w ago
v3.0.0
MIT-0

SnipLink — The ADHD-Friendly URL Saver

Who It's For

You found something cool. You want it saved now — before you forget, before you lose the tab, before the momentum dies.

SnipLink saves it instantly: title, description, category, tags, social links. Done in seconds. No multi-step forms, no "where should I put this" paralysis.

Trigger

Use this skill when:

  1. User shares a URL and wants it saved — "save this", "remember this", "add to my stash"
  2. User shares an X/Twitter link — detect tweet content, extract the real target, and save it
  3. User asks "what tools do I have for X" or wants to search their saved links
  4. User is brainstorming and needs quick tool suggestions by tag or category

Zero friction. One URL in, clean record out. Confirm once, forget about it.

Workflow

1. Saving a Link (One-Shot)

Step 1: Detect source type

  • X/Twitter URL (x.com, twitter.com) → go to X/Twitter Pipeline (see below)
  • GitHub.com repo URL → use gh api (structured data, no scraping)
  • All other URLs → use web_fetch
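
A rough shell sketch of this routing (a hypothetical helper, not part of the skill; web_fetch names the agent's built-in fetcher):

# Hypothetical helper: classify a URL before extraction
detect_source() {
  case "$1" in
    *x.com/*|*twitter.com/*) echo "tweet"  ;;  # → X/Twitter pipeline
    *github.com/*)           echo "github" ;;  # → gh api
    *)                       echo "web"    ;;  # → web_fetch
  esac
}
detect_source "https://github.com/cli/cli"     # prints: github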

Step 2: Extract info

For GitHub repos, extract:

  • Repo name, description, primary language, star count, license, topics/tags, last updated, owner

For all other pages:

  • web_fetch → title, meta description, pricing, features
  • Skip if behind login, paywall, or CAPTCHA
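
web_fetch is the agent's built-in fetcher; a rough curl equivalent of the title/description pull, brittle and for illustration only:

# Pull title and meta description from a public page, assuming each sits on one line
url="https://example.com"
html=$(curl -sL --max-time 10 "$url")
printf '%s' "$html" | sed -n 's:.*<title>\(.*\)</title>.*:\1:p'
printf '%s' "$html" | sed -n 's:.*<meta name="description" content="\([^"]*\)".*:\1:p'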

Step 3: Auto-categorize

  • AI/ML — AI services, LLMs, machine learning
  • Development — Coding tools, APIs, frameworks, testing
  • Productivity — Task management, notes, workflow automation
  • Marketing — SEO, social media, ads, content
  • Design — Graphics, UI/UX, video, prototyping
  • Finance — Billing, accounting, payments
  • Communication — Messaging, email, calls, CRM
  • Data — Analytics, databases, visualization
  • Other — Anything else

Step 4: Auto-generate tags

  • Extract from description, GitHub topics, or page keywords
  • Common: free, paid, api, no-code, open-source, mobile, cloud, etc.

Step 5: Social media lookup

  • web_search the tool name + "LinkedIn" / "Twitter"
  • Store URLs if found

Step 6: Present for approval (MANDATORY)

  • Show the user a clean summary of what was extracted:
    • Title, description, category, tags, price
    • Source (direct URL / tweet / GitHub)
  • Ask: "Save this? (yes / no / edit)"
  • If user says no → discard, ask if they want to modify
  • If user says edit → let them adjust fields before saving
  • If user says yes → save to database
  • NEVER save without user seeing the extracted data first

X/Twitter Pipeline

Trigger: User shares an x.com or twitter.com URL and wants it saved.

Step 1: Extract tweet content (use fxtwitter API first)

  • Primary method: curl -sL "https://api.fxtwitter.com/{user}/status/{id}"
    • Returns JSON with tweet.text, tweet.author, tweet.media, tweet.raw_text.facets (links inside tweet)
    • Works without browser, no login, no CAPTCHA — fast and reliable
  • Extract username and ID from URL patterns: x.com/{user}/status/{id} or twitter.com/{user}/status/{id} or x.com/i/status/{id}
  • If fxtwitter fails, fallback to browser: browser_navigate to the tweet URL + snapshot
  • If both fail, tell the user the tweet is unreachable and ask them to paste the text
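
A sketch of this step with jq, using hypothetical user/id values; the field paths are the ones the skill itself names, so verify them against a live response:

# Parse user and id from the tweet URL, then pull structured tweet data
url="https://x.com/someuser/status/1234567890"
user=$(printf '%s' "$url" | sed -E 's#.*(x|twitter)\.com/([^/]+)/status/.*#\2#')
id=$(printf '%s' "$url" | sed -E 's#.*/status/([0-9]+).*#\1#')
curl -sL "https://api.fxtwitter.com/$user/status/$id" \
  | jq '{text: .tweet.text, author: .tweet.author.screen_name, links: .tweet.raw_text.facets}'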

Step 2: Understand tweet context (CRITICAL — no blind clicking)

  • Read the full tweet text carefully
  • Determine the tweet's intent:
    • Sharing a tool/service → tweet describes or links to something useful
    • Announcing a launch → new product, repo, or feature
    • Thread/review → opinion about an existing tool
    • Mentioning a repo by name → no direct link, but repo name is in the text
    • Just a meme/comment → nothing to save, tell the user politely

Step 3: Extract the target URL

  • If the tweet contains a link → analyze what it links to:
    • GitHub URL → use gh api for structured data
    • Website URL → scrape with web_fetch
    • Another tweet/thread → follow if relevant, otherwise skip
  • If NO link but the tweet mentions a tool/repo name:
    • Search GitHub: gh search repos <name> --limit 5
    • Or search the web if it's not a repo
  • If multiple links → use the tweet context to determine which one is the main target
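
For the no-link case, a minimal lookup sketch (the repo name is hypothetical; check gh search repos --help to confirm the --json field names):

# Find candidate repos when the tweet only names a tool
gh search repos sniplink --limit 5 --json fullName,description,stargazersCount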

Step 4: Extract info from the target

  • Follow the standard extraction (Steps 2-4 from the main workflow)
  • Combine tweet context with page data for richer description

Step 5: Present for approval (MANDATORY)

  • Show the user:
    • Tweet summary: what the tweet said
    • Extracted target: URL that was found/followed
    • Tool info: title, description, category, tags, price
  • Ask: "Save this to SnipLink? (yes / no / edit)"
  • NEVER auto-save from tweets — the user must always confirm

Step 6: Save or discard

  • On approval → save to database with tweet URL as a source field in notes
  • On rejection → discard cleanly

2. Retrieving Saved Links

  • By category: "Show me all AI tools"
  • By search: "Find something for PDF editing"
  • By tag: "Show me everything tagged free"
  • Full list: "List all my saved tools"

3. Tool Suggestions (Opt-In)

When user asks for project help, search by relevant tags/categories and suggest.

GitHub Integration

GitHub URLs get treated specially via gh api:

# Repo metadata example
gh api repos/{owner}/{repo}

Extracted fields: name, description, language, stargazers_count, topics, license, updated_at, homepage, html_url

No web scraping needed for GitHub — clean, fast, accurate.
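
The same call can be trimmed to exactly the fields listed above with gh's built-in --jq flag; a sketch with a placeholder repo:

# Shape the response into the fields SnipLink stores
gh api repos/cli/cli \
  --jq '{name, description, language, stars: .stargazers_count, topics, license: .license.spdx_id, updated: .updated_at, homepage, html_url}'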

Content Boundaries

Never scrape:

  • Pages behind login, paywall, or CAPTCHA
  • Pages blocked by robots.txt
  • URLs containing personal data (iCloud, Google Drive shared links, etc.)

Sanitization:

  • Strip tracking params from URLs before saving (utm_*, fbclid, etc.)
  • Never store OAuth tokens, API keys, or session IDs
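
A minimal pass at the parameter strip, assuming GNU sed; real-world URLs may need a fuller parser:

# Drop utm_*, fbclid, and gclid, then tidy any dangling ? or &
strip_tracking() {
  printf '%s\n' "$1" \
    | sed -E 's/(utm_[a-z_]+|fbclid|gclid)=[^&#]*&?//g' \
    | sed -E 's/[?&]$//'
}
strip_tracking "https://example.com/tool?utm_source=x&ref=launch"
# → https://example.com/tool?ref=launch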

Pitfall: Unreachable Sites

When curl and browser both fail to reach a site (timeouts, connection refused), stop retrying after 2 attempts. The issue is connectivity, not permissions.

  1. Tell the user clearly: "The site is unreachable from this environment — not a permission issue."
  2. Offer alternatives: user describes it manually, save with minimal info and update later, or try from a different network.
  3. Do NOT keep retrying the same failing approach — it frustrates the user and wastes turns.
  4. Before scraping, briefly state what you're about to do: "Let me scrape the site for details before saving."
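
The two-attempt cap maps directly onto curl's retry flags; a sketch, with $url as a placeholder:

# One retry (two attempts total), 10s per attempt, then report instead of looping
curl -sL --retry 1 --retry-connrefused --max-time 10 "$url" \
  || echo "The site is unreachable from this environment — not a permission issue."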

Storage: Obsidian (Single Source of Truth)

SnipLink stores all saved tools as Obsidian notes in the user's vault. This centralizes all knowledge in one place and enables graph connections between tools, projects, and concepts.

Vault Location

~/Library/CloudStorage/GoogleDrive-abdulrahmanjahfali@gmail.com/My Drive/My Mind/SnipLink/

Structure

  • One markdown note per saved tool: SnipLink/{Tool Name}.md
  • Master index: SnipLink/SnipLink Index.md

Record Format (Obsidian Note)

---
title: Tool Name
url: https://example.com
category: Development
tags: [python, api, free]
price: "Free / $X/mo"
saved: 2026-04-04
---

# Tool Name

Description of what it does.

## Details
- **Use case:** What it's used for
- **Notes:** Extra info, source, stats

## Contact
- **Email:** ...
- **Website:** ...

## Social
- [LinkedIn](...)
- [Twitter](...)

Index Format (SnipLink Index.md)

Update the index file when saving a new entry. Use Obsidian wiki-links [[Tool Name]] for graph connections. Include Dataview queries for dynamic listing.
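
A sketch of such an index body, assuming the Dataview community plugin is installed (tool names are placeholders):

# SnipLink Index

## All saved tools (inside a dataview code block)
TABLE category, tags, saved
FROM "SnipLink"
SORT saved DESC

## Pinned
- [[Tool Name]]: one-line note on why it matters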

Retrieving Links from Obsidian

When user asks to search saved tools:

  1. Use obsidian skill to search/read notes in the SnipLink/ folder
  2. Search by tags, category, title, or content
  3. Present results as a summary
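
If the obsidian skill is unavailable, a plain grep over the vault folder approximates the same lookup (VAULT is a placeholder path):

# Notes tagged "free" in the SnipLink folder
VAULT="$HOME/path/to/vault/SnipLink"
grep -rl --include='*.md' 'tags:.*free' "$VAULT"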
