Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Lucky Collaborative Research (Lucky + Jinx)

v1.0.1

Lucky (internet) + Jinx (analysis) collaborative research workflow. Lucky gathers raw data from web sources, Jinx analyzes and structures findings. Use for m...


Install

OpenClaw Prompt Flow

Install with OpenClaw

Best for remote or guided setup. Copy the exact prompt, then paste it into OpenClaw for rmbell09-lang/lucky-collaborative-research.

Prompt preview (Install & Setup):
Install the skill "Lucky Collaborative Research (Lucky + Jinx)" (rmbell09-lang/lucky-collaborative-research) from ClawHub.
Skill page: https://clawhub.ai/rmbell09-lang/lucky-collaborative-research
Keep the work scoped to this skill only.
After install, inspect the skill metadata and help me finish setup.
Use only the metadata you can verify from ClawHub; do not invent missing requirements.
Ask before making any broader environment changes.

Command Line

CLI Commands

Use the direct CLI path if you want to install manually and keep every step visible.

OpenClaw CLI

Bare skill slug

openclaw skills install lucky-collaborative-research

ClawHub CLI


npx clawhub@latest install lucky-collaborative-research
Security Scan
VirusTotal: Pending
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The skill's name/description (Lucky gathers web data, Jinx analyzes) aligns with the runtime instructions: use Puppeteer to capture pages, store HTML/text, and post tasks to a local analysis service. However, the instructions presume access to specific local/remote resources (an SSH key at ~/.ssh/lucky_to_mac, a Mac at 100.90.7.148, and a mounted volume '/Volumes/Crucial X10') that are not mentioned anywhere else and are not justified in metadata — this is unexpected and should be explained by the author.
Instruction Scope
The SKILL.md tells operators to create directories, capture full page HTML/text, use a local SSH private key to scp files to a hardcoded IP, and POST tasks to http://localhost:3001. These instructions involve reading private files (e.g., ~/.ssh/lucky_to_mac), writing to mounted volumes, and transferring raw HTML (which may contain sensitive data). The doc also contains contradictory guidance: 'No executable content — Only pass text/HTML' vs 'Request execution — ask Jinx to run analysis scripts'. That open-ended ability to execute scripts on a local service increases risk if misused.
Install Mechanism
This is an instruction-only skill with no install spec and no bundled code, so it doesn't install packages or download remote artifacts. That lowers supply-chain risk, but the runtime steps still require external tools (Puppeteer, scp/ssh) which the doc assumes are available.
Credentials
The skill declares no required env vars or credentials, yet the instructions explicitly reference sensitive items (private SSH key at ~/.ssh/lucky_to_mac and a target host IP). That mismatch is disproportionate: the skill asks the operator to rely on sensitive local credentials and a remote host without declaring or justifying them. The advice to 'capture everything' increases the chance of collecting credentials or PII from scraped pages.
Persistence & Privilege
always is false and there is no indication the skill requests elevated system privileges or modifies other skills. Still, the workflow encourages broad data collection and writing to external/mounted storage, and it relies on a local analysis service (localhost:3001) which, if present, could be asked to execute arbitrary analysis scripts — increasing blast radius if that service is compromised or misconfigured.
What to consider before installing
This skill is plausible for collaborative scraping plus local analysis, but it contains undeclared sensitive operational steps and some contradictions. Before using it, verify these points:

  1. The hardcoded SSH key path (~/.ssh/lucky_to_mac) and target IP (100.90.7.148): remove or replace them with explicit, auditable configuration, and never embed private keys in instructions.
  2. Confirm you control the remote host and mounted SSD; don't scp data to an unknown machine.
  3. Avoid "capture everything" on pages that may include credentials, PII, or license-restricted content; sanitize and filter before storage or transmission.
  4. Clarify whether Jinx is allowed to execute scripts, and sandbox it (no internet access, least privilege).
  5. Because the skill source is unknown and there is no homepage, prefer running this workflow in an isolated environment (a dedicated VM or container), review any SSH keys used, and rotate them afterwards.

If you cannot get clear, author-provided configuration and assurances about the remote host and key usage, treat this skill as risky and do not run its suggested transfer steps.

Like a lobster shell, security has layers — review code before you run it.

Tags: analysis, latest, multi-agent, openclaw, research, web-search
103 downloads
0 stars
2 versions
Updated 1mo ago
v1.0.1
MIT-0

Collaborative Research Workflow

Core Principle: Divide research into Lucky (data gathering) + Jinx (analysis) for maximum efficiency and parallel processing.

When to Use This Skill

Perfect for:

  • Market research (competitor analysis, pricing)
  • API documentation review
  • Trend analysis (Google Trends, marketplaces)
  • Technical documentation analysis
  • Large-scale content analysis
  • Multi-source data comparison

Not suitable for:

  • Simple lookups (use direct web_search/web_fetch)
  • Real-time data that changes quickly
  • Single-page analysis (not worth the overhead)

The 3-Phase Process

Phase 1: Raw Data Gathering (Lucky)

Time: 30-60% of total project time
Focus: Speed and coverage, not precision

  1. Set up data directory structure

    mkdir -p /workspace/research/raw-data/YYYY-MM-DD-project
    
  2. Use Puppeteer for systematic data collection

    • Navigate to target sites
    • Capture BOTH html and text: { html: document.body.innerHTML, text: document.body.innerText }
    • Save with metadata: URL, timestamp, query/source
    • Don't fight DOM selectors — capture everything
  3. Save structured files for Jinx

    METADATA:
    URL: [source_url]
    TIMESTAMP: [iso_timestamp] 
    QUERY: [search_query]
    
    RAW TEXT:
    [page_text_content]
    
    RAW HTML:
    [full_html_content]
    
  4. Transfer to Mac Mini SSD

    scp -i ~/.ssh/lucky_to_mac file.html luckyai@100.90.7.148:~/temp/
    ssh -i ~/.ssh/lucky_to_mac luckyai@100.90.7.148 "mv ~/temp/* '/Volumes/Crucial X10/research/raw-data/project/'"
    
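Steps 1 and 3 above can be sketched as a small helper. The name `save_raw_capture` and its signature are hypothetical (not part of the skill); it assumes the page text and HTML have already been captured (e.g. via Puppeteer) and writes them in the METADATA / RAW TEXT / RAW HTML layout shown above:

```python
from datetime import datetime, timezone
from pathlib import Path

def save_raw_capture(base_dir: Path, project: str, source: str,
                     url: str, query: str, text: str, html: str) -> Path:
    """Write one captured page in the METADATA / RAW TEXT / RAW HTML layout."""
    now = datetime.now(timezone.utc)
    # Directory per day and project, matching the mkdir step above.
    out_dir = base_dir / f"{now.strftime('%Y-%m-%d')}-{project}"
    out_dir.mkdir(parents=True, exist_ok=True)
    # Consistent project-source-timestamp naming, as recommended below.
    stamp = now.strftime("%Y%m%dT%H%M%SZ")
    out_file = out_dir / f"{project}-{source}-{stamp}.html"
    out_file.write_text(
        "METADATA:\n"
        f"URL: {url}\n"
        f"TIMESTAMP: {stamp}\n"
        f"QUERY: {query}\n\n"
        "RAW TEXT:\n"
        f"{text}\n\n"
        "RAW HTML:\n"
        f"{html}\n"
    )
    return out_file
```

The transfer step (scp/ssh) is deliberately left out: as the scan notes above, do not copy data to a host or key you have not verified.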

Phase 2: Parallel Analysis (Jinx)

Time: 20-40% of total project time
Focus: Pattern extraction and structured output

  1. Task Assignment Validation

    • ✅ Analyzing local files (no internet needed)
    • ✅ Structured data processing
    • ✅ Text analysis and extraction
  2. Send structured analysis tasks to Jinx

    curl -X POST http://localhost:3001/task -H 'Content-Type: application/json' -d '{
      "prompt": "Analyze files in /Volumes/Crucial X10/research/raw-data/project/. Extract: [specific_data_points]. Output structured JSON with [required_format]. Provide analysis summary with [specific_insights].",
      "priority": "high"
    }'
    
  3. Key prompting strategies for Jinx:

    • Be specific about data extraction requirements
    • Request JSON output format
    • Ask for both raw findings AND summary analysis
    • Include comparison requirements if multiple sources
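The curl payload above can be assembled programmatically. This sketch (hypothetical helper `build_jinx_task`) only builds the JSON body; it assumes the local Jinx service at http://localhost:3001/task exists and accepts this shape, which the skill does not formally document:

```python
import json

def build_jinx_task(data_dir: str, data_points: list,
                    output_format: str, insights: list,
                    priority: str = "high") -> str:
    """Build the JSON body for POST http://localhost:3001/task (assumed API)."""
    prompt = (
        f"Analyze files in {data_dir}. "
        f"Extract: {', '.join(data_points)}. "
        f"Output structured JSON with {output_format}. "
        f"Provide analysis summary with {', '.join(insights)}."
    )
    # json.dumps handles quoting/escaping that is easy to get wrong
    # when hand-writing the -d argument to curl.
    return json.dumps({"prompt": prompt, "priority": priority})
```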

Phase 3: Compilation & Skills Documentation (Lucky)

Time: 10-20% of total project time
Focus: Synthesis and actionable insights

  1. Collect Jinx results

    curl -s http://localhost:3001/results/[task-id]
    
  2. Compile comprehensive report

    • Executive summary with key findings
    • Structured data tables/comparisons
    • Strategic recommendations
    • Process insights and improvements
  3. Document process learnings

    • What worked well / areas for improvement
    • Time saved vs sequential approach
    • Quality of analysis vs manual extraction

Best Practices

Data Gathering (Lucky)

  • Capture everything — let Jinx filter, don't pre-filter
  • Use consistent file naming — project-source-timestamp.html
  • Include rich metadata — helps Jinx understand context
  • Work in batches — send first batch to Jinx while gathering more
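The batching advice above can be sketched as a one-line helper (the name `batches` and the batch size are arbitrary choices, not part of the skill):

```python
def batches(files: list, size: int) -> list:
    """Split captured files into fixed-size batches so the first batch
    can go to Jinx while Lucky keeps gathering the rest."""
    return [files[i:i + size] for i in range(0, len(files), size)]
```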

Analysis Tasks (Jinx)

  • Be specific about extraction requirements
  • Request execution — ask Jinx to run analysis scripts, not just provide them
  • Structure output — JSON format for easy parsing
  • Ask for insights — not just data extraction but pattern analysis

Collaboration

  • Send tasks early — don't wait for all data before starting analysis
  • Check progress regularly — curl status API to monitor queue
  • Quality over quantity — better to analyze fewer sources deeply

Time Estimates

| Research Scope        | Lucky Time | Jinx Time | Total Effective |
|-----------------------|------------|-----------|-----------------|
| Small (3-5 sources)   | 20 min     | 15 min    | 25 min          |
| Medium (5-10 sources) | 40 min     | 20 min    | 45 min          |
| Large (10+ sources)   | 60 min     | 30 min    | 70 min          |

Effective time = max(Lucky, Jinx) due to parallelization

Security Considerations

  • HTML sanitization — Strip <script> tags before sending to Jinx
  • No executable content — Only pass text/HTML data, never code
  • Local processing — Jinx has no internet access, data stays secure
  • File permissions — Ensure Jinx can read files on SSD
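The first two points above can be approximated with a small sanitizer. This is a minimal regex-based sketch; for untrusted or malformed HTML a real HTML parser is more robust, and note that stripping `<script>` tags alone does not remove all executable content (e.g. event-handler attributes):

```python
import re

def strip_scripts(html: str) -> str:
    """Remove <script>...</script> blocks before handing HTML to Jinx."""
    return re.sub(r"<script\b[^>]*>.*?</script\s*>", "", html,
                  flags=re.IGNORECASE | re.DOTALL)
```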

Success Metrics

  • Speed: 30-50% time savings vs sequential research
  • Coverage: Ability to analyze larger datasets comprehensively
  • Quality: Structured, actionable insights vs raw data dumps
  • Scalability: Process works for 5 sources or 50 sources

Example Use Cases

  1. Market Research: Lucky scrapes Gumroad/Etsy → Jinx extracts pricing/features
  2. API Comparison: Lucky gathers docs → Jinx compares capabilities/pricing
  3. Trend Analysis: Lucky gets Google Trends → Jinx identifies patterns
  4. Competitor Analysis: Lucky browses sites → Jinx structures competitive matrix
  5. Content Analysis: Lucky gathers articles → Jinx summarizes themes/insights

Market Research Template

For marketplace/competitor analysis specifically, use this structured approach:

Data Collection Checklist

For each competitor/product found:

## Competitor: [Name]
- Product: [Title]
- Price: $[Amount]
- Bundle Size: [X items]
- Format: [Canva/PSD/AI/etc]
- Sales Indicators: [Reviews/ratings/badges]
- Key Features: [List]
- Customer Complaints: [Common issues from reviews]
- Opportunities: [What they're missing]
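The checklist above can be rendered from structured data so every profile comes out in the same shape. This sketch assumes a plain dict with hypothetical keys mirroring the template fields:

```python
def competitor_profile(c: dict) -> str:
    """Render one competitor record in the checklist format above."""
    return (
        f"## Competitor: {c['name']}\n"
        f"- Product: {c['product']}\n"
        f"- Price: ${c['price']}\n"
        f"- Bundle Size: {c['bundle_size']} items\n"
        f"- Format: {c['format']}\n"
        f"- Sales Indicators: {c['sales_indicators']}\n"
        f"- Key Features: {', '.join(c['key_features'])}\n"
        f"- Customer Complaints: {c['complaints']}\n"
        f"- Opportunities: {c['opportunities']}\n"
    )
```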

Market Analysis Phases

  1. Market Mapping — Browse categories on target platforms (Gumroad, Etsy, Creative Market, Redbubble). Screenshot layouts. Document pricing patterns.
  2. Competitor Deep Dive — Top performers, pricing intelligence, positioning, visual trends.
  3. Customer Intelligence — Mine reviews for pain points, gaps, price sensitivity, feature requests.
  4. Trend Analysis — Style evolution, platform preferences, niche saturation, seasonal patterns.
  5. Gap Analysis — What customers want but can't find. Underserved niches.

Browser Research Workflow

  1. Start browser session
  2. Navigate to marketplace, search category
  3. Capture screenshots of results
  4. Visit top competitor pages
  5. Document structured data per template above
  6. Save to SSD, feed to Jinx for pattern analysis

Output Deliverables

  • Structured competitor profiles
  • Pricing analysis with recommendations
  • Market gap identification
  • Customer pain point summary
  • Launch strategy recommendations

Process Evolution

Track and improve:

  • Which DOM selectors/sites work best
  • Jinx prompt patterns that yield best results
  • File transfer automation opportunities
  • Quality indicators for different research types

This skill creates a scalable, repeatable process for any research requiring both web access and deep analysis.
