Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

AI Job Hunter Pro

v1.3.0

AI-powered job search assistant with RAG-based resume-JD matching, automated application pipeline, and status tracking. Use when the user wants to search for...

Security Scan

VirusTotal: Suspicious · View report →
OpenClaw: Suspicious (medium confidence)
Purpose & Capability
The name and description describe resume-to-job matching, auto-apply, and tracking, which aligns with the included scrapers, RAG engine, apply_pipeline, and tracker. However, the skill declares only 'python3' as a required binary, while the code depends on Playwright browser automation and Python packages (playwright, chromadb/embedding libs, etc.). The README and scripts reference browser automation/Playwright and LLM/tool names (Claude, Gemini, ChromaDB), yet no corresponding environment variables or install instructions for browser binaries appear in the registry metadata: this mismatch is noteworthy.
Instruction Scope
SKILL.md instructs the agent to read ~/job_profile.json and the user's resume and to store a local DB (~/.ai-job-hunter-pro), which is expected. The skill promises to 'store all data locally' and 'never send resume data to external services other than the job platforms themselves', but the codebase contains a RAG engine and references to external LLMs and CDNs. Without inspecting rag_engine.py and setup_rag.py in full, it is unclear whether embeddings and LLM calls stay local (sentence-transformers + Chroma) or go to remote APIs (OpenAI/Anthropic/Google). The dashboard.html loads Chart.js and Google Fonts from CDNs (network calls), and the auto-apply pipeline performs browser automation using user sessions, which implies access to browser cookies and session state. The instructions correctly enforce dry-run by default, but they give the agent wide discretion to scrape many sites and perform automated submissions once configured; that level of access should be confirmed.
Install Mechanism
The skill is instruction-only (no registry install spec), which reduces installer-level risk. Setup instructions ask the user to pip install -r scripts/requirements.txt and run setup_rag.py --init. The code clearly depends on Playwright and browser binaries; one scraper prints a message asking the user to run 'playwright install chromium' if Playwright is not installed. There is no download-from-arbitrary-URL or extraction step in the registry metadata, but users must run pip and browser installs themselves — verify the requirements.txt and setup_rag.py before executing to see what packages/binaries will be installed.
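One way to act on this before running pip install is a quick pass over requirements.txt for client libraries that typically call cloud APIs. This is a sketch, not a complete audit: the package list below is illustrative, and the sample file contents are hypothetical.

```python
# Sketch: flag requirements that commonly imply off-host API calls.
# REMOTE_API_PACKAGES is illustrative, not exhaustive.
import re

REMOTE_API_PACKAGES = {"openai", "anthropic", "google-generativeai", "cohere"}

def flag_remote_packages(requirements_text: str) -> list[str]:
    flagged = []
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        # Package name is everything before a version specifier or extra.
        name = re.split(r"[<>=!~\[;]", line, maxsplit=1)[0].strip().lower()
        if name in REMOTE_API_PACKAGES:
            flagged.append(name)
    return flagged

# Hypothetical requirements mixing local and remote-API packages.
sample = "playwright>=1.40\nchromadb\nopenai==1.3.0  # remote LLM client\nsentence-transformers"
print(flag_remote_packages(sample))
```

A hit here does not prove data leaves the machine, but it tells you where API keys would have to go and which imports to trace in the scripts.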
Credentials
Registry metadata lists no required environment variables or primary credential, yet the codebase references model/LLM names (Claude, Gemini, OpenAI-like tokens in README and HIGHLIGHT_MAP strings) and platform integrations that commonly require credentials or sessions (LinkedIn, Boss直聘, Indeed). The skill will need browser sessions or API keys to submit applications; these are not declared. There is also a 'report_channel': 'whatsapp' present in the profile template suggesting possible external reporting channels, but no env vars or code shown for WhatsApp integration. The absence of declared credentials is inconsistent and should be clarified.
Persistence & Privilege
The skill does create persistent local state (a profile at ~/job_profile.json and a SQLite DB at ~/.ai-job-hunter-pro/applications.db), which is consistent with its purpose. 'always' is false and 'disable-model-invocation' is left at its default; the skill does not request elevated system-wide privileges. It stores data under the user's home directory rather than system locations, which is proportionate for this functionality.
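Since the report does not show the DB schema, a quick stdlib check like the following (DB path taken from this review; the database is opened read-only) shows which tables the skill actually persists after a run:

```python
# Sketch: list tables in the skill's local SQLite DB to see what it
# persists. Read-only mode avoids modifying the file; it must exist.
import sqlite3
from pathlib import Path

def list_tables(db_path: str) -> list[str]:
    uri = f"file:{Path(db_path).expanduser()}?mode=ro"
    conn = sqlite3.connect(uri, uri=True)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
    finally:
        conn.close()
    return [name for (name,) in rows]

# Usage, after the skill has run at least once:
# print(list_tables("~/.ai-job-hunter-pro/applications.db"))
```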
What to consider before installing
Things to check before installing or enabling auto-apply:

- Inspect scripts/requirements.txt and scripts/setup_rag.py to see exactly which Python packages and binaries will be installed (especially anything that talks to cloud APIs or downloads executables). If any packages call remote LLMs (openai, anthropic, google-cloud), confirm where API keys should go and whether data is sent off-host.
- Open scripts/rag_engine.py and setup_rag.py and search for network calls or client libraries (openai, anthropic, google, requests, httpx). If the RAG flow uses remote LLM or embedding APIs, expect to provide API keys and understand that resume text may be sent to those services, which would contradict the 'local-only' promise.
- Confirm how browser sessions are handled for auto-apply: the code uses Playwright and operates with your browser/session state. Automation may use your logged-in accounts and cookies, which can trigger platform anti-bot defenses or account locks.
- Keep the default dry-run and require_confirmation settings until you have manually reviewed outputs and test runs. Never enable automatic submit/auto_greet without testing in a controlled environment.
- Note that the dashboard pulls JS and fonts from CDNs (Chart.js, Google Fonts), so viewing it makes your machine issue network requests to third-party providers; consider hosting the assets locally if you need full privacy.
- If you need higher assurance, run the skill in an isolated environment (container or VM) and review network traffic during setup and first runs.

If anything in rag_engine.py or other scripts references OPENAI_API_KEY / ANTHROPIC_API_KEY / GOOGLE_* or posts to non-job-platform endpoints, treat that as a red flag. If you can share the full contents of scripts/rag_engine.py, scripts/setup_rag.py, and scripts/requirements.txt, I can give a higher-confidence verdict and point to the exact lines that contact external APIs or require secrets.
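The env-var and client-library checks above can be automated as a first pass. A sketch along these lines works; the patterns are illustrative (env-var names come from this review), and a hit only marks a line for manual reading, not proof of exfiltration:

```python
# Sketch: scan script text for network clients and API-key env vars.
# A match flags a line for review; it is not a verdict by itself.
import re

PATTERNS = [
    r"\bOPENAI_API_KEY\b",
    r"\bANTHROPIC_API_KEY\b",
    r"\bGOOGLE_\w+\b",
    r"\bimport\s+(requests|httpx|openai|anthropic)\b",
]

def scan_source(source: str) -> list[str]:
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pat in PATTERNS:
            if re.search(pat, line):
                hits.append(f"line {lineno}: {line.strip()}")
                break  # one report per line is enough
    return hits

# Hypothetical snippet standing in for rag_engine.py contents.
sample = 'import requests\nKEY = os.environ["OPENAI_API_KEY"]\nprint("local only")'
for hit in scan_source(sample):
    print(hit)
```

Run it over every file in scripts/ before the first execution; anything it flags should be read in context before you provide credentials or disable dry-run.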

Like a lobster shell, security has layers — review code before you run it.

latest · vk97d2zpw7fejh5jeqgzfyhq0vd83ddzt

License

MIT-0
Free to use, modify, and redistribute. No attribution required.

Runtime requirements

🎯 Clawdis
Bins: python3
