Skill flagged — suspicious patterns detected

ClawHub Security flagged this skill as suspicious. Review the scan results before using.

Kekik Crawler

v0.1.0-rc1

Scrapling-only, deterministic web crawler with clean SRP architecture, presets, checkpointing, and JSONL/report outputs.

by Ömer Faruk Sancak (@keyiflerolsun)
License: MIT-0 · Free to use, modify, and redistribute. No attribution required.
Security Scan
VirusTotal: Suspicious
OpenClaw: Benign (high confidence)
Purpose & Capability
The code implements a Scrapling-based crawler and matches its name and description: crawl orchestration, a fetcher, plugins, checkpointing, JSONL outputs, and a report. No unrelated env vars, binaries, or external services are requested.
Instruction Scope
SKILL.md instructs you to pip install -r requirements.txt and run main.py, which will fetch arbitrary web pages, write output/cache/checkpoint files, and load plugins from a plugin directory. The runtime can fetch robots.txt and search-engine pages (DuckDuckGo/Bing/Yahoo/Brave). Plugin loading uses importlib's exec_module, which executes plugin code; that is expected for a plugin system, but worth reviewing.
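As a rough illustration of why that is worth reviewing, importlib-based plugin loading generally looks like the sketch below; the file names and plugins directory layout are hypothetical, not taken from the skill's source:

```python
import importlib.util
from pathlib import Path

def load_plugin(path: Path):
    """Import a Python file as a module; any top-level code in it runs here."""
    spec = importlib.util.spec_from_file_location(path.stem, path)
    module = importlib.util.module_from_spec(spec)
    # exec_module runs the plugin with full interpreter privileges, which is
    # why pointing plugin_dir at an untrusted path means running untrusted code.
    spec.loader.exec_module(module)
    return module

plugin_dir = Path("plugins")  # hypothetical layout, not the skill's exact path
if plugin_dir.is_dir():
    for plugin_file in sorted(plugin_dir.glob("*.py")):
        load_plugin(plugin_file)
```

There is no sandboxing step in this pattern, so reading every plugin file before running the crawler is the only real safeguard.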
Install Mechanism
No registry install spec is provided; the README/SKILL.md asks you to pip install -r requirements.txt (selectolax, tenacity, orjson, scrapling). Installing from PyPI is normal but carries the usual supply-chain risk, so verify packages and versions. There are no downloads from arbitrary URLs or archives in the manifest.
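After the pip step, you can double-check what actually got installed using the standard library. The package list matches the requirements named above; installed_versions is a hypothetical helper for this review, not part of the skill:

```python
from importlib import metadata

def installed_versions(packages):
    """Report the installed version of each package, or None if absent."""
    report = {}
    for pkg in packages:
        try:
            report[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            report[pkg] = None
    return report

# Packages listed in the skill's requirements.txt.
print(installed_versions(["selectolax", "tenacity", "orjson", "scrapling"]))
```

Comparing this output against pinned versions in requirements.txt is a quick provenance check before running main.py.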
Credentials
The skill declares no required environment variables or credentials, which is proportionate for a web crawler. It writes local files (outputs/, SQLite cache) and does not request unrelated secrets.
Persistence & Privilege
always:false and no modifications to other skills or system-wide configs. The skill stores checkpoints, cache, and reports under outputs/ (normal). Note: presets intentionally set no_robots=True for research presets, which increases crawling aggressiveness but is an operational choice rather than a permission request.
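For context on what a no_robots=True preset disables, a robots.txt check can be sketched with the standard library's urllib.robotparser; the rules and URLs below are illustrative placeholders:

```python
from urllib.robotparser import RobotFileParser

def allowed_by_robots(robots_txt: str, url: str, user_agent: str = "*") -> bool:
    """Parse a robots.txt body and decide whether user_agent may fetch url."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

# A robots.txt that disallows /private/ for all agents.
rules = "User-agent: *\nDisallow: /private/\n"
print(allowed_by_robots(rules, "https://example.com/private/page"))  # False
print(allowed_by_robots(rules, "https://example.com/public"))        # True
```

A crawler that skips this check will fetch disallowed paths, which is why the research presets warrant care about target selection.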
Assessment
This package appears to be a straightforward local crawler, but review a few things before running it:
1) Presets (person-research, deep-research) set no_robots=True and will ignore robots.txt; only use that for acceptable targets.
2) The plugin system loads and executes Python files from the plugins directory; inspect any plugins you use, and do not point plugin_dir at an untrusted path.
3) The runtime will fetch pages and write outputs/cache/checkpoint files under outputs/; run it in an isolated directory or container if you want to limit side effects.
4) The instructions require pip-installing dependencies from PyPI (notably scrapling); verify package versions and provenance.
If you want lower risk, run the tests (pytest) and review the included plugins and requirements before executing the crawler against external or internal networks.

Like a lobster shell, security has layers — review code before you run it.

latest: vk97d7kyt28g6d54nfyv9j8q5nx81w5x1

