URL to Markdown
Advisory
Audited by static analysis on May 11, 2026.
Overview
No suspicious patterns detected.
Findings (0)
Artifact-based informational review of SKILL.md, metadata, install specs, static scan signals, and capability signals. ClawScan does not execute the skill or run runtime probes.
Impact: If invoked with a large or untrusted URL list, the agent could make many web requests and create local Markdown/image files in the chosen location.
Details: The skill intentionally fetches arbitrary user-supplied web pages, supports batch processing, and can download images and write output files. This is purpose-aligned, but it is still meaningful network and file-system activity.
Evidence: "Works with any HTTP/HTTPS URL that returns HTML content... `-f, --file` File containing URLs to convert ... `--download-images` Download remote images to a local folder"
Mitigation: Use intended URLs and output directories only, review batch URL files before running, and be cautious with optional image downloading.
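The mitigation above asks users to review batch URL files before running. A minimal sketch of such a pre-flight check, assuming a plain-text file with one URL per line (the skill's actual `-f, --file` format may differ, and this helper is not part of the skill itself):

```python
from urllib.parse import urlparse

def review_url_file(path):
    """Partition a batch URL file into accepted http/https URLs and
    rejected entries, so the list can be inspected before it is handed
    to the converter. Blank lines and '#' comments are skipped."""
    accepted, rejected = [], []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            url = line.strip()
            if not url or url.startswith("#"):
                continue  # skip blanks and comments
            scheme = urlparse(url).scheme.lower()
            (accepted if scheme in ("http", "https") else rejected).append(url)
    return accepted, rejected
```

Rejected entries (e.g. `file://` or `ftp://` schemes) can then be reviewed by hand rather than fetched blindly.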
Impact: It may be harder to verify that the bundled script matches the claimed upstream project or to track updates.
Details: The registry metadata does not provide a verified source or homepage, even though a Python script is bundled and the README names a Git repository. There is no remote install or dependency download shown, so this is a provenance note rather than a concern.
Evidence: Source: unknown; Homepage: none; No install spec — this is an instruction-only skill.
Mitigation: If provenance matters, inspect the bundled script and compare it with the repository referenced in the README before relying on it.
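One way to carry out the comparison suggested above is a byte-level hash check against a locally checked-out copy of the upstream repository. The file paths here are placeholders, since the advisory does not name the script or the repository:

```python
import hashlib

def sha256_of(path):
    """Hash a file in chunks so large scripts are not loaded into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_upstream(bundled_path, upstream_path):
    """True if the bundled script is byte-identical to the upstream file."""
    return sha256_of(bundled_path) == sha256_of(upstream_path)
```

A mismatch is not automatically malicious (the bundled copy may simply lag upstream), but it means the differences should be read before trusting the script.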
Impact: Generated Markdown may contain misleading, malicious, or instruction-like content from the source page if later fed back into an agent or knowledge base.
Details: The skill can persist arbitrary web content into Markdown files intended for notes or knowledge bases. If those files are later reused as agent context, untrusted page text could be over-trusted.
Evidence: "Save web page content as .md for documentation, archiving, or note-taking... YAML Frontmatter... for knowledge-base workflows"
Mitigation: Treat generated files as untrusted web-derived content, keep source URLs visible, and review them before adding them to persistent agent memory or a shared knowledge base.
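The skill already emits YAML frontmatter; the mitigation above can be sketched as frontmatter that keeps the source URL visible and marks the file as web-derived. The key names below are illustrative assumptions, not the skill's actual schema:

```python
from datetime import datetime, timezone

def provenance_frontmatter(source_url, body_markdown):
    """Wrap converted page text in YAML frontmatter recording the source
    URL, the fetch time, and an explicit untrusted marker, so downstream
    agent or knowledge-base tooling can recognize web-derived content.
    Field names here are hypothetical examples."""
    fetched = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    header = (
        "---\n"
        f"source_url: {source_url}\n"
        f"fetched_at: {fetched}\n"
        "trust: untrusted-web-content\n"
        "---\n\n"
    )
    return header + body_markdown
```

A knowledge-base ingestion step can then refuse, or down-weight, any file carrying the untrusted marker instead of treating page text as authored notes.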
