Install

```bash
openclaw skills install lightpanda
```

Lightpanda is a lightweight headless browser written in Zig (not a Chromium fork), roughly 9x faster and 16x more memory-efficient than headless Chrome for web scraping and content extraction.
Performance comparison:

| Metric | Lightpanda | Headless Chrome | Gap |
|---|---|---|---|
| Memory (100 pages) | 123MB | 2GB | 16x less |
| Speed (100 pages) | 5s | 46s | 9x faster |
```bash
# Linux
curl -L -o lightpanda https://github.com/lightpanda-io/browser/releases/download/nightly/lightpanda-x86_64-linux && \
  chmod a+x ./lightpanda

# macOS
curl -L -o lightpanda https://github.com/lightpanda-io/browser/releases/download/nightly/lightpanda-aarch64-macos && \
  chmod a+x ./lightpanda
```
```bash
# Show the version
./lightpanda version

# Fetch a page as HTML
./lightpanda fetch --obey-robots --dump html --log-format pretty --log-level info <URL>

# Fetch a page as Markdown (recommended)
./lightpanda fetch --obey-robots --dump markdown --log-format pretty --log-level info <URL>

# Wait for the page to load before dumping
./lightpanda fetch --obey-robots --dump markdown --wait-ms 3000 <URL>

# Wait for a specific element
./lightpanda fetch --obey-robots --dump markdown --wait-selector ".content" <URL>
```
```python
import subprocess

def fetch_url(url, format="markdown", wait_ms=2000):
    """Fetch a page with Lightpanda."""
    output_format = "markdown" if format == "markdown" else "html"
    cmd = [
        "./lightpanda", "fetch",
        "--obey-robots",
        "--dump", output_format,
        "--wait-ms", str(wait_ms),
        "--log-format", "pretty",
        url,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout

# Example usage
content = fetch_url("https://example.com", "markdown")
print(content)
```
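The `fetch_url` sketch above silently returns an empty string when the browser fails. A minimal hardened variant, checking the exit code and bounding run time, might look like this (`fetch_or_raise` is a hypothetical helper, not part of Lightpanda; the binary path is parameterized rather than hard-coded to `./lightpanda`):

```python
import subprocess

def fetch_or_raise(url, binary="./lightpanda", fmt="markdown", wait_ms=2000, timeout=60):
    """Fetch a page, raising on non-zero exit or when `timeout` seconds elapse.

    Mirrors the flags used in the examples above; `binary` is a parameter
    so the wrapper is easy to test against a stand-in command.
    """
    cmd = [binary, "fetch", "--obey-robots", "--dump", fmt,
           "--wait-ms", str(wait_ms), url]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    if result.returncode != 0:
        raise RuntimeError(
            f"lightpanda exited with {result.returncode}: {result.stderr.strip()}"
        )
    return result.stdout
```

A `subprocess.TimeoutExpired` from the `timeout` argument propagates to the caller, so hung pages do not stall a batch job indefinitely.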
| Scenario | Notes |
|---|---|
| 🌐 Web scraping | Lightweight and fast; well suited to bulk crawls |
| 📄 Content extraction | Convert to Markdown for easier downstream processing |
| 🔍 Competitor analysis | Re-fetch pages on a schedule |
| 📰 News aggregation | Pull article content |
| 📊 Data monitoring | Watch pages for changes |
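For the data-monitoring row above, a minimal change detector hashes each fetched dump and compares it against the last snapshot. This is a sketch; `content_hash` and `page_changed` are hypothetical helpers meant to be paired with `fetch_url` from the earlier example:

```python
import hashlib

def content_hash(content: str) -> str:
    """Stable fingerprint of a page dump."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

def page_changed(previous_hash, content: str) -> bool:
    """True if `content` differs from the snapshot behind `previous_hash`.

    Pass None for `previous_hash` on the first run, which always
    counts as a change.
    """
    return previous_hash != content_hash(content)
```

Persist the hash (a file or key-value store) between runs and alert only when `page_changed` returns True.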
Tips:

- Use `--obey-robots` to respect each site's robots.txt.
- Use `--dump markdown` for output that is easier to process downstream.
- To run Lightpanda as a long-lived CDP server instead of one-shot fetches:

```bash
docker run -d --name lightpanda -p 127.0.0.1:9222:9222 lightpanda/browser:nightly
```
```bash
./lightpanda fetch --obey-robots --dump markdown --log-format pretty --log-level info https://news.ycombinator.com > output.md
```
```python
import subprocess
import time

urls = [
    "https://example.com/page1",
    "https://example.com/page2",
    "https://example.com/page3",
]

for url in urls:
    print(f"Fetching: {url}")
    result = subprocess.run(
        ["./lightpanda", "fetch", "--obey-robots", "--dump", "markdown", "--wait-ms", "2000", url],
        capture_output=True,
        text=True,
    )
    # Process result.stdout here
    time.sleep(1)  # Politeness delay between requests
```
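The batch loop above leaves `result.stdout` unprocessed. One simple way to keep each page is to derive a flat filename from its URL and write the dump to disk (`url_to_filename` and `save_page` are hypothetical helpers, shown as one possible approach):

```python
import re
from pathlib import Path

def url_to_filename(url: str, ext: str = "md") -> str:
    """Turn a URL into a safe flat filename.

    Strips the scheme and replaces any character outside
    [A-Za-z0-9._-] with an underscore.
    """
    stripped = re.sub(r"^https?://", "", url).rstrip("/")
    safe = re.sub(r"[^A-Za-z0-9._-]+", "_", stripped)
    return f"{safe}.{ext}"

def save_page(url: str, content: str, out_dir: str = "pages") -> Path:
    """Write one fetched dump under `out_dir` and return its path."""
    path = Path(out_dir) / url_to_filename(url)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(content, encoding="utf-8")
    return path
```

Inside the loop, replace the placeholder comment with `save_page(url, result.stdout)`.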
```python
import subprocess

def scrape_for_rag(url):
    """Fetch a page as Markdown for RAG ingestion."""
    result = subprocess.run(
        ["./lightpanda", "fetch", "--obey-robots", "--dump", "markdown", "--wait-ms", "3000", url],
        capture_output=True,
        text=True,
    )
    return result.stdout
```
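RAG pipelines typically need the Markdown from `scrape_for_rag` split into chunks before embedding. A naive fixed-size splitter with overlap is sketched below; `chunk_markdown` is a hypothetical helper (real pipelines often split on headings or sentences instead):

```python
def chunk_markdown(text: str, max_chars: int = 1000, overlap: int = 100) -> list:
    """Split text into overlapping fixed-size chunks.

    Character-based and deliberately simple: each chunk starts
    `max_chars - overlap` characters after the previous one, so
    adjacent chunks share `overlap` characters of context.
    """
    if max_chars <= overlap:
        raise ValueError("max_chars must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks
```

The overlap keeps sentences that straddle a chunk boundary retrievable from either side.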