## Install

Intelligent PDF and image to Markdown converter with smart content detection (text/table/figure). It uses the Ollama GLM-OCR model to recognize text, tables, and figures in PDF pages, applies the most appropriate prompt for OCR processing, and outputs a structured Markdown document.

openclaw skills install pdf-ocr-tool
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
ollama pull glm-ocr:q8_0
# Install poppler-utils (for PDF to image conversion)
sudo apt install poppler-utils # Debian/Ubuntu
brew install poppler # macOS
# Install uv package manager
curl -LsSf https://astral.sh/uv/install.sh | sh
cd skills/pdf-ocr-tool
uv venv
source .venv/bin/activate
uv add requests Pillow
# Alternative: install via clawhub
npx clawhub install pdf-ocr-tool
# Clone or download skill
git clone <repo> ~/.openclaw/workspace/skills/pdf-ocr-tool
# Create virtual environment and install dependencies
cd ~/.openclaw/workspace/skills/pdf-ocr-tool
uv venv
source .venv/bin/activate
uv add requests Pillow
# Run post-install script
bash hooks/post-install.sh
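To confirm the model pulled correctly, you can query Ollama's standard `/api/tags` REST endpoint and look for the model name. This is a minimal sketch (the helper names are my own, not part of the skill):

```python
import json
import urllib.request

def parse_model_names(tags_json: str) -> list[str]:
    """Extract model names from an /api/tags response body."""
    return [m["name"] for m in json.loads(tags_json).get("models", [])]

def model_available(name: str, host: str = "localhost", port: int = 11434) -> bool:
    """Query the local Ollama server and check whether `name` is pulled."""
    url = f"http://{host}:{port}/api/tags"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return name in parse_model_names(resp.read().decode())
```

For example, `model_available("glm-ocr:q8_0")` should return True after the `ollama pull` above.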
# Auto-detect content type (recommended)
python ocr_tool.py --input document.pdf --output result.md
# Specify processing mode
python ocr_tool.py --input document.pdf --output result.md --mode text
python ocr_tool.py --input document.pdf --output result.md --mode table
python ocr_tool.py --input document.pdf --output result.md --mode figure
# Mixed mode: split page into regions
python ocr_tool.py --input document.pdf --output result.md --granularity region
# Process a single image
python ocr_tool.py --input image.png --output result.md --mode mixed
# Specify Ollama host and port
python ocr_tool.py --input document.pdf --output result.md \
--host localhost --port 11434
# Use different model
python ocr_tool.py --input document.pdf --output result.md \
--model glm-ocr:q8_0
# Custom prompt
python ocr_tool.py --input image.png --output result.md \
--prompt "Convert this table to Markdown format, keeping rows and columns aligned"
# Save figure region images
python ocr_tool.py --input document.pdf --output result.md --save-images
# Set default configuration
export OLLAMA_HOST="localhost"
export OLLAMA_PORT="11434"
export OCR_MODEL="glm-ocr:q8_0"
# Run
python ocr_tool.py --input document.pdf --output result.md
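The usual convention for flags plus environment variables is that an explicit CLI flag overrides the environment, which overrides the built-in default. A sketch of that precedence (illustrative only; `ocr_tool.py` may resolve its config differently):

```python
import os

def resolve_config(cli_host=None, cli_port=None, cli_model=None) -> dict:
    """CLI flags win; then environment variables; then built-in defaults."""
    return {
        "host": cli_host or os.environ.get("OLLAMA_HOST", "localhost"),
        "port": int(cli_port or os.environ.get("OLLAMA_PORT", "11434")),
        "model": cli_model or os.environ.get("OCR_MODEL", "glm-ocr:q8_0"),
    }
```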
| Mode | Description | Use Case |
|---|---|---|
| auto | Auto-detect content type | General use (default) |
| text | Pure text recognition | Academic papers, articles, reports |
| table | Table recognition | Data tables, financial reports |
| figure | Chart/figure recognition | Statistical charts, flowcharts, diagrams |
| mixed | Mixed mode | Pages with multiple content types |
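The table suggests each mode selects one of the prompt templates described later, with auto falling back to whatever type was detected. A purely illustrative sketch of that dispatch (the mapping and function are hypothetical, not the skill's actual source):

```python
# Hypothetical mode -> prompt template mapping; the real skill may differ.
PROMPT_FILES = {
    "text": "prompts/text.md",
    "table": "prompts/table.md",
    "figure": "prompts/figure.md",
}

def prompt_for(mode: str, detected: str = "text") -> str:
    """Resolve the template for a mode; auto/mixed defer to the detected type."""
    key = detected if mode in ("auto", "mixed") else mode
    return PROMPT_FILES[key]
```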
When using `--granularity region`:
# PDF to Markdown Result
**Total Pages**: 15
**Model**: glm-ocr:q8_0
**Mode**: auto
**Generated**: 2026-02-27T01:00:00+08:00
---
## Page 1
*Type: mixed*
### Region 1 (text)
[OCR recognized text content]
### Region 2 (table)
<table>
<tr><th>Column 1</th><th>Column 2</th></tr>
<tr><td>Data 1</td><td>Data 2</td></tr>
</table>
### Region 3 (figure)
[Chart description]

---
# image.png OCR Result
Model: glm-ocr:q8_0
Mode: table
---
[OCR recognized result]
The tool includes four built-in prompt templates in the prompts/ directory:
**Text** (prompts/text.md): Convert the text in this region to Markdown format.
- Preserve paragraph structure and heading levels
- Handle lists correctly
- Preserve mathematical formulas
- Maintain citations and references
**Table** (prompts/table.md): Convert the table in this region to Markdown table format.
- Maintain row and column alignment
- Preserve all data and values
- Handle merged cells
- Preserve headers and units
**Figure** (prompts/figure.md): Analyze the chart or image in this region:
1. Chart type (bar, line, pie, flowchart, etc.)
2. Titles and axis labels
3. Data trends and key observations
4. Important values and anomalies
Describe in Markdown format.
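A template is plain text sent alongside the page image. A minimal sketch of how a template could be combined into an Ollama `/api/generate` request (the payload fields `model`, `prompt`, `images`, and `stream` follow the standard Ollama API; the helper name is hypothetical):

```python
import base64

def build_ocr_request(image_bytes: bytes, prompt: str,
                      model: str = "glm-ocr:q8_0") -> dict:
    """Build an Ollama /api/generate payload: prompt text plus a base64 image."""
    return {
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,  # request a single complete response
    }
```

The resulting dict would be POSTed as JSON to `http://<host>:<port>/api/generate`.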
import subprocess
from pathlib import Path

# Process PDF (auto mode)
subprocess.run([
    "python", "skills/pdf-ocr-tool/ocr_tool.py",
    "--input", "/path/to/document.pdf",
    "--output", "/tmp/result.md",
    "--mode", "auto",
], check=True)

# Read result
markdown_content = Path("/tmp/result.md").read_text()

# Process single image (table mode)
subprocess.run([
    "python", "skills/pdf-ocr-tool/ocr_tool.py",
    "--input", "/path/to/table.png",
    "--output", "/tmp/table.md",
    "--mode", "table",
], check=True)

# Mixed mode for complex PDF
subprocess.run([
    "python", "skills/pdf-ocr-tool/ocr_tool.py",
    "--input", "/path/to/mixed.pdf",
    "--output", "/tmp/mixed.md",
    "--granularity", "region",  # Split into regions
    "--save-images",            # Save figure images
], check=True)
# Model missing or Ollama server not running
ollama pull glm-ocr:q8_0
ollama serve
sudo apt install poppler-utils # Debian/Ubuntu
brew install poppler # macOS
If recognition results are poor, try forcing a mode with --mode text or --mode mixed, supplying a custom --prompt "your prompt here", or splitting the page with --granularity region.

If dependencies are missing or broken, reinstall them:

cd skills/pdf-ocr-tool
source .venv/bin/activate
uv sync  # Reinstall all dependencies
This tool is developed and maintained by the OpenClaw community.
MIT License