
Doc-to-LoRA

v1.2.0


License: MIT-0
by Manoj Bhat (@manojbhat09)

Doc-to-LoRA Skill

Internalize any document into a small model's weights in seconds. No fine-tuning loop, no RAG retrieval at query time. The model "knows" the document.

How It Works (30-second summary)

A trained hypernetwork reads your document and instantly generates LoRA adapter weights for every layer of Gemma 2 2B. The adapter is applied to the base model, which can then answer questions about the document without it being in the prompt.

Document --> Context Encoder --> Perceiver --> HyperLoRA --> LoRA weights
                                                                |
                                                    Apply to Gemma 2 2B
                                                                |
                                                    Answer questions (no doc in prompt)

For architecture details, read references/ARCHITECTURE.md in this skill directory.
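
As a purely illustrative toy (not the D2L code, and with hypothetical names throughout), the idea of a hypernetwork emitting LoRA weights can be sketched like this: a small network maps a pooled document representation to low-rank (A, B) factors, which are added as a LoRA delta on top of a frozen linear layer standing in for one Gemma layer.

import torch
import torch.nn as nn

# Toy illustration only -- not the D2L implementation.
d_model, rank = 64, 4

class ToyHyperLoRA(nn.Module):
    def __init__(self):
        super().__init__()
        self.to_a = nn.Linear(d_model, rank * d_model)   # predicts flattened A
        self.to_b = nn.Linear(d_model, d_model * rank)   # predicts flattened B

    def forward(self, doc_repr):
        a = self.to_a(doc_repr).view(rank, d_model)
        b = self.to_b(doc_repr).view(d_model, rank)
        return a, b

base = nn.Linear(d_model, d_model)   # frozen stand-in for one Gemma 2 2B layer
hyper = ToyHyperLoRA()

doc_repr = torch.randn(d_model)      # stand-in for the encoded document
a, b = hyper(doc_repr)

x = torch.randn(d_model)
y = base(x) + b @ (a @ x)            # base output plus the generated LoRA delta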

Security Notes

  • Checkpoint loading: internalize.py uses torch.load(weights_only=False) because D2L checkpoints embed Python config dataclasses (AggregatorConfig, LoraConfig, HypernetConfig) alongside tensor weights. The upstream D2L project uses this format. Only load checkpoints you trust. The default checkpoint source is the official SakanaAI/doc-to-lora HuggingFace repository.
  • HF_TOKEN: Required for downloading gated Gemma weights. This is a sensitive secret. The scripts only pass it to huggingface-cli download and transformers model loading. It is not sent anywhere else.
  • No remote code execution: setup.sh does not download or execute remote scripts. It requires uv and python3 to be pre-installed by the user. All dependency installation is done via uv pip install with pinned versions.
  • Checkpoint integrity: After downloading, you can verify the checkpoint against the HuggingFace repo's commit hash. The download uses huggingface-cli which verifies checksums automatically.
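
One way to act on the checkpoint-integrity note above (a sketch, not part of the skill's scripts) is to hash the downloaded file and compare the digest against the one shown for the pinned commit on the SakanaAI/doc-to-lora file listing:

import hashlib

# Hash the downloaded checkpoint; compare against the digest published for
# the commit you intend to trust on the HuggingFace repo.
CKPT = "trained_d2l/gemma_demo/checkpoint-80000/pytorch_model.bin"

h = hashlib.sha256()
with open(CKPT, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
print(h.hexdigest())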

Prerequisites

This skill runs inside a clone of the doc-to-lora repository. It is not a standalone tool.

Required before setup:

  • A local clone of the doc-to-lora repository as your working directory
  • python3 and uv installed (setup.sh does not install them for you)
  • An HF_TOKEN with access to the gated Gemma 2 weights

Run setup once. This installs Python dependencies and downloads model weights (~7GB total).

export HF_TOKEN=hf_your_token_here
bash ${CLAUDE_SKILL_DIR}/scripts/setup.sh

If setup was already completed, skip this step. Check with:

test -d trained_d2l/gemma_demo && echo "Weights present" || echo "Run setup first"

Workflow A: PyTorch Path (simpler, ~10GB RAM)

Use this when the user provides a document and wants answers. The internalize.py script handles both internalization and querying in one call.

Internalize a document and ask questions

python ${CLAUDE_SKILL_DIR}/scripts/internalize.py \
  --input "path/to/document.txt" \
  --question "What is the main finding?" \
  --checkpoint trained_d2l/gemma_demo/checkpoint-80000/pytorch_model.bin

Or pass text directly:

python ${CLAUDE_SKILL_DIR}/scripts/internalize.py \
  --text "Paste the document content here..." \
  --question "What is this about?"

For multiple questions, pass them comma-separated:

python ${CLAUDE_SKILL_DIR}/scripts/internalize.py \
  --input "path/to/document.txt" \
  --question "Question 1?,Question 2?,Question 3?"

For programmatic use, output results as JSON:

python ${CLAUDE_SKILL_DIR}/scripts/internalize.py \
  --input doc.txt --question "Q?" --output-json results.json
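
If you consume results.json downstream, a minimal reader could look like the sketch below. The output schema isn't documented in this skill, so the field names (question, answer) are assumptions; inspect the file once and adjust:

import json

# Read the JSON written by --output-json; field names here are assumed.
with open("results.json") as f:
    results = json.load(f)

records = results if isinstance(results, list) else results.get("results", [])
for item in records:
    print(item.get("question", "?"), "->", item.get("answer", ""))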

Workflow B: MLX Path (faster, ~6GB RAM, recommended for Mac)

Use this for best performance on Apple Silicon. Two-phase: export once, query fast.

Step 1: Export LoRA adapter from document

python scripts/export_d2l_to_mlx_adapter.py \
  --checkpoint trained_d2l/gemma_demo/checkpoint-80000/pytorch_model.bin \
  --context-file "path/to/document.txt" \
  --output-dir adapters_d2l

Step 2: Query with MLX (lightweight, Metal-accelerated)

python ${CLAUDE_SKILL_DIR}/scripts/query_mlx.py \
  --adapter-dir adapters_d2l \
  --question "What is the main finding?"
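
Under the hood, query_mlx.py amounts to loading the base model with the exported adapter and generating. A rough sketch using the mlx_lm library follows; it assumes the adapter directory is in a format mlx_lm accepts and that google/gemma-2-2b is the matching base repo (the actual script may differ):

from mlx_lm import load, generate

# Load Gemma 2 2B with the exported LoRA adapter applied, then answer
# without the document in the prompt. Requires HF access to the gated repo.
model, tokenizer = load("google/gemma-2-2b", adapter_path="adapters_d2l")
answer = generate(model, tokenizer, prompt="What is the main finding?", max_tokens=256)
print(answer)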

When to Use Which Path

Scenario                           | Path    | Why
Quick one-off question about a doc | PyTorch | Simpler, no export step
Many questions about the same doc  | MLX     | Export once, query fast and cheap
RAM-constrained (16GB Mac)         | MLX     | ~6GB vs ~10GB at query time
Multiple documents to compare      | MLX     | Export each, swap adapters instantly

Limitations

  • Base model: Gemma 2 2B only (with released weights). Small model = limited reasoning.
  • Document length: Up to ~6144 tokens (~4000-5000 words). Longer docs are chunked. (A quick token-count check follows this list.)
  • Training required for new base models: The hypernetwork must be trained (8xA100 GPUs) to support a different base model. Inference is Mac-friendly.
  • Factual recall, not reasoning: Best for "what does the doc say" questions, not deep multi-hop reasoning over the document.
  • No real-time updates: Once internalized, the adapter is static. Change the doc = re-internalize.
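
To check the document-length limit before internalizing, you can count Gemma tokens up front. A sketch, assuming the google/gemma-2-2b tokenizer and an HF_TOKEN with access to the gated repo:

from transformers import AutoTokenizer

# Count tokens to see whether the document fits the ~6144-token budget
# in one pass or will be chunked.
tok = AutoTokenizer.from_pretrained("google/gemma-2-2b")
with open("path/to/document.txt") as f:
    n = len(tok(f.read())["input_ids"])
print(f"{n} tokens -> {'fits in one pass' if n <= 6144 else 'will be chunked'}")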

Troubleshooting

Problem                                             | Fix
ModuleNotFoundError: No module named 'ctx_to_lora'  | Run setup: bash ${CLAUDE_SKILL_DIR}/scripts/setup.sh
FileNotFoundError: trained_d2l/...                  | Download weights: uv run huggingface-cli download SakanaAI/doc-to-lora --local-dir trained_d2l
FileNotFoundError: install_mac.sh                   | This skill must be used inside a doc-to-lora repo clone that contains install_mac.sh
RuntimeError: MPS backend out of memory             | Use the MLX path instead, or close other apps
ImportError: bitsandbytes                           | Expected on Mac. The scripts auto-disable quantization on non-CUDA.
Answers seem wrong / generic                        | Check that the LoRA adapter is applied: outputs should differ from baseline. Try rephrasing the question.

Version tags

latest: vk97c4b58bd15csfbb9jph21shx82zdwf

Runtime requirements

OS: macOS
Bins: python3, uv
Env: HF_TOKEN