| # | Ground Truth | Base Model | Fine-Tuned Model |
|---|---|---|---|
| 1 | Woodbury, Peter... | SERVICE<br>... | (3-1776)<br>SERVICE<br>Mass.<br>Woodbury, Peter<br>N... |
| 2 | Vermont 18.682<br>Peter Woodbury<br>of Win... | 18.682<br>Peter Woodbury<br>of London, is a d... | Vermont 18.682<br>Peter Woodbury<br>of Windsor... |
| 3 | 9316<br>Peter Woodbury<br>Admit<br>11 mos 28 ds<br>R... | 9316<br>Peter Woodley<br>Admit<br>11 mos 28d<br>Re... | 9316<br>Peter Woodbury<br>Admit<br>11 mos 28d<br>R... |
| 4 | BRIEF objections to the admission of Pen... | BRIEF objections to the admission of Pen... | BRIEF objections to the admission of Pen... |
| 5 | 2<br>XI. Has the applicant obtained the evi... | , has the applicant obtained the evidenc... | 2<br>XI. Has the applicant obtained the evi... |

✓ = exact match
## Usage

### Installation

```bash
pip install git+https://github.com/huggingface/transformers
pip install pillow torch
```
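Transformers is installed from source above, presumably because the LightOnOCR classes are not yet in a stable release. A quick, optional sanity check that the install exposes them (class names taken from the usage snippet below):

```python
# Verify the source install of transformers exposes the LightOnOCR classes.
from transformers import LightOnOcrForConditionalGeneration, LightOnOcrProcessor

print("LightOnOCR classes available")
```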
### Python Usage
```python
import torch
from transformers import LightOnOcrForConditionalGeneration, LightOnOcrProcessor
from PIL import Image

model_id = "wjbmattingly/LightOnOCR-2-1B-cultural-heritage-english"
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.bfloat16 if device == "cuda" else torch.float32

processor = LightOnOcrProcessor.from_pretrained(model_id)
model = LightOnOcrForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=dtype,
).to(device)

# Load a line-level image (the model expects single text lines; see Limitations).
image = Image.open("your_image.jpg").convert("RGB")

# Build the chat prompt with a single image placeholder.
messages = [{"role": "user", "content": [{"type": "image"}]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = processor(
    text=[text],
    images=[[image]],
    return_tensors="pt",
    padding=True,
    size={"longest_edge": 700},  # resize so the longest side is 700 px
).to(device)
inputs["pixel_values"] = inputs["pixel_values"].to(dtype)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Strip the prompt tokens and decode only the newly generated text.
input_length = inputs["input_ids"].shape[1]
generated_ids = outputs[0, input_length:]
transcription = processor.decode(generated_ids, skip_special_tokens=True)
print(transcription)
```
### Batch Inference
```python
from datasets import load_dataset

# Reuses model, processor, device, and dtype from the snippet above.
dataset = load_dataset("wjbmattingly/si-test", split="train[:10]")
images = [[img.convert("RGB")] for img in dataset["image"]]

# The same single-image prompt is repeated for every example in the batch.
messages = [{"role": "user", "content": [{"type": "image"}]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
texts = [text] * len(images)

inputs = processor(
    text=texts,
    images=images,
    return_tensors="pt",
    padding=True,
    size={"longest_edge": 700},
).to(device)
inputs["pixel_values"] = inputs["pixel_values"].to(dtype)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the generated continuation for each batch element.
predictions = processor.batch_decode(
    outputs[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)

for pred, gt in zip(predictions, dataset["text"]):
    print(f"Prediction: {pred}")
    print(f"Ground Truth: {gt}")
    print()
```
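To reproduce something like the ✓ = exact match check from the table above (an assumption about how matches were counted), you can compare whitespace-normalized strings:

```python
# Assumes `predictions` and `dataset` from the batch-inference snippet above.
exact = sum(p.strip() == g.strip() for p, g in zip(predictions, dataset["text"]))
print(f"Exact matches: {exact}/{len(predictions)} ({exact / len(predictions):.1%})")
```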
## Training Details
- Base Model: lightonai/LightOnOCR-2-1B-base
- Training Method: Fine-tuning with frozen language model backbone
- Optimizer: AdamW (fused)
- Learning Rate: 6e-5 with linear decay
- Precision: bfloat16
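As a rough illustration of this recipe, here is a minimal sketch, not the actual training script; the `language_model` attribute path, the warmup, and the step count are assumptions:

```python
import torch
from transformers import LightOnOcrForConditionalGeneration, get_linear_schedule_with_warmup

model = LightOnOcrForConditionalGeneration.from_pretrained(
    "lightonai/LightOnOCR-2-1B-base", torch_dtype=torch.bfloat16
)

# Freeze the language-model backbone so only the remaining parameters train.
# NOTE: the exact attribute path is an assumption about the model layout.
for param in model.model.language_model.parameters():
    param.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=6e-5, fused=True)  # fused requires CUDA
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,          # illustrative
    num_training_steps=10_000,   # illustrative
)
```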
## Limitations

- This model is trained on line-level images. For full-page transcription, you first need to segment the page into individual lines; a minimal segmentation sketch follows this list.
- Performance may vary on document styles not represented in the training data.
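Any layout-analysis step that produces line crops will do; real pipelines typically use a dedicated tool such as Kraken. For illustration only, a naive horizontal-projection segmenter might look like:

```python
# Naive line segmentation via row-wise ink density (illustrative only).
import numpy as np
from PIL import Image

def segment_lines(page: Image.Image, threshold: float = 0.05) -> list[Image.Image]:
    """Split a page image into line crops using a horizontal projection profile."""
    gray = np.asarray(page.convert("L"), dtype=np.float32) / 255.0
    ink = 1.0 - gray                       # dark pixels -> high values
    profile = ink.mean(axis=1)             # average ink per pixel row
    is_text = profile > threshold * profile.max()

    lines, start = [], None
    for y, on in enumerate(is_text):
        if on and start is None:
            start = y                      # a text band begins
        elif not on and start is not None:
            if y - start > 5:              # skip specks a few pixels tall
                lines.append(page.crop((0, start, page.width, y)))
            start = None
    if start is not None:                  # band running to the page bottom
        lines.append(page.crop((0, start, page.width, len(is_text))))
    return lines
```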
## Citation

If you use this model, please cite:

```bibtex
@misc{lightonocr2_finetuned_2026,
  title        = {LightOnOCR Fine-tuned for English},
  author       = {William Mattingly},
  year         = {2026},
  howpublished = {\url{https://huggingface.co/wjbmattingly/LightOnOCR-2-1B-cultural-heritage-english}}
}
```

And the original LightOnOCR paper:

```bibtex
@misc{lightonocr2_2026,
  title        = {LightOnOCR: A 1B End-to-End Multilingual Vision-Language Model for State-of-the-Art OCR},
  author       = {Said Taghadouini and Adrien Cavaill\`{e}s and Baptiste Aubertin},
  year         = {2026},
  howpublished = {\url{https://arxiv.org/pdf/2601.14251}}
}
```
## Acknowledgments