Use with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

# Load a quant directly from the Hub; the filename pattern below is an
# example -- pick whichever quant file you want to run.
llm = Llama.from_pretrained(
	repo_id="noctrex/Chandra-OCR-GGUF",
	filename="*Q8_0.gguf",
)

# Messages use the OpenAI-style chat format (a list of role/content
# dicts), not a bare string.
llm.create_chat_completion(
	messages=[
		{"role": "user", "content": "Transcribe the text in cats.jpg."}
	]
)
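Note that this snippet alone only sends text: running OCR on an actual image requires loading the repo's mmproj file through a multimodal chat handler, as sketched below the quality notes.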

These are quantizations of the model Chandra-OCR

Original model: https://huggingface.co/datalab-to/chandra

Try to use the best-quality quantization you can run.
For the mmproj, use the F32 version if you can, as it produces the best results: F32 > BF16 > F16. A loading sketch follows below.
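As an illustration of the mmproj advice, here is a minimal sketch that downloads a quant together with the F32 mmproj and wires both into llama-cpp-python. The GGUF filenames are hypothetical (check the repo's file list for the exact names), and Llava15ChatHandler merely stands in for whatever multimodal chat handler your llama-cpp-python version offers for the qwen3vl architecture.

from huggingface_hub import hf_hub_download
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# Hypothetical filenames -- check the repo's file list for the real ones.
model_path = hf_hub_download("noctrex/Chandra-OCR-GGUF", "Chandra-OCR-Q8_0.gguf")
mmproj_path = hf_hub_download("noctrex/Chandra-OCR-GGUF", "mmproj-F32.gguf")

llm = Llama(
	model_path=model_path,
	# The mmproj (vision projector) is loaded through the chat handler;
	# substitute the handler class appropriate for qwen3vl in your version.
	chat_handler=Llava15ChatHandler(clip_model_path=mmproj_path),
	n_ctx=8192,  # OCR transcripts can be long; raise the context if needed
)

result = llm.create_chat_completion(
	messages=[
		{
			"role": "user",
			"content": [
				# Local files can be passed as file:// URLs; http(s)
				# URLs and base64 data URIs also work.
				{"type": "image_url", "image_url": {"url": "file:///path/to/cats.jpg"}},
				{"type": "text", "text": "Transcribe all text in this image."},
			],
		}
	]
)
print(result["choices"][0]["message"]["content"])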

Format: GGUF
Model size: 8B params
Architecture: qwen3vl

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.

