Qwen3.5-27B-oQ3

An oQ3 mixed-precision MLX quantization of Qwen3.5-27B, produced with oMLX.

  • Quantization: oQ3 (sensitivity-driven, group_size=64)
  • Format: MLX safetensors, loadable with mlx-vlm and mlx-lm

Usage

pip install mlx-vlm
python3 -m mlx_vlm generate --model bearzi/Qwen3.5-27B-oQ3 --prompt "Your prompt here" --max-tokens 512

About oQ

oQ measures per-layer quantization sensitivity through calibration inference and allocates bits where they matter most: critical layers stay at higher precision, while tolerant layers compress aggressively. See the oMLX docs.
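To make the idea concrete, here is a minimal sketch of sensitivity-driven bit allocation. The layer names, sensitivity scores, greedy promotion policy, and the `allocate_bits` helper are all illustrative assumptions for this card, not oMLX's actual implementation.

```python
# Hypothetical sketch: assign a bit width per layer from calibration
# sensitivities, keeping the average width within a global budget.
# Names and policy are illustrative, not the real oQ algorithm.

def allocate_bits(sensitivities, budget_bits, choices=(2, 3, 4, 6, 8)):
    """Start every layer at the lowest width, then greedily promote
    the most sensitive layers while the average stays in budget."""
    alloc = {name: choices[0] for name in sensitivities}
    # Visit layers from most to least sensitive.
    for name in sorted(sensitivities, key=sensitivities.get, reverse=True):
        for width in choices[1:]:
            old = alloc[name]
            alloc[name] = width  # tentative promotion
            if sum(alloc.values()) / len(alloc) > budget_bits:
                alloc[name] = old  # over budget: roll back and stop
                break
    return alloc

# Toy calibration scores: higher means more damage when quantized low.
sens = {"embed": 0.9, "attn.0": 0.6, "mlp.0": 0.2, "mlp.1": 0.1}
alloc = allocate_bits(sens, budget_bits=4.0)
```

Under these toy scores the embedding layer ends up at high precision while the tolerant MLP layers stay at 2-bit, with the average width at or below the 4-bit budget.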


Base model: Qwen/Qwen3.5-27B