Qwen3.5-27B-oQ6

An oQ6 mixed-precision MLX quantization of Qwen3.5-27B, produced via oMLX.

  • Quantization: oQ6 (sensitivity-driven, group_size=64)
  • Format: MLX safetensors, loadable with mlx-vlm and mlx-lm

Usage

pip install mlx-vlm
python3 -m mlx_vlm generate --model bearzi/Qwen3.5-27B-oQ6 --prompt "Your prompt here" --max-tokens 512

About oQ

oQ measures per-layer quantization sensitivity through calibration inference and allocates bits where they matter most — critical layers stay at higher precision, tolerant layers compress aggressively. See oMLX docs.
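oMLX's exact allocation algorithm isn't shown here, but the idea described above — spend a fixed bit budget on the layers that hurt most when compressed — can be sketched as a simple greedy pass. Everything below (function name, score values, the 4/6/8-bit choices) is illustrative, not oQ's actual implementation:

```python
# Hypothetical sketch of sensitivity-driven bit allocation (not the actual oQ code).
# Given per-layer sensitivity scores from calibration inference (higher score =
# more output damage when that layer is quantized), raise precision for the most
# sensitive layers while keeping the mean bit width at the target (the "6" in oQ6).

def allocate_bits(sensitivity, avg_bits=6.0, choices=(4, 6, 8)):
    """sensitivity: dict layer_name -> score; choices: allowed widths, ascending."""
    n = len(sensitivity)
    budget = avg_bits * n                               # total bits available
    bits = {name: choices[0] for name in sensitivity}   # start everyone at minimum
    spent = choices[0] * n
    # Visit layers from most to least sensitive, bumping each to the next
    # wider format while the budget allows.
    for name in sorted(sensitivity, key=sensitivity.get, reverse=True):
        for b in choices[1:]:
            cost = b - bits[name]
            if spent + cost <= budget:
                bits[name] = b
                spent += cost
            else:
                break
    return bits

# Toy example: two sensitive layers end up at 8-bit, two tolerant ones at 4-bit,
# so the average stays at the 6-bit target.
scores = {"attn.q": 0.9, "attn.k": 0.2, "mlp.up": 0.7, "mlp.down": 0.1}
plan = allocate_bits(scores, avg_bits=6.0)
```

In this toy run the budget is 24 bits across 4 layers; the two most sensitive layers absorb the headroom and the rest stay compressed.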

Base model

Qwen/Qwen3.5-27B