Qwen3.5-27B-oQ
Collection • 5 items
oQ6 mixed-precision MLX quantization produced via oMLX.
Use with mlx-vlm or mlx-lm. Install:
pip install mlx-vlm
python3 -m mlx_vlm generate --model bearzi/Qwen3.5-27B-oQ6 --prompt "Your prompt here" --max-tokens 512
oQ measures per-layer quantization sensitivity through calibration inference and allocates bits where they matter most — critical layers stay at higher precision, tolerant layers compress aggressively. See oMLX docs.
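The idea above can be sketched in a few lines: probe each layer's quantization error at a coarse bit-width, then give the most sensitive layers more bits and the tolerant layers fewer. This is a hypothetical illustration of the general technique, not the actual oMLX/oQ implementation; the layer names, the probe bit-width, and the half-and-half split are invented for the example.

```python
import random

def quantize(values, bits):
    # Uniform symmetric quantization to 2**(bits-1)-1 levels per sign.
    levels = 2 ** (bits - 1) - 1
    scale = max(abs(v) for v in values) / levels
    return [round(v / scale) * scale for v in values]

def sensitivity(values, probe_bits=3):
    # MSE introduced by quantizing at a coarse probe bit-width.
    q = quantize(values, probe_bits)
    return sum((a - b) ** 2 for a, b in zip(values, q)) / len(values)

def allocate_bits(layers, low=4, high=8):
    # Rank layers by sensitivity; the sensitive half stays at high
    # precision, the tolerant half compresses to low precision.
    ranked = sorted(layers, key=lambda name: sensitivity(layers[name]),
                    reverse=True)
    cut = len(ranked) // 2
    return {name: (high if i < cut else low)
            for i, name in enumerate(ranked)}

random.seed(0)
layers = {
    # One heavy-tailed (quantization-sensitive) layer, three smooth ones.
    "attn.0": [random.gauss(0, 1) ** 3 for _ in range(256)],
    "mlp.0": [random.gauss(0, 1) for _ in range(256)],
    "mlp.1": [random.gauss(0, 1) for _ in range(256)],
    "mlp.2": [random.gauss(0, 1) for _ in range(256)],
}
bits = allocate_bits(layers)
print(bits)
```

With a 4-bit floor and an 8-bit ceiling split evenly, the allocation averages 6 bits per layer, matching the oQ6 label; the real method measures sensitivity with calibration inference rather than a synthetic MSE probe.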
Quantization: 6-bit
Base model: Qwen/Qwen3.5-27B