---
base_model: Qwen/Qwen3.6-35B-A3B
library_name: mlx
pipeline_tag: text-generation
license: apache-2.0
tags:
- mlx
- omlx
- oq
- oq2
- quantized
---
# Qwen3.6-35B-A3B-oQ2
An oQ2 mixed-precision MLX quantization of Qwen/Qwen3.6-35B-A3B, produced with oMLX.
- Quantization: oQ2 (sensitivity-driven mixed precision, group_size=64; see the sketch after this list)
- Format: MLX safetensors
- Compatible with: mlx-lm, mlx-vlm, oMLX on Apple Silicon
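For intuition about `group_size`, here is a minimal sketch of group-wise affine quantization, assuming the common scale-and-bias scheme MLX uses. It is illustrative only, not oMLX's implementation, and every name in it is made up:

```python
import numpy as np

# Minimal sketch of group-wise affine quantization (illustrative only,
# not the oMLX implementation). With group_size=64, every group of 64
# consecutive weights shares one scale and one bias.
def quantize_group(w: np.ndarray, bits: int = 4):
    qmax = 2**bits - 1
    scale = (w.max() - w.min()) / qmax  # step size for this group
    bias = w.min()                      # offset so codes start at 0
    q = np.round((w - bias) / scale).astype(np.uint8)
    return q, scale, bias

def dequantize_group(q: np.ndarray, scale: float, bias: float) -> np.ndarray:
    return q * scale + bias

# Quantize one group of 64 weights at 4 bits and check the round-trip error.
w = np.random.randn(64).astype(np.float32)
q, s, b = quantize_group(w, bits=4)
print("max abs error:", np.abs(dequantize_group(q, s, b) - w).max())
```

Smaller groups track local weight statistics more closely but store more scales and biases; 64 is a common default in MLX tooling.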
## Usage
```python
from mlx_lm import load, generate

model, tokenizer = load("bearzi/Qwen3.6-35B-A3B-oQ2")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello"}],
    add_generation_prompt=True,
)

print(generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True))
```
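For incremental output, recent mlx-lm releases also provide `stream_generate`; the `response.text` field below follows the current API and may differ in older versions:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("bearzi/Qwen3.6-35B-A3B-oQ2")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello"}],
    add_generation_prompt=True,
)

# Print tokens as they arrive instead of waiting for the full completion.
for response in stream_generate(model, tokenizer, prompt, max_tokens=512):
    print(response.text, end="", flush=True)
print()
```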
## About oQ
oQ measures per-layer quantization sensitivity during calibration and allocates bits where they matter most: critical layers stay at higher precision while tolerant layers are compressed aggressively. Variants targeting average widths of 2, 3, 4, 6, and 8 bits are available; the actual per-layer bit width varies with measured sensitivity. See the oQ documentation for details.
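As a rough illustration of that idea, a sensitivity-driven allocator could promote the most sensitive layers first while holding the average bit width at the target. This sketch is hypothetical: the real oQ algorithm is not described in this card, and every name and the greedy strategy below are invented for illustration.

```python
from typing import Dict, Tuple

# Hypothetical sketch of sensitivity-driven bit allocation (not oQ's
# actual algorithm): spend precision on sensitive layers while keeping
# the average bit width at or below a target.
def allocate_bits(
    sensitivity: Dict[str, float],           # per-layer scores from calibration
    target_avg_bits: float = 4.0,            # e.g. oQ2 targets ~2, oQ4 ~4
    choices: Tuple[int, ...] = (2, 3, 4, 6, 8),
) -> Dict[str, int]:
    # Start every layer at the lowest precision.
    bits = {name: choices[0] for name in sensitivity}
    # Promote layers from most to least sensitive while the budget allows.
    for name in sorted(sensitivity, key=sensitivity.get, reverse=True):
        for b in choices[1:]:
            trial_avg = (sum(bits.values()) - bits[name] + b) / len(bits)
            if trial_avg <= target_avg_bits:
                bits[name] = b
            else:
                break
    return bits

# Toy example: attention projections measured as more sensitive than MLP.
scores = {"attn.q_proj": 0.9, "attn.k_proj": 0.7, "mlp.gate": 0.2, "mlp.down": 0.1}
print(allocate_bits(scores, target_avg_bits=4.0))
```

Calibration supplies the sensitivity scores; the allocator only decides where to spend the fixed bit budget.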
Comparative benchmarks and feedback are welcome; please open a discussion.