---
base_model: Qwen/Qwen3.6-35B-A3B
library_name: mlx
pipeline_tag: text-generation
license: apache-2.0
tags:
- mlx
- omlx
- oq
- oq2
- quantized
---
# Qwen3.6-35B-A3B-oQ2
An oQ2 mixed-precision MLX quantization of Qwen/Qwen3.6-35B-A3B, produced with [oMLX](https://github.com/jundot/omlx).
- **Quantization:** oQ2 (sensitivity-driven mixed precision, group_size=64)
- **Format:** MLX safetensors
- **Compatible with:** mlx-lm, mlx-vlm, oMLX on Apple Silicon
## Usage
```python
from mlx_lm import load, generate

# Download (on first use) and load the quantized model and tokenizer.
model, tokenizer = load("bearzi/Qwen3.6-35B-A3B-oQ2")

# Build a chat-formatted prompt from a single user message.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello"}],
    add_generation_prompt=True,
)

# verbose=True prints the output as it is generated.
response = generate(model, tokenizer, prompt=prompt, max_tokens=512, verbose=True)
```
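If you want the tokens as they are produced rather than one final string, mlx-lm also ships a streaming API. A minimal sketch, assuming a recent mlx-lm release where `stream_generate` yields response objects with a `.text` field:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("bearzi/Qwen3.6-35B-A3B-oQ2")

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello"}],
    add_generation_prompt=True,
)

# Print each chunk as soon as it is generated.
for response in stream_generate(model, tokenizer, prompt, max_tokens=512):
    print(response.text, end="", flush=True)
print()
```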
## About oQ
oQ measures per-layer quantization sensitivity on a calibration set and allocates bits where they matter most: critical layers stay at higher precision, while tolerant layers are compressed aggressively. Target average bit-widths of 2, 3, 4, 6, and 8 are available; the actual per-layer bit-width varies with measured sensitivity.
See the [oQ documentation](https://github.com/jundot/omlx/blob/main/docs/oQ_Quantization.md).
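As a rough illustration of the allocation idea only (a toy sketch, not oMLX's actual algorithm; `allocate_bits` and the greedy policy here are made up for exposition): rank layers by sensitivity and upgrade the most sensitive ones while the average bit-width stays within the target.

```python
# Toy sketch of sensitivity-driven bit allocation -- NOT oMLX's real API.
def allocate_bits(sensitivity, target_avg=3.0, choices=(2, 3, 4, 6, 8)):
    """Greedily upgrade the most sensitive layers until the average
    bit-width would exceed the target. `sensitivity` maps layer name ->
    score (higher = more damage when quantized aggressively)."""
    n = len(sensitivity)
    bits = {name: min(choices) for name in sensitivity}  # start at the floor
    # Visit layers from most to least sensitive.
    for name in sorted(sensitivity, key=sensitivity.get, reverse=True):
        for b in choices:
            if b <= bits[name]:
                continue
            if (sum(bits.values()) - bits[name] + b) / n <= target_avg:
                bits[name] = b  # upgrade while the bit budget allows
            else:
                break
    return bits

# Example with made-up sensitivity scores for four layers.
print(allocate_bits({"q_proj": 0.9, "k_proj": 0.2, "v_proj": 0.7, "o_proj": 0.1}))
```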
Comparative benchmarks and feedback welcome — please open a discussion.