ExLlamaV3 quantizations of Devstral-2-123B-Instruct-2512 with tensor-level (L3) optimization. Maximum effort was applied to achieve the best possible quantizations, at the expense of time and compute.
Using the included measurement.json file and the base quants provided, anyone can produce additional highly optimized quantizations at any reasonable bpw in seconds. All work was done with ExLlamaV3 v0.0.18.
Optimized
VRAM-targeted quants using exl3's measure.py → optimize.py pipeline.
| Quant | Size | Target bpw | VRAM fit (context) |
|---|---|---|---|
| 3.20bpw-h6-opt | 50 GB | 3.20 | 72GB @ 256k |
| 3.90bpw-h6-opt | 60 GB | 3.90 | 72GB @ 128k |
| 4.75bpw-h6-opt | 72 GB | 4.75 | 96GB @ 256k |
| 5.45bpw-h6-opt | 82 GB | 5.45 | 96GB @ 128k |
| 5.70bpw-h6-opt | 85 GB | 5.70 | 128GB @ 256k |
The 5.70bpw quant hit the optimization ceiling: requesting 6.75bpw still produced 5.70bpw output, indicating that no further beneficial tensor swaps were available.
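As a rough sanity check on the sizes in the table, a quantized checkpoint scales approximately linearly with bpw: about parameter_count × bpw / 8 bytes, before accounting for the higher-precision head layer, unquantized tensors, and metadata. A minimal sketch (the 123B parameter count is assumed from the model name; listed sizes can differ by a few GB):

```python
def est_size_gb(params_b: float, bpw: float) -> float:
    """Rough quantized checkpoint size in decimal GB:
    parameters * bits-per-weight / 8 bits-per-byte.
    Ignores head-layer precision and metadata overhead."""
    return params_b * 1e9 * bpw / 8 / 1e9

# Target bpw values from the table above (123B parameters assumed)
for bpw in (3.20, 3.90, 4.75, 5.45, 5.70):
    print(f"{bpw:.2f} bpw ~ {est_size_gb(123, bpw):.0f} GB")
```

The estimates land within a few GB of the listed sizes; the gap grows at higher bpw, where the optimizer spends fewer bits on less sensitive tensors than the flat estimate assumes.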
Base
Model tree for amanwalksdownthestreet/Devstral-2-123B-Instruct-2512-exl3
Base model: mistralai/Devstral-2-123B-Instruct-2512