ExLlamaV3 quantizations of Devstral-2-123B-Instruct-2512 with tensor-level (L3) optimization. Maximum effort was applied to produce the best possible quantizations, trading time and compute for quality.

Using the measurement.json file and the base quants provided here, anyone can produce additional highly-optimized quantizations at any reasonable bpw in seconds. All work was done with ExLlamaV3 v0.0.18.

## Optimized

VRAM-targeted quants built with exl3's measure.py → optimize.py pipeline. The target column gives the VRAM pool and context length each quant is sized to fit.
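Conceptually, the optimizer treats per-tensor bitrate as a budget-allocation problem: given a measured quantization error for each tensor at each candidate bpw, spend a global bit budget where it reduces error most. The greedy sketch below is illustrative only; the measurement.json layout and the function name are assumptions for the example, not ExLlamaV3's actual implementation.

```python
import json

def optimize_bpw(measurement_path: str, target_bpw: float) -> dict[str, float]:
    """Greedy bit-allocation sketch (NOT ExLlamaV3's real optimizer).

    Assumes a hypothetical measurement.json layout:
      {"tensors": {name: {"numel": int,
                          "options": [[bpw, error], ...]}}}  # sorted by bpw
    """
    with open(measurement_path) as f:
        tensors = json.load(f)["tensors"]

    total_numel = sum(t["numel"] for t in tensors.values())
    budget = target_bpw * total_numel              # total bits we may spend

    choice = {name: 0 for name in tensors}         # start at each tensor's cheapest option
    spent = sum(t["options"][0][0] * t["numel"] for t in tensors.values())

    while True:
        best_name, best_ratio, best_cost = None, 0.0, 0.0
        for name, t in tensors.items():
            i = choice[name]
            if i + 1 == len(t["options"]):
                continue                           # already at this tensor's max bpw
            (b0, e0), (b1, e1) = t["options"][i], t["options"][i + 1]
            cost = (b1 - b0) * t["numel"]          # extra bits for the upgrade
            if spent + cost > budget:
                continue                           # would blow the bpw budget
            ratio = (e0 - e1) / cost               # error reduction per extra bit
            if ratio > best_ratio:
                best_name, best_ratio, best_cost = name, ratio, cost
        if best_name is None:
            break  # no affordable upgrade still reduces error: the ceiling
        choice[best_name] += 1
        spent += best_cost

    return {name: tensors[name]["options"][i][0] for name, i in choice.items()}
```

Under this framing, asking for a budget above the ceiling changes nothing: once no remaining upgrade reduces error, the loop exits, which matches the 6.75bpw request collapsing to 5.70bpw noted below.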

| Quant | Size | bpw | Target (VRAM @ context) |
|---|---|---|---|
| 3.20bpw-h6-opt | 50 GB | 3.20 | 72 GB @ 256k |
| 3.90bpw-h6-opt | 60 GB | 3.90 | 72 GB @ 128k |
| 4.75bpw-h6-opt | 72 GB | 4.75 | 96 GB @ 256k |
| 5.45bpw-h6-opt | 82 GB | 5.45 | 96 GB @ 128k |
| 5.70bpw-h6-opt | 85 GB | 5.70 | 128 GB @ 256k |

The 5.70bpw quant hit the optimization ceiling: requesting 6.75bpw produced 5.70bpw output, indicating no further beneficial tensor swaps were available.
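As a rough sanity check on the listed sizes, quantized weight size follows from parameter count × bpw / 8. A short sketch, assuming a nominal 123e9 parameters (the exact count is not stated here); the few-GB residuals reflect overhead such as the 6-bit output head implied by the h6 suffix, embeddings, and per-file overhead.

```python
def quant_size_gb(n_params: float, bpw: float) -> float:
    """Approximate quantized weight size: params * bpw bits -> bytes -> decimal GB."""
    return n_params * bpw / 8 / 1e9

# Nominal 123e9 parameters at the table's bitrates, vs. the listed sizes.
for bpw in (3.20, 3.90, 4.75, 5.45, 5.70):
    print(f"{bpw:.2f} bpw ≈ {quant_size_gb(123e9, bpw):.1f} GB")
# 3.20 bpw ≈ 49.2 GB (listed: 50 GB) ... 5.70 bpw ≈ 87.6 GB (listed: 85 GB)
```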

## Base

| Quant | Size | bpw |
|---|---|---|
| 3.0bpw-h6 | 47 GB | 3.0 |
| 4.0bpw-h6 | 61 GB | 4.0 |
| 5.0bpw-h6 | 76 GB | 5.0 |
| 6.0bpw-h6 | 90 GB | 6.0 |
| 7.0bpw-h6 | 104 GB | 7.0 |