TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate (arXiv:2504.19874)
4-bit weight-quantized MLX version of google/gemma-4-31B with TurboQuant KV-cache quantization. Optimized for Apple Silicon inference via the MLX framework.
Approximate model size: ~17 GB
| Property | Value |
|---|---|
| Base Model | google/gemma-4-31B |
| Parameters | 31 billion (dense transformer) |
| Architecture | Dense transformer (not MoE) |
| Modality | Multimodal: image + text input, text output |
| License | Apache 2.0 |
| Weight Quantization | 4-bit (~17 GB) |
| KV-Cache Quantization | TurboQuant |
| Framework | MLX (Apple Silicon) |
```python
from mlx_lm import load, generate

# Load the 4-bit quantized model and its tokenizer
model, tokenizer = load("majentik/gemma-4-31B-TurboQuant-MLX-4bit")

prompt = "Write a short overview of the MLX framework."
response = generate(model, tokenizer, prompt=prompt, max_tokens=512)
print(response)
```
For multimodal usage with images:
```python
from mlx_vlm import load, generate

# mlx_vlm returns a processor (tokenizer + image preprocessor) instead of a plain tokenizer
model, processor = load("majentik/gemma-4-31B-TurboQuant-MLX-4bit")

prompt = "What do you see in this image?"
output = generate(model, processor, prompt=prompt, image="path/to/image.jpg", max_tokens=512)
print(output)
```
TurboQuant (arXiv:2504.19874) is a KV-cache quantization technique that compresses the key-value cache used during autoregressive generation. Combined with 4-bit weight quantization in MLX, this gives a dual compression strategy: quantized weights shrink the disk and memory footprint, while the compressed KV cache keeps long-context generation memory-efficient.
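To make the idea concrete, here is a minimal sketch of generic per-group 4-bit round-to-nearest quantization applied to a mock KV-cache tensor. This is an illustration of KV-cache quantization in general, not the TurboQuant algorithm itself, which achieves lower distortion than plain round-to-nearest (see the paper for details):

```python
import numpy as np

def quantize_4bit(x, group_size=32):
    """Per-group 4-bit round-to-nearest quantization (illustrative only).

    Each group of `group_size` values shares one fp scale, so the
    effective cost is ~4 bits/value plus a small per-group overhead.
    """
    groups = x.reshape(-1, group_size)
    # Signed 4-bit integers span [-8, 7]; scale so the largest magnitude maps to 7
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7
    scale[scale == 0] = 1.0
    q = np.clip(np.round(groups / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale, shape):
    """Reconstruct an approximate fp tensor from codes and scales."""
    return (q * scale).reshape(shape)

# Mock KV-cache slice: (heads, seq_len, head_dim)
kv = np.random.randn(8, 128, 64).astype(np.float32)
q, scale = quantize_4bit(kv)
kv_hat = dequantize(q, scale, kv.shape)
err = np.abs(kv - kv_hat).max()  # bounded by half the largest group scale
```

Round-to-nearest keeps the per-element error below half a scale step; TurboQuant's contribution is driving distortion near the information-theoretic optimum while remaining an online (single-pass) method, which is what the KV cache requires during generation.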
| Method | Prefill Speed | Decode Speed | Memory Savings | Reference |
|---|---|---|---|---|
| TurboQuant | Baseline | Baseline | High | arXiv: 2504.19874 |
| RotorQuant | 5.3x faster | 28% faster | High | GitHub |
| Precision | Approximate Size | MLX Variant |
|---|---|---|
| FP16 (original) | ~62 GB | -- |
| 8-bit quantized | ~31 GB | TurboQuant-MLX-8bit |
| 4-bit quantized | ~17 GB | This model |
| 2-bit quantized | ~9 GB | TurboQuant-MLX-2bit |
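The sizes in the table above follow from the parameter count times bits per weight, plus a small overhead for quantization metadata (group-wise scales). A back-of-envelope check, assuming roughly 0.5 extra bits per weight of overhead, lands within a couple of GB of the table (actual figures depend on group size and which tensors are kept in higher precision):

```python
def approx_size_gb(n_params, bits_per_weight, overhead_bits=0.0):
    """Approximate size in GB: params * (weight bits + metadata bits) / 8 bytes."""
    return n_params * (bits_per_weight + overhead_bits) / 8 / 1e9

N = 31e9  # 31 billion parameters
print(f"FP16: ~{approx_size_gb(N, 16):.0f} GB")  # ~62 GB
for bits in (8, 4, 2):
    # assume ~0.5 bits/weight for group-wise scales (varies in practice)
    print(f"{bits}-bit: ~{approx_size_gb(N, bits, 0.5):.0f} GB")
```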
This model requires approximately 17 GB of unified memory for the weights alone. An Apple Silicon Mac with at least 24 GB of unified memory is recommended, leaving headroom for the KV cache, activations, and the operating system.