# Nemotron-3-Nano-4B - TurboQuant MLX 4-bit
4-bit weight-quantized MLX version of nvidia/NVIDIA-Nemotron-3-Nano-4B-BF16 with TurboQuant KV-cache quantization, optimized for Apple Silicon inference via the MLX framework. The 4-bit weights strike a good balance between model quality and memory efficiency, and the dense hybrid Mamba-2 + Attention architecture supports context lengths up to 262K tokens.

Model size: ~2.3 GB
## Model Specifications
| Property | Value |
|---|---|
| Base Model | nvidia/NVIDIA-Nemotron-3-Nano-4B-BF16 |
| Parameters | 4 billion (dense) |
| Architecture | Hybrid Mamba-2 + Attention (dense) |
| Context Length | 262,144 tokens (262K) |
| License | NVIDIA Open Model License (commercial use OK) |
| Weight Quantization | 4-bit (~2.3 GB) |
| KV-Cache Quantization | TurboQuant |
| Framework | MLX (Apple Silicon) |
## Quickstart

```python
from mlx_lm import load, generate
from turboquant import TurboQuantCache

model, tokenizer = load("majentik/Nemotron-3-Nano-4B-TurboQuant-MLX-4bit")

prompt = "Explain the theory of relativity."
response = generate(model, tokenizer, prompt=prompt, max_tokens=512)
print(response)
```
## What is TurboQuant?
TurboQuant (arXiv: 2504.19874) is a KV-cache quantization technique that compresses the key-value cache used during autoregressive generation. Combined with 4-bit weight quantization in MLX, this provides a dual compression strategy: smaller model weights plus compressed KV cache for efficient long-context generation.
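The basic idea of KV-cache quantization can be sketched with a generic asymmetric 4-bit quantizer over cached key/value vectors. This is a simplified illustration only, not the actual TurboQuant algorithm (which uses more sophisticated transforms; see the paper):

```python
import numpy as np

def quantize_block(x, bits=4):
    """Asymmetric per-vector quantization: map floats to integer codes
    in [0, 2^bits - 1], keeping a scale and minimum for reconstruction."""
    qmax = (1 << bits) - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / qmax if hi > lo else 1.0
    codes = np.round((x - lo) / scale).astype(np.uint8)
    return codes, scale, lo

def dequantize_block(codes, scale, lo):
    """Reconstruct approximate floats from codes."""
    return codes.astype(np.float32) * scale + lo

rng = np.random.default_rng(0)
key_vec = rng.standard_normal(128).astype(np.float32)  # one cached key vector
codes, scale, lo = quantize_block(key_vec)
recon = dequantize_block(codes, scale, lo)
max_err = float(np.abs(key_vec - recon).max())
# Each 4-bit code packs into half a byte, versus 2 bytes per BF16 element,
# so the cache shrinks roughly 4x; rounding error is bounded by scale / 2.
```

Each generated token appends quantized K/V blocks instead of full-precision ones, which is why memory savings grow with context length.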
Key benefits:
- No weight modification -- TurboQuant itself leaves model weights untouched (the 4-bit weight quantization in this repo is applied separately by MLX)
- Reduced inference memory -- the KV cache is compressed significantly
- Longer context windows -- fit more tokens in the same unified-memory budget
- Minimal quality loss -- carefully designed quantization preserves generation quality
## KV-Cache Quantization Comparison
| Method | Prefill Speed | Decode Speed | Memory Savings | Reference |
|---|---|---|---|---|
| TurboQuant | 1x (baseline) | 1x (baseline) | High | arXiv: 2504.19874 |
| RotorQuant | 5.3x faster | 28% faster | High | GitHub |
## Memory Estimates (Nemotron-3-Nano-4B)
| Precision | Approximate Size | MLX Variant |
|---|---|---|
| BF16 (original) | ~8 GB | -- |
| 8-bit quantized | ~4 GB | TurboQuant-MLX-8bit |
| 4-bit quantized | ~2.3 GB | This model |
| 2-bit quantized | ~1.2 GB | TurboQuant-MLX-2bit |
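The sizes above follow from simple arithmetic: parameter count × bits per weight, plus overhead for quantization scales/zero-points and any layers kept at higher precision. The 1.15 overhead factor below is an illustrative assumption chosen to roughly match the 4-bit figure, not a published constant:

```python
def quantized_size_gb(n_params, bits, overhead=1.15):
    """Rough on-disk size estimate: n_params weights at `bits` each,
    with a multiplicative overhead for per-group quantization metadata
    and non-quantized layers (the factor is an assumption)."""
    return n_params * bits / 8 / 1e9 * overhead

size_4bit = quantized_size_gb(4e9, 4)  # 4B params at 4 bits -> about 2.3 GB
```

The same formula gives the right order of magnitude for the other rows, though the effective overhead factor varies with bit width.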
## Hardware Requirements
This model requires approximately 2.3 GB of unified memory. Recommended hardware:
- Any Apple Silicon Mac (M1, M2, M3, or M4) with 8 GB+ unified memory
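At long contexts the KV cache, not the weights, can dominate memory, which is where the KV-cache quantization pays off. A rough estimate for the attention layers is sketched below; the layer/head counts are placeholder values, not Nemotron's actual configuration (the hybrid design replaces most attention layers with Mamba-2, which shrinks the cache further):

```python
def kv_cache_bytes(seq_len, n_attn_layers, n_kv_heads, head_dim, bits):
    """KV-cache size: 2 tensors (K and V) per attention layer, per token,
    each of n_kv_heads * head_dim elements at `bits` per element."""
    return 2 * n_attn_layers * n_kv_heads * head_dim * seq_len * bits // 8

# Hypothetical config: 4 attention layers, 8 KV heads, head_dim 128,
# at the full 262,144-token context.
bf16_cache = kv_cache_bytes(262_144, 4, 8, 128, 16)
int4_cache = kv_cache_bytes(262_144, 4, 8, 128, 4)
ratio = bf16_cache / int4_cache  # 4x smaller cache at 4 bits per element
```

Even under these placeholder numbers, a BF16 cache at full context runs to gigabytes, so the 4x reduction matters on 8 GB machines.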
## See Also
- nvidia/NVIDIA-Nemotron-3-Nano-4B-BF16 -- Base model
- majentik/Nemotron-3-Nano-4B-TurboQuant -- TurboQuant KV-cache only (transformers)
- majentik/Nemotron-3-Nano-4B-TurboQuant-MLX-8bit -- MLX 8-bit variant
- majentik/Nemotron-3-Nano-4B-TurboQuant-MLX-2bit -- MLX 2-bit variant
- majentik/Nemotron-3-Nano-4B-RotorQuant-MLX-4bit -- RotorQuant MLX 4-bit variant
- TurboQuant Paper (arXiv: 2504.19874)
- MLX Framework