Nemotron-3-Nano-4B - TurboQuant MLX 4-bit

4-bit weight-quantized MLX version of nvidia/NVIDIA-Nemotron-3-Nano-4B-BF16 with TurboQuant KV-cache quantization, optimized for Apple Silicon inference via the MLX framework. It offers a good balance between model quality and memory efficiency. The dense hybrid Mamba-2 + Attention architecture supports context lengths up to 262,144 tokens (262K).

Approximate model size: ~2.3 GB

Model Specifications

| Property | Value |
|---|---|
| Base Model | nvidia/NVIDIA-Nemotron-3-Nano-4B-BF16 |
| Parameters | 4 billion (dense) |
| Architecture | Hybrid Mamba-2 + Attention (dense) |
| Context Length | 262,144 tokens (262K) |
| License | NVIDIA Open Model License (commercial use permitted) |
| Weight Quantization | 4-bit (~2.3 GB) |
| KV-Cache Quantization | TurboQuant |
| Framework | MLX (Apple Silicon) |

Quickstart

```python
from mlx_lm import load, generate
from turboquant import TurboQuantCache  # quantized KV cache; not required for this minimal example

model, tokenizer = load("majentik/Nemotron-3-Nano-4B-TurboQuant-MLX-4bit")

prompt = "Explain the theory of relativity."
response = generate(model, tokenizer, prompt=prompt, max_tokens=512)
print(response)
```

What is TurboQuant?

TurboQuant (arXiv: 2504.19874) is a KV-cache quantization technique that compresses the key-value cache used during autoregressive generation. Combined with 4-bit weight quantization in MLX, this provides a dual compression strategy: smaller model weights plus compressed KV cache for efficient long-context generation.

Key benefits:

  • No weight modification -- model weights stay at original precision
  • Reduced inference memory -- KV cache is compressed significantly
  • Longer context windows -- fit more tokens in the same GPU memory
  • Minimal quality loss -- carefully designed quantization preserves generation quality
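To make the KV-cache compression idea concrete, here is a minimal sketch of per-group asymmetric 4-bit quantization applied to a cache-shaped tensor. This is illustrative only: the actual TurboQuant scheme (arXiv: 2504.19874) uses its own transforms, and all function names below are hypothetical.

```python
import numpy as np

def quantize_kv_4bit(x, group_size=32):
    """Illustrative per-group asymmetric 4-bit quantization (NOT the TurboQuant algorithm)."""
    flat = x.reshape(-1, group_size)
    lo = flat.min(axis=1, keepdims=True)          # per-group zero point
    hi = flat.max(axis=1, keepdims=True)
    scale = (hi - lo) / 15.0                      # 4 bits -> 16 levels (0..15)
    scale = np.where(scale == 0, 1.0, scale)      # avoid divide-by-zero on constant groups
    q = np.clip(np.round((flat - lo) / scale), 0, 15).astype(np.uint8)
    return q, scale, lo

def dequantize_kv_4bit(q, scale, lo, shape):
    """Reconstruct an approximation of the original cache tensor."""
    return (q.astype(np.float32) * scale + lo).reshape(shape)

# Fake KV cache: (layers, heads, seq_len, head_dim)
kv = np.random.randn(2, 8, 128, 64).astype(np.float32)
q, s, z = quantize_kv_4bit(kv)
recon = dequantize_kv_4bit(q, s, z, kv.shape)
err = np.abs(kv - recon).max()
```

Each group stores 32 4-bit codes plus one scale and one zero point, cutting cache memory to roughly a quarter of FP16 while keeping the reconstruction error bounded by half a quantization step per group.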

KV-Cache Quantization Comparison

| Method | Prefill Speed | Decode Speed | Memory Savings | Reference |
|---|---|---|---|---|
| TurboQuant | 1x (baseline) | 1x (baseline) | High | arXiv: 2504.19874 |
| RotorQuant | 5.3x faster | 28% faster | High | GitHub |

Memory Estimates (Nemotron-3-Nano-4B)

| Precision | Approximate Size | MLX Variant |
|---|---|---|
| BF16 (original) | ~8 GB | -- |
| 8-bit quantized | ~4 GB | TurboQuant-MLX-8bit |
| 4-bit quantized | ~2.3 GB | This model |
| 2-bit quantized | ~1.2 GB | TurboQuant-MLX-2bit |
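These sizes follow directly from the parameter count. A rough back-of-envelope check (raw weights only; per-group quantization scales and zero points add the remaining few hundred MB, which is why 4-bit lands at ~2.3 GB rather than 2.0 GB):

```python
# Raw weight storage for a 4-billion-parameter model at each precision.
params = 4e9
bytes_per_param = {"BF16": 2.0, "8-bit": 1.0, "4-bit": 0.5, "2-bit": 0.25}

for name, b in bytes_per_param.items():
    gb = params * b / 1e9
    print(f"{name}: ~{gb:.1f} GB of weights (before quantization metadata)")
```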

Hardware Requirements

This model requires approximately 2.3 GB of unified memory. Recommended hardware:

  • Apple M1 (8 GB+)
  • Apple M2 (8 GB+)
  • Apple M3 (8 GB+)
  • Apple M4 (8 GB+)
  • Any Apple Silicon Mac with 8 GB+ unified memory
