# Nemotron-3-Nano-30B-A3B-RotorQuant-GGUF-Q5_K_M

A GGUF Q5_K_M weight-quantized variant of `nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16`, optimised for RotorQuant KV cache compression via a dedicated llama.cpp fork.

**Important:** RotorQuant KV cache types (`planar3`, `iso3`) are not available in upstream llama.cpp, standard Ollama, or LM Studio; they require a specific llama.cpp fork. The GGUF file itself is a standard GGUF and works with any llama.cpp-compatible runtime using normal KV cache types (`f16`, `q8_0`, `q4_0`, etc.).

## Overview

This model combines two independent compression techniques:

| Technique | What it does | Requirement |
|---|---|---|
| GGUF Q5_K_M weight quantization | Reduces model size from ~60 GB (BF16) to ~20.4 GB | Any llama.cpp-compatible runtime |
| RotorQuant KV cache compression | Compresses the KV cache to 3 bits using block-diagonal Clifford-algebra rotors (`--cache-type-k iso3 --cache-type-v iso3`) | llama-cpp-turboquant fork only |
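A quick sanity check on the figures in the table above (illustrative arithmetic only, using the ~20.4 GB file size and 30.7B-parameter count from the specifications below):

```python
# Back-of-the-envelope check of the Q5_K_M size reduction.
# Figures taken from this card: 30.7B total parameters, ~20.4 GB file.
params = 30.7e9           # all parameters are stored (MoE: every expert)
file_bytes = 20.4e9       # Q5_K_M GGUF file size
bf16_bytes = params * 2   # BF16 spends 2 bytes per weight, ~61.4 GB

effective_bpw = file_bytes * 8 / params   # effective bits per weight
ratio = bf16_bytes / file_bytes           # shrink factor vs BF16

print(f"effective bits/weight: {effective_bpw:.2f}")   # ~5.3
print(f"compression vs BF16:   {ratio:.1f}x")          # ~3.0x
```

The effective rate lands slightly above 5 bits per weight because K-quant files also carry block scales and keep some tensors at higher precision.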

## Quickstart

### Option A: With RotorQuant KV cache (fork required)

You must build from the RotorQuant-enabled llama.cpp fork:

```bash
# Clone and build the fork
git clone https://github.com/johndpope/llama-cpp-turboquant.git
cd llama-cpp-turboquant && git checkout feature/planarquant-kv-cache

# CUDA (Windows/Linux)
cmake -B build -DGGML_CUDA=ON -DCMAKE_BUILD_TYPE=Release && cmake --build build -j

# Metal (Apple Silicon)
cmake -B build -DGGML_METAL=ON -DGGML_METAL_EMBED_LIBRARY=ON -DCMAKE_BUILD_TYPE=Release && cmake --build build -j

# Run with RotorQuant KV cache
./build/bin/llama-cli -m Nemotron-3-Nano-30B-A3B-RotorQuant-GGUF-Q5_K_M.gguf \
  --cache-type-k iso3 --cache-type-v iso3 \
  -ngl 99 -fa \
  -p "Explain quantum computing"

# Or run as a server
./build/bin/llama-server -m Nemotron-3-Nano-30B-A3B-RotorQuant-GGUF-Q5_K_M.gguf \
  --cache-type-k iso3 --cache-type-v iso3 \
  -ngl 99 -fa --jinja
```

### Option B: Standard llama.cpp / LM Studio / Ollama

The GGUF works as a normal quantised model. You won't get RotorQuant-specific KV cache benefits, but standard KV cache quantization (`q8_0`, `q4_0`) still reduces VRAM significantly.

#### llama.cpp (upstream)

```bash
llama-cli -m Nemotron-3-Nano-30B-A3B-RotorQuant-GGUF-Q5_K_M.gguf \
  --cache-type-k q8_0 --cache-type-v q8_0 \
  -ngl 99 -fa \
  -p "Explain quantum computing"
```

#### LM Studio

  1. Download the GGUF file and load in LM Studio.
  2. Enable Developer Mode (Settings → Developer).
  3. In the model loader's advanced settings, set Flash Attention to ON.
  4. Set K Cache Quantization and V Cache Quantization to q8_0 (or q4_0 for more aggressive VRAM savings).
  5. Note: LM Studio does not currently support RotorQuant's iso3 cache types. Track this feature request for updates.

#### Ollama

```bash
# Standard Ollama does not support RotorQuant cache types.
# Use the default or q8_0 KV cache via OLLAMA_KV_CACHE_TYPE=q8_0.
OLLAMA_KV_CACHE_TYPE=q8_0 OLLAMA_FLASH_ATTENTION=1 ollama run majentik/Nemotron-3-Nano-30B-A3B-RotorQuant-GGUF-Q5_K_M
```

## Specifications

| Property | Value |
|---|---|
| Base Model | nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-BF16 |
| Architecture | Mamba-2 + Transformer hybrid sparse MoE |
| Parameters | 30.7B total, 3.2B active per token |
| Context Length | 1M tokens |
| Weight Quantization | GGUF Q5_K_M (high quality, balanced 5-bit) |
| Original Size (BF16) | ~60 GB |
| Quantized File Size | ~20.4 GB |
| KV Cache (RotorQuant) | 3-bit via `--cache-type-k iso3 --cache-type-v iso3` (fork only) |
| KV Cache (standard) | `q8_0`, `q4_0`, `f16`, etc. (any llama.cpp runtime) |
| License | other |
| Modalities | Text only |
| Compatible Runtimes | llama.cpp, LM Studio, Ollama, koboldcpp |

## What is RotorQuant?

RotorQuant is a KV cache compression method based on Clifford algebra (Cl(3,0)) rotors. It was developed as a faster, more parameter-efficient alternative to Google's TurboQuant (ICLR 2026).

Instead of applying a dense d×d random orthogonal rotation (as TurboQuant does, at O(d²) per vector, or O(d log d) with a fast Hadamard transform), RotorQuant uses lightweight block-diagonal rotations: independent 2D/4D rotations per channel pair/quartet. This costs O(d) per vector and is fully parallelisable, with no inter-element dependencies.
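The block-diagonal idea can be sketched in a few lines of NumPy. This is an illustration of the principle only, not the fork's actual kernels: each consecutive pair of channels gets its own independent 2D rotation (one angle per pair), so applying the transform to a d-dimensional vector touches each element exactly once.

```python
import numpy as np

def blockdiag_rotate(x, angles):
    """Rotate each consecutive pair of channels by its own 2D rotor.
    x: (..., d) with d even; angles: (d // 2,), one parameter per pair.
    Illustrative only; not the fork's actual RotorQuant kernels."""
    pairs = x.reshape(*x.shape[:-1], -1, 2)   # (..., d/2, 2)
    c, s = np.cos(angles), np.sin(angles)
    out = np.empty_like(pairs)
    out[..., 0] = c * pairs[..., 0] - s * pairs[..., 1]
    out[..., 1] = s * pairs[..., 0] + c * pairs[..., 1]
    return out.reshape(x.shape)

d = 128                                  # e.g. one attention head's K vector
rng = np.random.default_rng(0)
x = rng.standard_normal(d)
angles = rng.uniform(0, 2 * np.pi, d // 2)

y = blockdiag_rotate(x, angles)
# Orthogonal transform: the vector norm is preserved exactly.
print(np.allclose(np.linalg.norm(x), np.linalg.norm(y)))  # True
```

Because the transform is orthogonal, norms are preserved, which is what lets the subsequent low-bit quantizer control its error without ever materialising a dense d×d matrix; rotating back with `-angles` recovers the original vector.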

Benchmarks from the RotorQuant repository (Llama 3.1 8B, RTX 5090; results will vary by model and hardware):

| Metric | RotorQuant (iso3) | TurboQuant | Standard q4_0 |
|---|---|---|---|
| Prefill Speed | 3,822 tok/s | 722 tok/s | — |
| Decode Speed | 119 tok/s | 93 tok/s | — |
| Perplexity (PPL) | 6.91 | 7.07 | — |
| KV Compression | ~5× vs FP16 | ~5× vs FP16 | ~4× vs FP16 |
| Rotation Parameters | 4 per rotor | 16,384 per matrix | N/A |

Note: These figures were measured on Llama 3.1 8B with an RTX 5090; performance on Nemotron-3-Nano-30B-A3B will differ. Independent benchmarks for this specific model are welcome; please open a discussion if you have results to share.
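The parameter counts in the table can be cross-checked directly: 16,384 = 128², which implies a head dimension of 128 (an assumption, but consistent with the repository's Llama 3.1 8B benchmark setup):

```python
# Cross-check of the "Rotation Parameters" row (assumed head_dim d = 128).
d = 128
dense_params = d * d        # dense random orthogonal matrix, TurboQuant-style
n_rotors = d // 4           # one rotor per 4D quartet of channels
rotor_params = 4            # a Cl(3,0) rotor: scalar + 3 bivector components
blockdiag_params = n_rotors * rotor_params

print(dense_params)         # 16384 ("16,384 per matrix")
print(blockdiag_params)     # 128 in total, at "4 per rotor"
```

So at this head size the block-diagonal parameterisation stores two orders of magnitude fewer rotation parameters than a dense matrix.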

## Current Status of RotorQuant in the Ecosystem

| Runtime | RotorQuant Support | Standard KV Quant |
|---|---|---|
| llama.cpp (upstream) | ❌ Not merged | ✅ q8_0, q4_0, q4_1, iq4_nl, q5_0, q5_1 |
| llama-cpp-turboquant fork | ✅ planar3, iso3 | ✅ All standard types |
| LM Studio | Requested | ✅ Via advanced settings |
| Ollama | ❌ Not supported | ✅ Via OLLAMA_KV_CACHE_TYPE |
| koboldcpp | ❌ Not supported | ✅ Standard types |

## Recommended Settings

For VRAM-constrained setups, standard `q8_0` KV cache quantization already roughly halves KV cache memory with negligible quality impact. Flash Attention should always be enabled: it is required for V cache quantization and improves memory efficiency regardless.

| VRAM | Suggested Configuration |
|---|---|
| 24 GB (RTX 4090) | Q5_K_M + q8_0 KV cache + Flash Attention, 8K–16K context |
| 16 GB | Q5_K_M + q4_0 KV cache + Flash Attention, 4K–8K context |
| 48+ GB | Q5_K_M + f16 KV cache, full 32K+ context |
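These context-length suggestions follow from simple KV cache arithmetic. A sketch under stated assumptions: the per-element storage costs match llama.cpp's block formats (f16 = 16 bits; q8_0 = 34 bytes per 32 values = 8.5 bits; q4_0 = 18 bytes per 32 values = 4.5 bits), while the layer and head counts below are placeholders only, since the true per-token footprint of this hybrid model depends on how many of its layers are attention layers rather than Mamba-2 blocks.

```python
# Rough KV cache sizing. Per-element costs follow llama.cpp's block
# formats (block scales included); the iso3 figure is the nominal 3 bits.
BITS = {"f16": 16.0, "q8_0": 8.5, "q4_0": 4.5, "iso3": 3.0}

def kv_gib(ctx, n_attn_layers, n_kv_heads, head_dim, cache_type):
    """KV cache size in GiB: K and V for every attention layer and token."""
    total_bits = 2 * n_attn_layers * n_kv_heads * head_dim * ctx * BITS[cache_type]
    return total_bits / 8 / 2**30

# Placeholder dimensions for illustration; a hybrid Mamba-2 model keeps
# far fewer attention layers than a pure transformer of the same depth.
for ct in BITS:
    size = kv_gib(ctx=16384, n_attn_layers=6, n_kv_heads=8, head_dim=128, cache_type=ct)
    print(f"{ct:>5}: {size:.3f} GiB")
```

Whatever the exact dimensions, the ratios hold: q8_0 cuts KV memory to roughly 53% of f16 and q4_0 to roughly 28%, which is why smaller VRAM budgets pair with shorter contexts in the table above.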
