Qwen3.5-397B-A17B-TurboQuant
TurboQuant KV cache compression for Qwen/Qwen3.5-397B-A17B.
This is a documentation repository that explains how to combine Qwen3.5-397B-A17B's weights with TurboQuant inference-time KV cache compression. No weights are stored here: use the base model directly and apply TurboQuant via the Python package or the llama.cpp fork.
What is this?
KV cache compression reduces the memory used by the attention cache during inference. Unlike weight quantization (which is baked into the GGUF/MLX file), KV cache compression is applied at runtime, so the same base weights can be used with or without compression.
| Technique | Where it's applied | Savings |
|---|---|---|
| Weight quantization (GGUF/MLX/AWQ) | Baked into model file | Reduces disk + weight memory |
| TurboQuant KV cache | At inference time | Reduces attention memory (critical for long context) |
Both can be combined for maximum efficiency.
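As a back-of-envelope illustration of why KV cache compression matters at long context, the sketch below estimates cache size at different bit widths. The layer/head dimensions are hypothetical placeholders for illustration only, not Qwen3.5's actual configuration, and real quantized caches carry some extra overhead for scales:

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bits):
    """Total KV cache size: one K and one V tensor per layer."""
    return 2 * layers * kv_heads * head_dim * seq_len * bits // 8

# Hypothetical dimensions, for illustration only.
layers, kv_heads, head_dim = 60, 8, 128
seq_len = 256_000  # the model's advertised context length

bf16 = kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bits=16)
q4 = kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bits=4)

print(f"bf16 cache:  {bf16 / 1e9:.1f} GB")
print(f"4-bit cache: {q4 / 1e9:.1f} GB")
print(f"savings:     {bf16 // q4}x")  # 4x
```

At a 256K context even this modest hypothetical configuration needs tens of gigabytes for the cache alone, which is why runtime compression stacks usefully on top of weight quantization.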
Quickstart
Option A: Python / transformers
Install the turboquant package:
pip install turboquant
Then use it with the base model:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from turboquant import TurboQuantCache
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3.5-397B-A17B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen3.5-397B-A17B",
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True,
)
# Apply TurboQuant to the KV cache
cache = TurboQuantCache(bits=4) # or bits=2 for more aggressive compression
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(
**inputs,
max_new_tokens=128,
past_key_values=cache,
use_cache=True,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
Option B: llama.cpp / LM Studio / Ollama (with fork)
TurboQuant KV cache types (planar3, iso3) are not in upstream llama.cpp; they require building the llama-cpp-turboquant fork. Once built:
llama-cli -m Qwen3.5-397B-A17B.gguf \
--cache-type-k planar3 --cache-type-v planar3 \
-ngl 99 -fa \
-p "Hello"
For standard runtimes (LM Studio, Ollama, upstream llama.cpp), use conventional KV cache types (q8_0, q4_0). You lose the TurboQuant-specific benefits but keep GGUF weight quantization.
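For example, the same invocation as above would then use a conventional quantized cache type (flags assumed to match the upstream llama.cpp CLI):

```
# Upstream llama.cpp: conventional quantized KV cache, no TurboQuant.
llama-cli -m Qwen3.5-397B-A17B.gguf \
  --cache-type-k q8_0 --cache-type-v q8_0 \
  -ngl 99 -fa \
  -p "Hello"
```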
Model Specifications
| Property | Value |
|---|---|
| Base Model | Qwen/Qwen3.5-397B-A17B |
| Architecture | Sparse MoE (17B active per token) |
| Parameters | 397B total, 17B active (MoE) |
| Context Length | 256K |
| BF16 Size | ~794 GB |
| Modalities | Text + Image (image-text-to-text) |
| License | apache-2.0 |
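The BF16 size in the table follows directly from the parameter count, at 2 bytes per bf16 value. A quick sanity check:

```python
total_params = 397e9   # 397B total parameters
active_params = 17e9   # 17B active per token (MoE routing)

# 2 bytes per bf16 parameter.
bf16_weights_gb = total_params * 2 / 1e9
print(f"{bf16_weights_gb:.0f} GB")  # 794 GB

# Only the routed experts' 17B active parameters are read per token,
# but all 397B must still fit in (or stream from) memory.
```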
What is TurboQuant?
TurboQuant (ICLR 2026) applies random orthogonal rotations followed by optimal scalar quantization to the KV cache. It delivers bit-identical prefill logits at 4-bit and up to 4-8× memory savings for long sequences.
Benchmarks (from the TurboQuant repository, Llama 3.1 8B on an RTX 5090; results vary by model and hardware):
- 4-bit KV cache: bit-identical prefill logits
- ~1.4-1.7× speedup on Apple Silicon
- Up to 8× KV memory savings
Performance on Qwen3.5-397B-A17B will differ; please open a discussion if you have independent results.
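The rotate-then-quantize idea can be sketched in a few lines of NumPy. This is a toy illustration, not the actual TurboQuant implementation: the paper's optimal scalar quantizer is replaced here by simple per-row uniform quantization, and the dimensions are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random orthogonal rotation via QR decomposition (hypothetical head_dim).
head_dim = 64
Q, _ = np.linalg.qr(rng.standard_normal((head_dim, head_dim)))

# A toy "key" tensor: 128 cached positions x head_dim.
keys = rng.standard_normal((128, head_dim))
rotated = keys @ Q  # rotation spreads outliers across dimensions

def quantize(x, bits=4):
    """Uniform per-row scalar quantization to 2**bits levels (stand-in
    for TurboQuant's optimal scalar quantizer)."""
    lo = x.min(axis=-1, keepdims=True)
    hi = x.max(axis=-1, keepdims=True)
    scale = (hi - lo) / (2**bits - 1)
    q = np.round((x - lo) / scale)   # integer codes in [0, 2**bits - 1]
    return q * scale + lo            # dequantized values

# Quantize in the rotated basis, then rotate back.
recon = quantize(rotated) @ Q.T
max_err = np.abs(recon - keys).max()
```

Because the rotation is orthogonal, it is exactly invertible and preserves norms; quantizing in the rotated basis keeps the per-element error small even at 4 bits.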
Current Ecosystem Support
| Runtime | TurboQuant Support | Notes |
|---|---|---|
| Python transformers + turboquant | ✅ Full | Drop-in cache class |
| llama.cpp upstream | ❌ Not merged | Use fork below |
| llama-cpp-turboquant fork | ✅ planar3, iso3 | GitHub |
| LM Studio | ⚠️ Requested | Use q8_0 as alternative |
| Ollama | ❌ Not supported | Use OLLAMA_KV_CACHE_TYPE=q8_0 |
| vLLM | ❌ Not supported | – |
| koboldcpp | ❌ Not supported | – |
Pre-quantized weight variants
If you want combined weight and KV cache compression, majentik hosts pre-quantized variants.
See Also