majentik committed on
Commit 5343ffa · verified · 1 Parent(s): d669d43

Update KV-cache card with accurate template and fork requirements

Files changed (1)
  1. README.md +98 -54
README.md CHANGED
@@ -1,94 +1,138 @@
  ---
- base_model: MiniMaxAI/MiniMax-M2.7
- library_name: transformers
- pipeline_tag: text-generation
  license: other
  license_name: minimax-model-license
  license_link: https://huggingface.co/MiniMaxAI/MiniMax-M2.7/blob/main/LICENSE
  tags:
  - minimax
  - m2.7
  - moe
  - quantized
- - rotorquant
- - kv-cache-quantization
  ---

  # MiniMax-M2.7-RotorQuant

- **KV-cache quantized variant of [MiniMaxAI/MiniMax-M2.7](https://huggingface.co/MiniMaxAI/MiniMax-M2.7) using RotorQuant compression.**

- ## Overview

- MiniMax-M2.7 is a massive 256-expert Mixture-of-Experts (MoE) model with 8 experts active per token, totaling approximately 456 billion parameters. This variant applies **RotorQuant** KV-cache quantization, which uses Hadamard rotation transforms to distribute outlier magnitudes before quantizing the KV cache.

- RotorQuant applies a learned rotation matrix (Hadamard transform) to keys and values before quantization, smoothing the activation distribution. This yields better quality retention than naive per-channel methods, especially at aggressive quantization levels.

- | Property | Value |
- |---|---|
- | Architecture | MoE (256 experts, 8 active/token) |
- | Total Parameters | ~456B |
- | Layers | 62 |
- | Hidden Size | 3072 |
- | Attention Heads | 48 |
- | Quantization | RotorQuant (KV-cache) |
- | Base Model | MiniMaxAI/MiniMax-M2.7 |

  ## Quickstart

  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer

- model_id = "majentik/MiniMax-M2.7-RotorQuant"
-
- tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(
-     model_id,
      device_map="auto",
-     torch_dtype="auto",
  )

- # Enable RotorQuant (IsoQuant) KV-cache quantization
- from transformers import IsoQuantCache
-
- past_key_values = IsoQuantCache(model.config)
-
- messages = [{"role": "user", "content": "What is a Comprehensive Geriatric Assessment?"}]
- inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

  outputs = model.generate(
-     inputs,
-     past_key_values=past_key_values,
-     max_new_tokens=512,
  )
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  ```

- ## RotorQuant vs TurboQuant

- | Feature | RotorQuant | TurboQuant |
- |---|---|---|
- | Technique | Rotation-based KV quantization (Hadamard transform) | Asymmetric per-channel KV quantization |
- | Throughput | Slightly lower throughput (rotation overhead) | Higher throughput, lower latency |
- | Quality | Better quality preservation at low bit-widths | Good quality preservation |
- | Best For | Quality-sensitive tasks, research | High-throughput serving, long contexts |

- ## Memory Estimates (Apple Silicon)

- Given the massive 456B parameter count, even with KV-cache quantization the model weights dominate memory usage. KV-cache quantization primarily helps with long-context inference overhead.

- | Configuration | Estimated Memory |
- |---|---|
- | FP16 weights + RotorQuant KV | ~912 GB |
- | 8-bit weights + RotorQuant KV | ~456 GB |
- | 4-bit weights + RotorQuant KV | ~228 GB |

- > **Note**: This model requires substantial hardware. For Apple Silicon deployment with reduced memory, see the MLX quantized variants.

  ## See Also

- - [MiniMaxAI/MiniMax-M2.7](https://huggingface.co/MiniMaxAI/MiniMax-M2.7) -- Base model
- - [majentik/MiniMax-M2.7-TurboQuant](https://huggingface.co/majentik/MiniMax-M2.7-TurboQuant) -- TurboQuant KV-cache variant
- - [majentik/MiniMax-M2.7-RotorQuant-MLX-8bit](https://huggingface.co/majentik/MiniMax-M2.7-RotorQuant-MLX-8bit) -- MLX 8-bit
- - [majentik/MiniMax-M2.7-RotorQuant-MLX-4bit](https://huggingface.co/majentik/MiniMax-M2.7-RotorQuant-MLX-4bit) -- MLX 4-bit
- - [majentik/MiniMax-M2.7-RotorQuant-MLX-3bit](https://huggingface.co/majentik/MiniMax-M2.7-RotorQuant-MLX-3bit) -- MLX 3-bit
  ---
  license: other
  license_name: minimax-model-license
  license_link: https://huggingface.co/MiniMaxAI/MiniMax-M2.7/blob/main/LICENSE
+ base_model: MiniMaxAI/MiniMax-M2.7
  tags:
+ - rotorquant
+ - kv-cache-quantization
  - minimax
  - m2.7
  - moe
  - quantized
+ library_name: transformers
+ pipeline_tag: text-generation
  ---

  # MiniMax-M2.7-RotorQuant

+ **RotorQuant KV cache compression** for [MiniMaxAI/MiniMax-M2.7](https://huggingface.co/MiniMaxAI/MiniMax-M2.7).

+ This is a **documentation repository** explaining how to combine MiniMax-M2.7's weights with RotorQuant inference-time KV cache compression. No weights are stored here; use the base model directly and apply RotorQuant via the Python package or the llama.cpp fork.

+ ## What is this?

+ KV cache compression reduces the memory used by the attention cache during inference. Unlike weight quantization (which is baked into the GGUF/MLX file), KV cache compression is applied at runtime, so the same base weights can be used with or without compression.

+ | Technique | Where it's applied | Savings |
+ |-----------|-------------------|---------|
+ | Weight quantization (GGUF/MLX/AWQ) | Baked into model file | Reduces disk + weight memory |
+ | **RotorQuant KV cache** | At inference time | Reduces attention memory (critical for long context) |
+
+ Both can be combined for maximum efficiency.
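As a back-of-envelope illustration of why the runtime side matters at 256K context, the sketch below sizes the KV cache from the base model's published dimensions (62 layers, 48 attention heads, hidden size 3072). It assumes one KV head per attention head (no grouped-query attention), which is an assumption about the architecture, so treat the absolute numbers as rough.

```python
# Rough KV cache sizing for a MiniMax-M2.7-like configuration.
# Assumption: all 48 heads store KV (no GQA); fewer KV heads would
# shrink these figures proportionally.
LAYERS = 62
KV_HEADS = 48            # assumed: one KV head per attention head
HEAD_DIM = 3072 // 48    # hidden size / attention heads = 64
SEQ_LEN = 256 * 1024     # 256K-token context

def kv_cache_gib(bits_per_value: int) -> float:
    """Total KV cache size in GiB for one sequence at full context."""
    # Factor of 2 covers keys + values.
    total_bits = 2 * LAYERS * KV_HEADS * HEAD_DIM * SEQ_LEN * bits_per_value
    return total_bits / 8 / 1024**3

print(f"bf16 KV cache : {kv_cache_gib(16):6.1f} GiB")
print(f"4-bit KV cache: {kv_cache_gib(4):6.1f} GiB")
```

Under these assumptions a full 256K-token cache is 186 GiB in bf16 and 46.5 GiB at 4 bits per value, which is why cache compression matters even when the ~456B weights dominate total memory.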
 
 
 
  ## Quickstart

+ ### Option A: Python / transformers
+
+ Install the `rotorquant` package:
+
+ ```bash
+ pip install rotorquant
+ ```
+
+ Then use it with the base model:
+
  ```python
+ import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer
+ from rotorquant import IsoQuantCache

+ tokenizer = AutoTokenizer.from_pretrained("MiniMaxAI/MiniMax-M2.7", trust_remote_code=True)
  model = AutoModelForCausalLM.from_pretrained(
+     "MiniMaxAI/MiniMax-M2.7",
+     torch_dtype=torch.bfloat16,
      device_map="auto",
+     trust_remote_code=True,
  )

+ # Apply RotorQuant to the KV cache
+ cache = IsoQuantCache(bits=4)  # or bits=2 for more aggressive compression

+ inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
  outputs = model.generate(
+     **inputs,
+     max_new_tokens=128,
+     past_key_values=cache,
+     use_cache=True,
  )
+ print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
+ ```
+
+ ### Option B: llama.cpp / LM Studio / Ollama (with fork)
+
+ RotorQuant KV cache types (`iso3`) are **not** in upstream llama.cpp. They require:
+ - [llama-cpp-turboquant fork](https://github.com/johndpope/llama-cpp-turboquant/tree/feature/planarquant-kv-cache)
+
+ Once built:
+
+ ```bash
+ llama-cli -m MiniMax-M2.7.gguf \
+     --cache-type-k iso3 --cache-type-v iso3 \
+     -ngl 99 -fa \
+     -p "Hello"
  ```

+ For standard runtimes (LM Studio, Ollama, upstream llama.cpp), use conventional KV cache types (`q8_0`, `q4_0`). You lose the RotorQuant-specific benefits but keep GGUF weight quantization.
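As a concrete example of that fallback, upstream llama.cpp exposes its conventional quantized cache types through the `--cache-type-k`/`--cache-type-v` flags (flash attention is required to quantize the V cache). This mirrors the fork invocation but without RotorQuant:

```shell
# Upstream llama.cpp: q8_0 KV cache (no RotorQuant-specific benefits).
# -fa enables flash attention, required for a quantized V cache.
llama-cli -m MiniMax-M2.7.gguf \
    --cache-type-k q8_0 --cache-type-v q8_0 \
    -ngl 99 -fa \
    -p "Hello"
```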

+ ## Model Specifications
+
+ | Property | Value |
+ |----------|-------|
+ | Base Model | [MiniMaxAI/MiniMax-M2.7](https://huggingface.co/MiniMaxAI/MiniMax-M2.7) |
+ | Architecture | Sparse MoE (256 experts, 8 active per token) |
+ | Parameters | ~456B total (MoE) |
+ | Context Length | 256K |
+ | BF16 Size | ~912 GB |
+ | Modalities | Text |
+ | License | other (minimax-model-license) |
+
+ ## What is RotorQuant?
+
+ [RotorQuant](https://github.com/scrya-com/rotorquant) is a KV cache compression method based on Clifford algebra (Cl(3,0)) rotors, positioned as a faster, more parameter-efficient alternative to Google's TurboQuant. It uses lightweight block-diagonal rotations (independent 2D/4D rotations per pair/quartet), achieving O(d) complexity instead of O(d log d); the transform is fully parallelisable with no inter-element dependencies.
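The block-diagonal idea can be sketched with plain pairwise (2D Givens) rotations in NumPy. This is a simplified stand-in, not RotorQuant's actual rotor code: the real method learns 4 parameters per rotor acting on 2D/4D blocks, while here each channel pair gets a single hypothetical angle. The structural point survives: d/2 independent rotations cost O(d), have no cross-pair dependencies, and are exactly invertible on the decode path.

```python
import numpy as np

def pairwise_rotate(x: np.ndarray, angles: np.ndarray) -> np.ndarray:
    """Rotate consecutive channel pairs independently.

    Block-diagonal: each 2D pair is rotated on its own, so the whole
    transform is O(d) and has no inter-element dependencies.
    """
    pairs = x.reshape(*x.shape[:-1], -1, 2)          # (..., d/2, 2)
    c, s = np.cos(angles), np.sin(angles)
    out = np.stack(
        [c * pairs[..., 0] - s * pairs[..., 1],
         s * pairs[..., 0] + c * pairs[..., 1]],
        axis=-1,
    )
    return out.reshape(x.shape)

def quantize_int4(x: np.ndarray):
    """Symmetric per-vector 4-bit quantization with an fp scale."""
    scale = np.abs(x).max(axis=-1, keepdims=True) / 7.0
    return np.clip(np.round(x / scale), -8, 7), scale

rng = np.random.default_rng(0)
key = rng.normal(size=64)                            # toy key vector
angles = rng.uniform(0.0, 2.0 * np.pi, size=32)      # hypothetical per-pair angles

rotated = pairwise_rotate(key, angles)
q, scale = quantize_int4(rotated)
# Decode path: dequantize, then undo the rotations with negated angles.
restored = pairwise_rotate(q * scale, -angles)
print("max abs reconstruction error:", np.abs(restored - key).max())
```

Because each rotation is orthogonal, dequantizing and applying the negated angles recovers the original keys up to quantization error; in the real method the learned rotors spread outlier mass across a block so the per-vector quantization scale stays small.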
+
+ **Benchmarks** (from the RotorQuant repository, Llama 3.1 8B on RTX 5090):
+
+ - Prefill: 3,822 tok/s (vs TurboQuant 722 tok/s)
+ - Decode: 119 tok/s (vs TurboQuant 93 tok/s)
+ - Perplexity: 6.91 (vs TurboQuant 7.07)
+ - Parameters: 4 per rotor (vs TurboQuant 16,384)
+
+ > Benchmarks were measured on Llama 3.1 8B, not this model; performance on MiniMax-M2.7 will differ. Please open a discussion if you have independent results.

+ ## Current Ecosystem Support

+ | Runtime | RotorQuant Support | Notes |
+ |---------|--------------------|-------|
+ | Python transformers + `rotorquant` | ✅ Full | Drop-in cache class |
+ | llama.cpp upstream | ❌ Not merged | Use fork below |
+ | llama-cpp-turboquant fork | ✅ `planar3`, `iso3` | [GitHub](https://github.com/johndpope/llama-cpp-turboquant/tree/feature/planarquant-kv-cache) |
+ | LM Studio | ❌ [Requested](https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/1719) | Use `q8_0` as alternative |
+ | Ollama | ❌ Not supported | Use `OLLAMA_KV_CACHE_TYPE=q8_0` |
+ | vLLM | ❌ Not supported | - |
+ | koboldcpp | ❌ Not supported | - |
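For the Ollama row above, the fallback is configured through standard Ollama environment variables (`OLLAMA_KV_CACHE_TYPE` accepts `f16`, `q8_0`, or `q4_0`, and the quantized cache only takes effect with flash attention enabled):

```shell
# Ollama fallback: conventional q8_0 KV cache, not RotorQuant.
export OLLAMA_FLASH_ATTENTION=1
export OLLAMA_KV_CACHE_TYPE=q8_0
ollama serve
```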

+ ## Pre-quantized weight variants

+ If you want combined weight + KV cache compression, majentik hosts pre-quantized versions:
+
+ - [MLX (Apple Silicon)](https://huggingface.co/majentik?search=MiniMax-M2.7+MLX)
+ - [GGUF (llama.cpp / Ollama / LM Studio)](https://huggingface.co/majentik?search=MiniMax-M2.7+GGUF)

  ## See Also

+ - [RotorQuant GitHub](https://github.com/scrya-com/rotorquant)
+ - [TurboQuant paper (arXiv 2504.19874)](https://arxiv.org/abs/2504.19874)
+ - [llama-cpp-turboquant fork](https://github.com/johndpope/llama-cpp-turboquant/tree/feature/planarquant-kv-cache)
+ - [Base model: MiniMaxAI/MiniMax-M2.7](https://huggingface.co/MiniMaxAI/MiniMax-M2.7)