rdtand committed on
Commit 35766e0 · verified · 1 Parent(s): 846293f

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ model.safetensors.index.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,124 @@
+ ---
+ license: other
+ license_name: minimax-m2.7
+ license_link: https://huggingface.co/MiniMaxAI/MiniMax-M2.7/blob/main/LICENSE
+ base_model: MiniMaxAI/MiniMax-M2.7
+ base_model_relation: quantized
+ tags:
+ - prismaquant
+ - mixed-precision
+ - quantization
+ - nvfp4
+ - fp8
+ - moe
+ - vllm
+ - compressed-tensors
+ library_name: vllm
+ inference: false
+ pipeline_tag: text-generation
+ ---
+
+ # MiniMax-M2.7 — PrismaQuant 3.20 bpp (vLLM)
+
+ Mixed-precision quantization of [MiniMax-M2.7](https://huggingface.co/MiniMaxAI/MiniMax-M2.7) using [prismaquant](https://github.com/RobTand/prismaquant). The artifact is a standard `compressed-tensors` checkpoint that vLLM serves natively — **no patches, no custom kernels, no forked runtime**.
+
+ This is a **single DGX Spark target**: the model fits in 121 GB of unified memory with KV-cache headroom for 32k context.
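As a rough check on that headroom claim, the KV-cache size per sequence follows from the attention shape in this repo's config.json (62 layers, 8 KV heads, head_dim 128) and the fp8 KV cache used in the serving command; this is a sketch that assumes every layer keeps a full-attention cache:

```python
# Rough KV-cache sizing from this repo's config.json: 62 layers, 8 KV heads,
# head_dim 128, and 1 byte per value with --kv-cache-dtype fp8. Assumes every
# layer holds a full-attention cache (attn_type_list is all 1s).
def kv_cache_bytes(num_layers=62, num_kv_heads=8, head_dim=128,
                   bytes_per_value=1, context_len=32768):
    per_token = 2 * num_layers * num_kv_heads * head_dim * bytes_per_value  # K and V
    return per_token, per_token * context_len

per_token, total = kv_cache_bytes()
print(f"{per_token / 1024:.0f} KiB/token -> {total / 2**30:.2f} GiB at 32k context")
```

At roughly 124 KiB per token, one full 32k-token sequence needs just under 4 GiB of KV cache, which fits comfortably in 121 GB after the ~90 GB of weights.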
+ ## Headline numbers
+
+ | Metric | Value |
+ |---|---|
+ | Source model | MiniMax-M2.7 (228 B params, 256-expert MoE) |
+ | Disk size | **90 GB** (-58% vs FP8 source ~215 GB; -73% vs BF16 ~456 GB) |
+ | Achieved bpp | 3.20 |
+ | Format mix | 30,780 NVFP4 + 2,204 FP8_SOURCE Linears |
+ | Experts pruned | 4,960 of 15,872 (31%), via REAP saliency + joint quant-prune knapsack |
+ | MoE config | num_local_experts = 176 (down from 256), top_k = 8 |
+ | vLLM patches required | **0** |
+ ## Quality
+
+ Spot-checked for coherent generations across agentic / math / coding prompts at temperature 0:
+
+ | Prompt | Result |
+ |---|---|
+ | Multi-segment train problem (math) | Step-by-step reasoning, exact answer 240 mi / 68.571 mph |
+ | Python `is_palindrome` | Clean, correct |
+ | Python `quicksort` | Clean, correct |
+ | Python `binary_search` | Clean, correct |
+ | Python `longest_substring_without_repeat` | Clean, sliding-window correct |
+ | Python `merge_two_lists` (linked list) | Clean, correct |
+ | Python `fibonacci` | Clean iterative impl with worked example |
+ | Rust `Point::distance` | Clean, uses `.hypot()` (numerically stable) |
+ | SQL top-5 customers by 2024 volume | Clean SQL, proper date-range filter |
+ | Tool calling (function-call API) | Clean `{"city": "Paris"}` JSON emission |
+ | Reasoning tags via `<think>` | Captured by `--reasoning-parser minimax_m2` |
+
+ Formal benchmarks (MMLU, GSM8K, HumanEval) are deferred to a follow-up; this artifact is positioned as a **fits-on-Spark, serves-coherently** working release.
+ ## Throughput
+
+ | Metric | Value |
+ |---|---|
+ | Decode tok/s (single-stream, T0, ctx 128) | **~14 tok/s** |
+ | Prior NVINT3 baseline on same hardware | 3.78 tok/s |
+ | Speedup | **+265%** |
+
+ Measured on the same DGX Spark host (GB10) used for the prior NVINT3 release. The NVFP4 + FP8 mix avoids the NVINT load-time expansion that pushed earlier 3.25 bpp artifacts to 116 GiB at runtime; this artifact's runtime footprint ≈ disk size ≈ 90 GB.
+ ## Format mix
+
+ ```
+ NVFP4      : 30,780 Linears (94.5 %) — experts + most attention projections
+ FP8_SOURCE :  2,204 Linears (6.7 %)  — passthrough of natively-FP8 source weights
+ BF16       :     62 routers (router output dim shrunk to kept-experts only)
+ PRUNED     : 14,880 Linear slots (4,960 experts × 3 weights, dropped per REAP)
+ ```
+ ## How it was produced
+
+ prismaquant's allocator solves a multi-choice knapsack over per-Linear `Δloss ≈ 0.5 · H_trace · MSE_W`, where `H_trace` is the empirical Fisher diagonal trace from a streaming probe and `MSE_W` is the measured per-format round-trip error. Joint expert pruning composes into the same knapsack: each MoE choice is a `(format, dropped_expert_ids)` pair priced together.
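A toy sketch of that allocation step, with illustrative numbers only: the real allocator solves the multi-choice knapsack jointly; here a greedy downgrade on loss-increase per bit saved stands in for it, and the effective bits per format are assumptions:

```python
# Toy version of the allocator's pricing: one format per Linear, minimize the
# summed proxy loss 0.5 * H_trace * MSE_W subject to a total-bit budget.
FORMATS = {"NVFP4": 4.25, "FP8": 8.0, "BF16": 16.0}  # assumed effective bits/weight

def delta_loss(h_trace, mse_w):
    return 0.5 * h_trace * mse_w

def allocate(linears, budget_bits):
    """linears: list of (name, num_weights, h_trace, {format: mse_w})."""
    choice = {name: "BF16" for name, _, _, _ in linears}
    total = lambda: sum(n * FORMATS[choice[name]] for name, n, _, _ in linears)
    while total() > budget_bits:
        best = None  # (loss increase per bit saved, name, format)
        for name, n, h, mse in linears:
            cur = FORMATS[choice[name]]
            for fmt, bits in FORMATS.items():
                if bits >= cur:
                    continue  # only consider downgrades
                extra = delta_loss(h, mse[fmt]) - delta_loss(h, mse[choice[name]])
                cand = (extra / (n * (cur - bits)), name, fmt)
                if best is None or cand[0] < best[0]:
                    best = cand
        if best is None:
            break  # nothing left to downgrade
        choice[best[1]] = best[2]
    return choice

# The high-Fisher attention projection stays at FP8; the bulky expert drops to NVFP4.
linears = [
    ("attn.o_proj", 1000, 5.0, {"NVFP4": 1e-3, "FP8": 1e-5, "BF16": 0.0}),
    ("mlp.expert0", 4000, 0.5, {"NVFP4": 1e-3, "FP8": 1e-5, "BF16": 0.0}),
]
print(allocate(linears, budget_bits=30_000))
```

The qualitative outcome mirrors the format mix above: sensitive, small attention projections keep more bits while the parameter-heavy experts absorb the compression.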
+ For this release:
+
+ - **Calibration**: 32 chunks × 4 samples × 2048 seqlen ≈ 262 k tokens from `cal-mix-v1` (multi-domain agentic + math + coding mix).
+ - **Probe**: streaming layer-by-layer Fisher capture (incremental_probe).
+ - **Cost step**: per-(Linear, format) NVFP4 / FP8 / MXFP8 / BF16 round-trip MSE.
+ - **Allocator**: target-bits = 3.20 (knee within the user-specified 90-95 GB band).
+ - **Export**: native compressed-tensors with GPTQ + per-group scale-sweep activation-aware refinement on every NVFP4 Linear.
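The cost step's round-trip MSE can be illustrated for an NVFP4-like format: groups of 16 values share one scale and each value snaps to the E2M1 grid. This is a simplified sketch; the real export also quantizes the scales themselves to FP8 (e4m3), which is omitted here:

```python
import numpy as np

# Toy round-trip error for an NVFP4-like format: groups of 16 values, one scale
# per group, values snapped to the symmetric E2M1 grid.
E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
GRID = np.concatenate([-E2M1[::-1], E2M1])  # symmetric 4-bit value grid

def roundtrip_mse(w, group_size=16):
    w = np.asarray(w, dtype=np.float64).reshape(-1, group_size)
    scale = np.abs(w).max(axis=1, keepdims=True) / 6.0  # map group max to +/-6
    scale = np.where(scale == 0, 1.0, scale)
    idx = np.abs(w / scale - GRID[:, None, None]).argmin(axis=0)  # nearest grid point
    deq = GRID[idx] * scale
    return float(np.mean((w - deq) ** 2))

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 16))
print(f"NVFP4-like round-trip MSE: {roundtrip_mse(w):.5f}")
```

Running a sweep like this per (Linear, format) yields the `MSE_W` values that, scaled by the Fisher trace, price each knapsack choice.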
+ ## Serving
+
+ ```bash
+ vllm serve <this-repo> \
+   --quantization compressed-tensors \
+   --trust-remote-code \
+   --max-model-len 32768 \
+   --max-num-seqs 4 \
+   --gpu-memory-utilization 0.85 \
+   --kv-cache-dtype fp8 \
+   --enable-prefix-caching \
+   --enable-auto-tool-choice \
+   --tool-call-parser minimax_m2 \
+   --reasoning-parser minimax_m2
+ ```
+
+ Recommended: set `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` on UMA hardware (DGX Spark) to keep the CUDA allocator from hoarding freed blocks.
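Once running, the server speaks the OpenAI chat-completions protocol. A minimal request body exercising the tool-call path; the `get_weather` schema is a hypothetical example, and `<this-repo>` stands in for whatever model name vLLM registers:

```python
import json

# Chat-completions request for the vllm serve command above. Only the endpoint
# shape is assumed; the get_weather tool schema is illustrative.
payload = {
    "model": "<this-repo>",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "temperature": 0,
}
body = json.dumps(payload)  # POST to http://localhost:8000/v1/chat/completions
```

With `--enable-auto-tool-choice` and the `minimax_m2` parsers, a successful round trip returns a structured tool call with arguments like `{"city": "Paris"}`, matching the spot-check table above.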
+ ## Limitations / caveats
+
+ - **Calibration scale**: 262 k tokens is moderate. Heavy reasoning-chain or long-context tasks may benefit from a re-export against more diverse calibration data.
+ - **No formal benchmarks yet**: MMLU / GSM8K / HumanEval pending. The headline result is "fits on Spark + coherent generations on representative prompts".
+ - **MTP heads not present**: MiniMax-M2 has no MTP head (unlike Qwen3.5/3.6), so there is no speculative-decoding accelerator to ship alongside.
+ - **Pruned experts are gone**: 31% of experts were dropped per REAP saliency. Tasks heavily dependent on those specific experts could see degradation; empirical probes show no degradation on agentic / math / coding prompts.
+
+ ## Acknowledgements
+
+ - [MiniMaxAI](https://huggingface.co/MiniMaxAI) for the source model.
+ - [vLLM](https://github.com/vllm-project/vllm) for the compressed-tensors serving stack.
+ - [REAP](https://arxiv.org/abs/2410.21271) for the per-expert dropout-loss saliency formulation.
+ - HAQ / HAWQ / GPTQ / AutoRound / SmoothQuant for the mixed-precision allocation foundations.
+
+ ## License
+
+ Inherits the MiniMax-M2.7 license from the source model. See the [base model card](https://huggingface.co/MiniMaxAI/MiniMax-M2.7) for terms.
chat_template.jinja ADDED
@@ -0,0 +1,159 @@
+ {# ---------- special token variables ---------- #}
+ {%- set toolcall_begin_token = '<minimax:tool_call>' -%}
+ {%- set toolcall_end_token = '</minimax:tool_call>' -%}
+ {#- Tool Rendering Functions ============================================== -#}
+ {%- macro render_tool_namespace(namespace_name, tool_list) -%}
+ {%- for tool in tool_list -%}
+ <tool>{{ tool.function | tojson(ensure_ascii=False) }}</tool>
+ {% endfor -%}
+ {%- endmacro -%}
+ {%- macro visible_text(content) -%}
+ {%- if content is string -%}
+ {{ content }}
+ {%- elif content is iterable and content is not mapping -%}
+ {%- for item in content -%}
+ {%- if item is mapping and item.type == 'text' -%}
+ {{- item.text }}
+ {%- elif item is string -%}
+ {{- item }}
+ {%- endif -%}
+ {%- endfor -%}
+ {%- else -%}
+ {{- content }}
+ {%- endif -%}
+ {%- endmacro -%}
+ {#- System Message Construction ============================================ -#}
+ {%- macro build_system_message(system_message) -%}
+ {%- if system_message and system_message.content -%}
+ {{- visible_text(system_message.content) }}
+ {%- else -%}
+ {%- if model_identity is not defined -%}
+ {%- set model_identity = "You are a helpful assistant. Your name is MiniMax-M2.7 and is built by MiniMax." -%}
+ {%- endif -%}
+ {{- model_identity }}
+ {%- endif -%}
+
+ {#- Handle current_date -#}
+ {%- if system_message and system_message.current_date -%}
+ {{- '\n' ~ 'Current date: ' + system_message.current_date }}
+ {%- endif -%}
+ {#- Handle current_location -#}
+ {%- if system_message and system_message.current_location -%}
+ {{- '\n' ~ 'Current location: ' + system_message.current_location }}
+ {%- endif -%}
+ {%- endmacro -%}
+ {#- Main Template Logic ================================================= -#}
+ {#- Extract system message (only first message if it's system) -#}
+ {%- set system_message = none -%}
+ {%- set conversation_messages = messages -%}
+ {%- if messages and messages[0].role == "system" -%}
+ {%- set system_message = messages[0] -%}
+ {%- set conversation_messages = messages[1:] -%}
+ {%- endif -%}
+ {#- Get the last user message turn, for interleaved thinking -#}
+ {%- set ns = namespace(last_user_index=-1) %}
+ {% for m in conversation_messages %}
+ {%- if m.role == 'user' %}
+ {% set ns.last_user_index = loop.index0 -%}
+ {%- endif %}
+ {%- endfor %}
+ {#- Render system message -#}
+ {{- ']~!b[' ~ ']~b]system' ~ '\n' }}
+ {{- build_system_message(system_message) }}
+ {#- Render tools if available -#}
+ {%- if tools -%}
+ {{- '\n\n' ~ '# Tools' ~ '\n' ~ 'You may call one or more tools to assist with the user query.\nHere are the tools available in JSONSchema format:' ~ '\n' }}
+ {{- '\n' ~ '<tools>' ~ '\n' }}
+ {{- render_tool_namespace("functions", tools) }}
+ {{- '</tools>' ~ '\n\n' }}
+ {{- 'When making tool calls, use XML format to invoke tools and pass parameters:' ~ '\n' }}
+ {{- '\n' ~ toolcall_begin_token }}
+ <invoke name="tool-name-1">
+ <parameter name="param-key-1">param-value-1</parameter>
+ <parameter name="param-key-2">param-value-2</parameter>
+ ...
+ </invoke>
+ {{- '\n' ~ toolcall_end_token }}
+ {%- endif -%}
+ {{- '[e~[\n' }}
+
+ {#- Render messages -#}
+ {%- set last_tool_call = namespace(name=none) -%}
+ {%- for message in conversation_messages -%}
+ {%- if message.role == 'assistant' -%}
+ {#- Only render reasoning_content if no user message follows -#}
+ {{- ']~b]ai' ~ '\n' }}
+
+ {%- set reasoning_content = '' %}
+ {%- set content = visible_text(message.content) %}
+ {%- if message.reasoning_content is string %}
+ {%- set reasoning_content = message.reasoning_content %}
+ {%- else %}
+ {%- if '</think>' in content %}
+ {%- set reasoning_content = content.split('</think>')[0].strip('\n').split('<think>')[-1].strip('\n') %}
+ {%- set content = content.split('</think>')[-1].strip('\n') %}
+ {%- endif %}
+ {%- endif %}
+ {%- if reasoning_content and loop.index0 > ns.last_user_index -%}
+ {{- '<think>' ~ '\n' ~ reasoning_content ~ '\n' ~ '</think>' ~ '\n\n' }}
+ {%- endif -%}
+ {%- if content -%}
+ {{- content }}
+ {%- endif -%}
+ {%- if message.tool_calls -%}
+ {{- '\n' ~ toolcall_begin_token ~ '\n' }}
+
+ {%- for tool_call in message.tool_calls -%}
+ {%- if tool_call.function %}
+ {%- set tool_call = tool_call.function %}
+ {%- endif %}
+ {{- '<invoke name="' + tool_call.name + '">' }}
+ {% set _args = tool_call.arguments %}
+ {%- for k, v in _args.items() %}
+ {{- '<parameter name="' + k + '">' }}
+ {{- v | tojson(ensure_ascii=False) if v is not string else v }}
+ {{- '</parameter>' }}
+ {% endfor %}
+ {{- '</invoke>' ~ '\n' }}
+ {%- endfor -%}
+
+ {{- toolcall_end_token}}
+ {%- set last_tool_call.name = message.tool_calls[-1].name -%}
+ {%- else -%}
+ {%- set last_tool_call.name = none -%}
+ {%- endif -%}
+ {{- '[e~[' ~ '\n' }}
+
+ {%- elif message.role == 'tool' -%}
+ {%- if last_tool_call.name is none -%}
+ {{- raise_exception("Message has tool role, but there was no previous assistant message with a tool call!") }}
+ {%- endif -%}
+ {%- if loop.first or (conversation_messages[loop.index0 - 1].role != 'tool') -%}
+ {{- ']~b]tool' }}
+ {%- endif -%}
+ {%- if message.content is string -%}
+ {{- '\n<response>' }}
+ {{- message.content }}
+ {{- '</response>' }}
+ {%- else -%}
+ {%- for tr in message.content -%}
+ {{- '\n<response>' }}
+ {{- tr.output if tr.output is defined else (tr.text if tr.type == 'text' and tr.text is defined else tr) }}
+ {{- '\n</response>' }}
+ {%- endfor -%}
+ {%- endif -%}
+ {%- if loop.last or (conversation_messages[loop.index0 + 1].role != 'tool') -%}
+ {{- '[e~[\n' -}}
+ {%- endif -%}
+
+ {%- elif message.role == 'user' -%}
+ {{- ']~b]user' ~ '\n' }}
+ {{- visible_text(message.content) }}
+ {{- '[e~[' ~ '\n' }}
+ {%- endif -%}
+ {%- endfor -%}
+
+ {#- Generation prompt -#}
+ {%- if add_generation_prompt -%}
+ {{- ']~b]ai' ~ '\n' ~ '<think>' ~ '\n' }}
+ {%- endif -%}
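The `visible_text` macro above is the piece that flattens both plain-string and multi-part message content. Copied out of the template, it can be exercised standalone with Jinja2:

```python
from jinja2 import Template

# The visible_text macro from the chat template, rendered in isolation.
MACRO = """\
{%- macro visible_text(content) -%}
{%- if content is string -%}
{{ content }}
{%- elif content is iterable and content is not mapping -%}
{%- for item in content -%}
{%- if item is mapping and item.type == 'text' -%}
{{- item.text }}
{%- elif item is string -%}
{{- item }}
{%- endif -%}
{%- endfor -%}
{%- else -%}
{{- content }}
{%- endif -%}
{%- endmacro -%}
{{- visible_text(content) -}}"""

tmpl = Template(MACRO)
print(tmpl.render(content="hello"))                               # hello
print(tmpl.render(content=[{"type": "text", "text": "a"}, "b"]))  # ab
```

Plain strings pass through unchanged, while OpenAI-style content lists (mixed `{"type": "text", ...}` parts and bare strings) are concatenated into one string.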
config.json ADDED
@@ -0,0 +1,722 @@
+ {
+   "architectures": [
+     "MiniMaxM2ForCausalLM"
+   ],
+   "attn_type_list": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
+   "auto_map": {
+     "AutoConfig": "configuration_minimax_m2.MiniMaxM2Config",
+     "AutoModelForCausalLM": "modeling_minimax_m2.MiniMaxM2ForCausalLM"
+   },
+   "dtype": "bfloat16",
+   "head_dim": 128,
+   "hidden_act": "silu",
+   "hidden_size": 3072,
+   "intermediate_size": 1536,
+   "max_position_embeddings": 204800,
+   "model_type": "minimax_m2",
+   "mtp_transformer_layers": 1,
+   "num_attention_heads": 48,
+   "num_experts_per_tok": 8,
+   "num_hidden_layers": 62,
+   "num_key_value_heads": 8,
+   "num_local_experts": 176,
+   "num_mtp_modules": 3,
+   "qk_norm_type": "per_layer",
+   "quantization_config": {
+     "quant_method": "compressed-tensors",
+     "format": "mixed-precision",
+     "config_groups": {
+       "group_0": {
+         "format": "float-quantized",
+         "weights": {
+           "num_bits": 8,
+           "type": "float",
+           "strategy": "block",
+           "block_structure": [128, 128],
+           "symmetric": true,
+           "dynamic": false,
+           "observer": "memoryless_minmax"
+         },
+         "input_activations": {
+           "num_bits": 8,
+           "type": "float",
+           "strategy": "tensor",
+           "symmetric": true,
+           "dynamic": true,
+           "observer": "memoryless_minmax"
+         },
+         "targets": [
+           "re:^model[.]layers[.]0[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]0[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]0[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]0[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]0[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]1[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]1[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]1[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]1[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]1[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]11[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]11[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]11[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]11[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]14[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]14[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]14[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]14[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]15[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]15[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]15[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]15[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]16[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]16[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]16[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]16[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]17[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]17[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]17[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]17[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]18[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]18[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]18[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]18[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]19[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]19[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]19[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]19[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]20[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]20[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]20[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]20[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]21[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]21[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]21[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]21[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]22[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]22[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]22[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]22[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]23[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]23[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]23[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]23[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]24[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]24[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]24[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]24[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]25[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]25[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]25[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]25[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]26[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]26[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]26[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]26[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]27[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]27[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]27[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]27[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]28[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]28[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]28[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]28[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]29[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]29[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]29[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]29[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]30[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]30[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]30[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]30[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]31[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]31[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]31[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]31[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]32[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]32[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]32[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]32[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]33[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]33[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]33[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]33[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]34[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]34[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]34[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]34[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]35[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]35[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]35[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]35[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]37[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]37[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]37[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]37[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]39[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]39[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]39[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]39[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]42[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]42[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]42[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]42[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]48[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]48[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]48[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]48[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]49[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]49[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]49[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]49[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]0[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+           "re:^model[.]layers[.]0[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+           "re:^model[.]layers[.]0[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+           "re:^model[.]layers[.]1[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+           "re:^model[.]layers[.]1[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+           "re:^model[.]layers[.]1[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+           "re:^model[.]layers[.]5[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+           "re:^model[.]layers[.]5[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+           "re:^model[.]layers[.]5[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+           "re:^model[.]layers[.]7[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+           "re:^model[.]layers[.]7[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+           "re:^model[.]layers[.]7[.]mlp[.]experts[.][0-9]+[.]up_proj$"
+         ]
+       },
+       "group_1": {
+         "format": "nvfp4-pack-quantized",
+         "weights": {
+           "num_bits": 4,
+           "type": "float",
+           "strategy": "tensor_group",
+           "group_size": 16,
+           "symmetric": true,
+           "dynamic": false,
+           "scale_dtype": "torch.float8_e4m3fn",
+           "zp_dtype": "torch.float8_e4m3fn",
+           "observer": "memoryless_minmax"
+         },
+         "input_activations": {
+           "num_bits": 4,
+           "type": "float",
+           "strategy": "tensor_group",
+           "group_size": 16,
+           "symmetric": true,
+           "dynamic": "local",
+           "observer": "static_minmax",
+           "scale_dtype": "torch.float8_e4m3fn",
+           "zp_dtype": "torch.float8_e4m3fn"
+         },
+         "targets": [
+           "re:^model[.]layers[.]10[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]10[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]10[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]10[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]10[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]11[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]12[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]12[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]12[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]12[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]12[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]13[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]13[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]13[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]13[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]13[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]14[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]15[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]16[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]17[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]18[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]19[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]2[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]2[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]2[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]2[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]2[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]20[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]21[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]22[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]23[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]24[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]25[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]26[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]27[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]28[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]29[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]3[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]3[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]3[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]3[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]3[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]30[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]31[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]32[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]33[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]34[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]35[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]36[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]36[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]36[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]36[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]36[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]37[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]38[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]38[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]38[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]38[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]38[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]39[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]4[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]4[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]4[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]4[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]4[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]40[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]40[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]40[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]40[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]40[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]41[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]41[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]41[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]41[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]41[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]42[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]43[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]43[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]43[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]43[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]43[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]44[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]44[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]44[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]44[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]44[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]45[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]45[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]45[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]45[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]45[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]46[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]46[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]46[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]46[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]46[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]47[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]47[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]47[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]47[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]47[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]48[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]49[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]5[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]5[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]5[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]5[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]5[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]50[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]50[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]50[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]50[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]50[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]51[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]51[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]51[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]51[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]51[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]52[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]52[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]52[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]52[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]52[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]53[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]53[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]53[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]53[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]53[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]54[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]54[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]54[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]54[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]54[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]55[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]55[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]55[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]55[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]55[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]56[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]56[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]56[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]56[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]56[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]57[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]57[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]57[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]57[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]57[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]58[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]58[.]self_attn[.]o_proj$",
+           "re:^model[.]layers[.]58[.]self_attn[.]q_proj$",
+           "re:^model[.]layers[.]58[.]self_attn[.]qkv_proj$",
+           "re:^model[.]layers[.]58[.]self_attn[.]v_proj$",
+           "re:^model[.]layers[.]59[.]self_attn[.]k_proj$",
+           "re:^model[.]layers[.]59[.]self_attn[.]o_proj$",
431
+ "re:^model[.]layers[.]59[.]self_attn[.]q_proj$",
432
+ "re:^model[.]layers[.]59[.]self_attn[.]qkv_proj$",
433
+ "re:^model[.]layers[.]59[.]self_attn[.]v_proj$",
434
+ "re:^model[.]layers[.]6[.]self_attn[.]k_proj$",
435
+ "re:^model[.]layers[.]6[.]self_attn[.]o_proj$",
436
+ "re:^model[.]layers[.]6[.]self_attn[.]q_proj$",
437
+ "re:^model[.]layers[.]6[.]self_attn[.]qkv_proj$",
438
+ "re:^model[.]layers[.]6[.]self_attn[.]v_proj$",
439
+ "re:^model[.]layers[.]60[.]self_attn[.]k_proj$",
440
+ "re:^model[.]layers[.]60[.]self_attn[.]o_proj$",
441
+ "re:^model[.]layers[.]60[.]self_attn[.]q_proj$",
442
+ "re:^model[.]layers[.]60[.]self_attn[.]qkv_proj$",
443
+ "re:^model[.]layers[.]60[.]self_attn[.]v_proj$",
444
+ "re:^model[.]layers[.]61[.]self_attn[.]k_proj$",
445
+ "re:^model[.]layers[.]61[.]self_attn[.]o_proj$",
446
+ "re:^model[.]layers[.]61[.]self_attn[.]q_proj$",
447
+ "re:^model[.]layers[.]61[.]self_attn[.]qkv_proj$",
448
+ "re:^model[.]layers[.]61[.]self_attn[.]v_proj$",
449
+ "re:^model[.]layers[.]7[.]self_attn[.]k_proj$",
450
+ "re:^model[.]layers[.]7[.]self_attn[.]o_proj$",
451
+ "re:^model[.]layers[.]7[.]self_attn[.]q_proj$",
452
+ "re:^model[.]layers[.]7[.]self_attn[.]qkv_proj$",
453
+ "re:^model[.]layers[.]7[.]self_attn[.]v_proj$",
454
+ "re:^model[.]layers[.]8[.]self_attn[.]k_proj$",
455
+ "re:^model[.]layers[.]8[.]self_attn[.]o_proj$",
456
+ "re:^model[.]layers[.]8[.]self_attn[.]q_proj$",
457
+ "re:^model[.]layers[.]8[.]self_attn[.]qkv_proj$",
458
+ "re:^model[.]layers[.]8[.]self_attn[.]v_proj$",
459
+ "re:^model[.]layers[.]9[.]self_attn[.]k_proj$",
460
+ "re:^model[.]layers[.]9[.]self_attn[.]o_proj$",
461
+ "re:^model[.]layers[.]9[.]self_attn[.]q_proj$",
462
+ "re:^model[.]layers[.]9[.]self_attn[.]qkv_proj$",
463
+ "re:^model[.]layers[.]9[.]self_attn[.]v_proj$",
464
+ "re:^model[.]layers[.]10[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]10[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]10[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]11[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]11[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]11[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]12[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]12[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]12[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]13[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]13[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]13[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]14[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]14[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]14[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]15[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]15[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]15[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]16[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]16[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]16[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]17[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]17[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]17[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]18[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]18[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]18[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]19[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]19[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]19[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]20[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]20[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]20[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]21[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]21[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]21[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]22[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]22[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]22[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]23[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]23[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]23[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]24[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]24[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]24[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]25[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]25[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]25[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]26[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]26[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]26[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]27[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]27[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]27[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]28[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]28[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]28[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]29[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]29[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]29[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]2[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]2[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]2[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]30[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]30[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]30[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]31[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]31[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]31[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]32[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]32[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]32[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]33[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]33[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]33[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]34[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]34[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]34[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]35[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]35[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]35[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]36[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]36[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]36[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]37[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]37[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]37[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]38[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]38[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]38[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]39[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]39[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]39[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]3[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]3[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]3[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]40[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]40[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]40[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]41[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]41[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]41[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]42[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]42[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]42[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]43[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]43[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]43[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]44[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]44[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]44[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]45[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]45[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]45[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]46[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]46[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]46[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]47[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]47[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]47[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]48[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]48[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]48[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]49[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]49[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]49[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]4[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]4[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]4[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]50[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]50[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]50[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]51[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]51[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]51[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]52[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]52[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]52[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]53[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]53[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]53[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]54[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]54[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]54[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]55[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]55[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]55[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]56[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]56[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]56[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]57[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]57[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]57[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]58[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]58[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]58[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]59[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]59[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]59[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]60[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]60[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]60[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]61[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]61[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]61[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]6[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]6[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]6[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]8[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]8[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]8[.]mlp[.]experts[.][0-9]+[.]up_proj$",
+ "re:^model[.]layers[.]9[.]mlp[.]experts[.][0-9]+[.]down_proj$",
+ "re:^model[.]layers[.]9[.]mlp[.]experts[.][0-9]+[.]gate_proj$",
+ "re:^model[.]layers[.]9[.]mlp[.]experts[.][0-9]+[.]up_proj$",
638
+ "re:^model[.]layers[.][0-9]+[.]mlp[.]experts[.][0-9]+[.](gate|up|down)_proj$"
+ ]
+ }
+ },
+ "ignore": [
+ "lm_head",
+ "model.embed_tokens",
+ "model.layers.0.mlp.gate",
+ "model.layers.1.mlp.gate",
+ "model.layers.10.mlp.gate",
+ "model.layers.11.mlp.gate",
+ "model.layers.12.mlp.gate",
+ "model.layers.13.mlp.gate",
+ "model.layers.14.mlp.gate",
+ "model.layers.15.mlp.gate",
+ "model.layers.16.mlp.gate",
+ "model.layers.17.mlp.gate",
+ "model.layers.18.mlp.gate",
+ "model.layers.19.mlp.gate",
+ "model.layers.2.mlp.gate",
+ "model.layers.20.mlp.gate",
+ "model.layers.21.mlp.gate",
+ "model.layers.22.mlp.gate",
+ "model.layers.23.mlp.gate",
+ "model.layers.24.mlp.gate",
+ "model.layers.25.mlp.gate",
+ "model.layers.26.mlp.gate",
+ "model.layers.27.mlp.gate",
+ "model.layers.28.mlp.gate",
+ "model.layers.29.mlp.gate",
+ "model.layers.3.mlp.gate",
+ "model.layers.30.mlp.gate",
+ "model.layers.31.mlp.gate",
+ "model.layers.32.mlp.gate",
+ "model.layers.33.mlp.gate",
+ "model.layers.34.mlp.gate",
+ "model.layers.35.mlp.gate",
+ "model.layers.36.mlp.gate",
+ "model.layers.37.mlp.gate",
+ "model.layers.38.mlp.gate",
+ "model.layers.39.mlp.gate",
+ "model.layers.4.mlp.gate",
+ "model.layers.40.mlp.gate",
+ "model.layers.41.mlp.gate",
+ "model.layers.42.mlp.gate",
+ "model.layers.43.mlp.gate",
+ "model.layers.44.mlp.gate",
+ "model.layers.45.mlp.gate",
+ "model.layers.46.mlp.gate",
+ "model.layers.47.mlp.gate",
+ "model.layers.48.mlp.gate",
+ "model.layers.49.mlp.gate",
+ "model.layers.5.mlp.gate",
+ "model.layers.50.mlp.gate",
+ "model.layers.51.mlp.gate",
+ "model.layers.52.mlp.gate",
+ "model.layers.53.mlp.gate",
+ "model.layers.54.mlp.gate",
+ "model.layers.55.mlp.gate",
+ "model.layers.56.mlp.gate",
+ "model.layers.57.mlp.gate",
+ "model.layers.58.mlp.gate",
+ "model.layers.59.mlp.gate",
+ "model.layers.6.mlp.gate",
+ "model.layers.60.mlp.gate",
+ "model.layers.61.mlp.gate",
+ "model.layers.7.mlp.gate",
+ "model.layers.8.mlp.gate",
+ "model.layers.9.mlp.gate"
+ ],
+ "quantization_status": "compressed"
+ },
+ "rms_norm_eps": 1e-06,
+ "rope_theta": 5000000,
+ "rotary_dim": 64,
+ "scoring_func": "sigmoid",
+ "shared_intermediate_size": 0,
+ "tie_word_embeddings": false,
+ "transformers_version": "4.46.1",
+ "use_cache": true,
+ "use_mtp": true,
+ "use_qk_norm": true,
+ "use_routing_bias": true,
+ "vocab_size": 200064
+ }
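The `targets` entries above are `re:`-prefixed regular expressions matched against module paths, while `ignore` entries are exact names (routers and embeddings stay unquantized). A minimal sketch of that matching logic, not vLLM's actual loader code; the `"nvfp4"` scheme string attached to the match is an assumption for illustration:

```python
import re

def resolve_scheme(module_name, targets, ignore):
    # "ignore" entries are exact module names that skip quantization.
    if module_name in ignore:
        return None
    # "targets" entries are "re:" + regex, matched against the full path.
    for pattern in targets:
        if pattern.startswith("re:") and re.fullmatch(pattern[3:], module_name):
            return "nvfp4"  # assumed scheme for this target group
    return None

targets = ['re:^model[.]layers[.]44[.]self_attn[.]k_proj$']
ignore = ['lm_head', 'model.layers.44.mlp.gate']
print(resolve_scheme('model.layers.44.self_attn.k_proj', targets, ignore))  # nvfp4
print(resolve_scheme('model.layers.44.mlp.gate', targets, ignore))          # None
```

The `[.]` character classes in the config are just escaped literal dots, so `re.fullmatch` applies them directly.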
configuration_minimax_m2.py ADDED
@@ -0,0 +1,200 @@
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
+ # This file was automatically generated from src/transformers/models/minimax_m2/modular_minimax_m2.py.
+ # Do NOT edit this file manually as any edits will be overwritten by the generation of
+ # the file from the modular. If any change should be done, please apply the change to the
+ # modular_minimax_m2.py file directly. One of our CI enforces this.
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
+ # coding=utf-8
+ # Copyright 2025 the HuggingFace Team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+
+ from transformers.configuration_utils import PretrainedConfig
+
+
+ class MiniMaxM2Config(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`MiniMaxM2Model`]. It is used to instantiate an
+     MiniMaxM2 model according to the specified arguments, defining the model architecture. Instantiating a configuration
+     with the defaults will yield a similar configuration to that of the MiniMaxM2-7B-v0.1 or MiniMaxM2-7B-Instruct-v0.1.
+
+     [minimax_m2ai/MiniMaxM2-8x7B](https://huggingface.co/minimax_m2ai/MiniMaxM2-8x7B)
+     [minimax_m2ai/MiniMaxM2-7B-Instruct-v0.1](https://huggingface.co/minimax_m2ai/MiniMaxM2-7B-Instruct-v0.1)
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 32000):
+             Vocabulary size of the MiniMaxM2 model. Defines the number of different tokens that can be represented by the
+             `inputs_ids` passed when calling [`MiniMaxM2Model`]
+         hidden_size (`int`, *optional*, defaults to 4096):
+             Dimension of the hidden representations.
+         intermediate_size (`int`, *optional*, defaults to 14336):
+             Dimension of the MLP representations.
+         num_hidden_layers (`int`, *optional*, defaults to 32):
+             Number of hidden layers in the Transformer encoder.
+         num_attention_heads (`int`, *optional*, defaults to 32):
+             Number of attention heads for each attention layer in the Transformer encoder.
+         num_key_value_heads (`int`, *optional*, defaults to 8):
+             This is the number of key_value heads that should be used to implement Grouped Query Attention. If
+             `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
+             `num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When
+             converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
+             by meanpooling all the original heads within that group. For more details, check out [this
+             paper](https://huggingface.co/papers/2305.13245). If it is not specified, will default to `8`.
+         head_dim (`int`, *optional*, defaults to `hidden_size // num_attention_heads`):
+             The attention head dimension.
+         hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
+             The non-linear activation function (function or string) in the decoder.
+         max_position_embeddings (`int`, *optional*, defaults to `4096*32`):
+             The maximum sequence length that this model might ever be used with. MiniMaxM2's sliding window attention
+             allows sequence of up to 4096*32 tokens.
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         rms_norm_eps (`float`, *optional*, defaults to 1e-05):
+             The epsilon used by the rms normalization layers.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/values attentions (not used by all models). Only
+             relevant if `config.is_decoder=True`.
+         pad_token_id (`int`, *optional*):
+             The id of the padding token.
+         bos_token_id (`int`, *optional*, defaults to 1):
+             The id of the "beginning-of-sequence" token.
+         eos_token_id (`int`, *optional*, defaults to 2):
+             The id of the "end-of-sequence" token.
+         tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+             Whether the model's input and output word embeddings should be tied.
+         rope_theta (`float`, *optional*, defaults to 1000000.0):
+             The base period of the RoPE embeddings.
+         sliding_window (`int`, *optional*):
+             Sliding window attention window size. If not specified, will default to `4096`.
+         attention_dropout (`float`, *optional*, defaults to 0.0):
+             The dropout ratio for the attention probabilities.
+         num_experts_per_tok (`int`, *optional*, defaults to 2):
+             The number of experts to route per-token, can be also interpreted as the `top-k` routing
+             parameter
+         num_local_experts (`int`, *optional*, defaults to 8):
+             Number of experts per Sparse MLP layer.
+         output_router_logits (`bool`, *optional*, defaults to `False`):
+             Whether or not the router logits should be returned by the model. Enabling this will also
+             allow the model to output the auxiliary loss. See [here]() for more details
+         router_aux_loss_coef (`float`, *optional*, defaults to 0.001):
+             The aux loss factor for the total loss.
+         router_jitter_noise (`float`, *optional*, defaults to 0.0):
+             Amount of noise to add to the router.
+
+     ```python
+     >>> from transformers import MiniMaxM2Model, MiniMaxM2Config
+
+     >>> # Initializing a MiniMaxM2 7B style configuration
+     >>> configuration = MiniMaxM2Config()
+
+     >>> # Initializing a model from the MiniMaxM2 7B style configuration
+     >>> model = MiniMaxM2Model(configuration)
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```"""
+
+     model_type = "minimax_m2"
+     keys_to_ignore_at_inference = ["past_key_values"]
+     base_model_tp_plan = {
+         "layers.*.self_attn.q_proj": "colwise",
+         "layers.*.self_attn.k_proj": "colwise",
+         "layers.*.self_attn.v_proj": "colwise",
+         "layers.*.self_attn.o_proj": "rowwise",
+         "layers.*.block_sparse_moe.gate": "colwise_rep",  # we need to replicate here to correctly route experts
+         "layers.*.block_sparse_moe.experts.*.w1": "colwise",
+         "layers.*.block_sparse_moe.experts.*.w2": "rowwise",
+         "layers.*.block_sparse_moe.experts.*.w3": "colwise",
+     }
+     base_model_pp_plan = {
+         "embed_tokens": (["input_ids"], ["inputs_embeds"]),
+         "layers": (["hidden_states", "attention_mask"], ["hidden_states"]),
+         "norm": (["hidden_states"], ["hidden_states"]),
+     }
+
+     def __init__(
+         self,
+         vocab_size=32000,
+         hidden_size=4096,
+         intermediate_size=14336,
+         num_hidden_layers=32,
+         num_attention_heads=32,
+         num_key_value_heads=8,
+         head_dim=None,
+         hidden_act="silu",
+         max_position_embeddings=4096 * 32,
+         initializer_range=0.02,
+         rms_norm_eps=1e-5,
+         use_cache=True,
+         pad_token_id=None,
+         bos_token_id=1,
+         eos_token_id=2,
+         tie_word_embeddings=False,
+         rope_theta=1e6,
+         sliding_window=None,
+         attention_dropout=0.0,
+         num_experts_per_tok=2,
+         num_local_experts=8,
+         output_router_logits=False,
+         router_aux_loss_coef=0.001,
+         router_jitter_noise=0.0,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.max_position_embeddings = max_position_embeddings
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+         self.sliding_window = sliding_window
+
+         # for backward compatibility
+         if num_key_value_heads is None:
+             num_key_value_heads = num_attention_heads
+
+         self.num_key_value_heads = num_key_value_heads
+         self.hidden_act = hidden_act
+         self.initializer_range = initializer_range
+         self.rms_norm_eps = rms_norm_eps
+         self.use_cache = use_cache
+         self.rope_theta = rope_theta
+         self.attention_dropout = attention_dropout
+         self.head_dim = head_dim
+
+         self.num_experts_per_tok = num_experts_per_tok
+         self.num_local_experts = num_local_experts
+         self.output_router_logits = output_router_logits
+         self.router_aux_loss_coef = router_aux_loss_coef
+         self.router_jitter_noise = router_jitter_noise
+
+         self.use_qk_norm = kwargs.pop("use_qk_norm", False)
+         self.rotary_dim = kwargs.pop("rotary_dim", self.head_dim)
+         self.partial_rotary_factor = kwargs.pop("partial_rotary_factor", 1)
+         if self.head_dim is not None:
+             self.partial_rotary_factor = self.rotary_dim / self.head_dim
+
+         super().__init__(
+             pad_token_id=pad_token_id,
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
+
+
+ __all__ = ["MiniMaxM2Config"]
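The `partial_rotary_factor` bookkeeping at the end of `__init__` reduces to a one-line ratio. With this checkpoint's `"rotary_dim": 64` from config.json and an assumed `head_dim` of 128 (the head dimension is not stated in this diff), only half of each head would receive rotary embeddings:

```python
# Mirrors MiniMaxM2Config.__init__: partial_rotary_factor = rotary_dim / head_dim
rotary_dim = 64    # from config.json above
head_dim = 128     # assumed value for illustration only
partial_rotary_factor = rotary_dim / head_dim
print(partial_rotary_factor)  # 0.5
```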
generation_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "bos_token_id": 200019,
+ "do_sample": true,
+ "eos_token_id": 200020,
+ "temperature": 1.0,
+ "top_p": 0.95,
+ "top_k": 40,
+ "transformers_version": "4.46.1"
+ }
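The sampling defaults in this file carry over directly to a serving stack. A stdlib-only sketch of pulling out the sampling keys (key names are from the JSON above; forwarding them to, e.g., vLLM's sampling parameters is an assumed mapping, and the bos/eos ids come from the tokenizer rather than from sampling settings):

```python
import json

# Parse the generation config shown above (token ids omitted here for brevity).
gen_cfg = json.loads(
    '{"do_sample": true, "temperature": 1.0, "top_p": 0.95, "top_k": 40}'
)
# Keys a sampler would consume; everything else is model/tokenizer metadata.
sampling_kwargs = {k: gen_cfg[k] for k in ("temperature", "top_p", "top_k")}
print(sampling_kwargs)
```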
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
mixed_native_manifest.json ADDED
@@ -0,0 +1,23 @@
+ {
+ "source_model": "/hfcache/hub/models--MiniMaxAI--MiniMax-M2.7/snapshots/d494266a4affc0d2995ba1fa35c8481cbd84294b",
+ "source_recipe": "/work/artifacts/layer_config_prune_t320.json",
+ "format_histogram": {
+ "head_passthrough/BF16": 3,
+ "linear/FP8_SOURCE": 2204,
+ "linear/BF16_router_shrunk": 62,
+ "linear/PRUNED": 14880,
+ "layer_passthrough/BF16": 248,
+ "layer_buffer_shrunk/BF16": 62,
+ "linear/NVFP4": 30780
+ },
+ "n_assignment_entries": 32984,
+ "ignore": [
+ "lm_head"
+ ],
+ "prune": {
+ "n_layers_pruned": 62,
+ "n_experts_orig_total": 15872,
+ "n_experts_kept_total": 10912,
+ "manifest_file": "prune_manifest.json"
+ }
+ }
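The `prune` block above quantifies the expert pruning: the kept-to-original ratio works out to exactly 11 of every 16 experts on average.

```python
# Derived from the "prune" block in mixed_native_manifest.json above.
n_experts_orig_total = 15872
n_experts_kept_total = 10912
keep_ratio = n_experts_kept_total / n_experts_orig_total
print(keep_ratio)  # 0.6875, i.e. 11/16 of the experts survive pruning
```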
model-00001-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a1752e5fc0bbb9039e2b68f95e3cdf6635b09436c0c992b34b2add1212f91903
+ size 5368595808
model-00002-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4627caae229f91a784404cea5f208207e2716589fb43a6f73e110ce18d44c358
+ size 5367391056
model-00003-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:10cbb451d5436387ac69d5d632c9ec6cf0481014850793e0ee5ab477e3608999
+ size 5365284976
model-00004-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:de3cfc91b0f8a18e306dc7d19b7c43d1d49e4dd2cd41fe15b03517de33c7c8e9
+ size 5367878440
model-00005-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa18b2c61b863d5e10cd442f6ecd56fff257ae1bffad34c4c03fa1c0e0455068
+ size 5363845816
model-00006-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:47dfed04bc341c05542cbe0e6d37c23379499eabf30dee8592e9874c20c2f1b2
+ size 5368863864
model-00007-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:376ac49cf00ab81688edbe13dd47826c5787ecbca8da54843c35b2732a6e8204
+ size 5369471952
model-00008-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:318d820c0b276068f90e3f806fe59e3c3172f4e8bff6f77b5118515eeb674fad
+ size 5369471344
model-00009-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6761a778d45c623075ae9931007ed4d33a6fc05aeec11cbf642d69e700f8a2f3
+ size 5369738888
model-00010-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:20f89480389d93def5c4a23560da6c40db0f165ea9c8beeaa6a1f7b20a720d83
+ size 5369472392
model-00011-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ca3c2387d19852ce9c11f42b4cbcfe511763ec7892a1638e8068d736224ea094
+ size 5369471808
model-00012-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:51d7dc536f60978039a02fe9f4e997278c2ca06554fe1caf455727bffc6ec3ac
+ size 5368676824
model-00013-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7300c93ab277c426fd42a8b7f81a9b110be12314da0b7ee6285a01a2875a0827
+ size 5368942944
model-00014-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab54ddbd2733cf62b71cd5aa49bf69f73a21fce2a5ee5490c851b4702e9e21fa
+ size 5367882824
model-00015-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d915392faa3e3b509c626d6c7329a9cc2734edac2e9a3cf145e4b86c2a729410
+ size 5368677728
model-00016-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:28e767c0959449451639165964420651091901f2feba9be658ab8da2fe2772ef
+ size 5367882288
model-00017-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1d555a575b7bd3abf44d8067b32b8020103afdead0a7c8c2f5b1c1cdcf9cdb7a
+ size 5368546424
model-00018-of-00018.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c6f4cf130124a32a56e9307cb13f0f831fe8b49d8cc6c2d833d3393f8c72433
+ size 4417483752
model.safetensors.index.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:384eaf9ab722697a63b258c37aeb43c1f7ae3f4bb640fd4c2ad241f2e1e562a5
+ size 13605125
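Each weight shard above is stored in the repo as a git-LFS pointer: three `key value` lines in place of the real bytes. A minimal parser (illustrative only, not part of this repo) shows how the `oid` and `size` fields can be recovered, e.g. to sum shard sizes before downloading:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a git-LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# Pointer for model-00018-of-00018.safetensors, copied from the diff above
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:0c6f4cf130124a32a56e9307cb13f0f831fe8b49d8cc6c2d833d3393f8c72433
size 4417483752
"""
info = parse_lfs_pointer(pointer)
```

Summing the `size` fields over all 18 shards plus the index yields the ~90 GB on-disk footprint quoted in the header table.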
modeling_minimax_m2.py ADDED
@@ -0,0 +1,706 @@
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
+ # This file was automatically generated from src/transformers/models/minimax_m2/modular_minimax_m2.py.
+ # Do NOT edit this file manually as any edits will be overwritten by the generation of
+ # the file from the modular. If any change should be done, please apply the change to the
+ # modular_minimax_m2.py file directly. One of our CI enforces this.
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
+ # coding=utf-8
+ # Copyright 2025 the HuggingFace Team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+
+ from collections.abc import Callable
+ from typing import Optional, Union, Unpack
+
+ import torch
+ from torch import nn
+
+ from transformers.activations import ACT2FN
+ from transformers.cache_utils import Cache, DynamicCache
+ from transformers.generation import GenerationMixin
+ from transformers.integrations import use_kernel_forward_from_hub
+ from transformers.masking_utils import create_causal_mask, create_sliding_window_causal_mask
+ from transformers.modeling_flash_attention_utils import FlashAttentionKwargs
+ from transformers.modeling_layers import (
+     GenericForQuestionAnswering,
+     GenericForSequenceClassification,
+     GenericForTokenClassification,
+     GradientCheckpointingLayer,
+ )
+ from transformers.modeling_outputs import MoeCausalLMOutputWithPast, MoeModelOutputWithPast
+ from transformers.modeling_rope_utils import ROPE_INIT_FUNCTIONS, dynamic_rope_update
+ from transformers.modeling_utils import ALL_ATTENTION_FUNCTIONS, PreTrainedModel
+ from transformers.utils import TransformersKwargs, auto_docstring, can_return_tuple
+ from transformers.utils.deprecation import deprecate_kwarg
+ from transformers.utils.generic import OutputRecorder, check_model_inputs
+ from .configuration_minimax_m2 import MiniMaxM2Config
+
+
+ class MiniMaxM2MLP(nn.Module):
+     def __init__(self, config: MiniMaxM2Config):
+         super().__init__()
+         self.ffn_dim = config.intermediate_size
+         self.hidden_dim = config.hidden_size
+
+         self.w1 = nn.Linear(self.hidden_dim, self.ffn_dim, bias=False)
+         self.w2 = nn.Linear(self.ffn_dim, self.hidden_dim, bias=False)
+         self.w3 = nn.Linear(self.hidden_dim, self.ffn_dim, bias=False)
+
+         self.act_fn = ACT2FN[config.hidden_act]
+
+     def forward(self, hidden_states):
+         current_hidden_states = self.act_fn(self.w1(hidden_states)) * self.w3(hidden_states)
+         current_hidden_states = self.w2(current_hidden_states)
+         return current_hidden_states
+
+
+ class MiniMaxM2Experts(nn.ModuleList):
+     """
+     ModuleList of experts.
+     """
+
+     def __init__(self, config: MiniMaxM2Config):
+         super().__init__()
+         self.top_k = config.num_experts_per_tok
+         self.num_experts = config.num_local_experts
+         for _ in range(self.num_experts):
+             self.append(MiniMaxM2MLP(config))
+
+     def forward(
+         self, hidden_states: torch.Tensor, top_k_index: torch.Tensor, top_k_weights: torch.Tensor
+     ) -> torch.Tensor:
+         """
+         Args:
+             hidden_states: (batch_size * sequence_length, hidden_dim)
+             selected_experts: (batch_size * sequence_length, top_k)
+             routing_weights: (batch_size * sequence_length, top_k)
+         Returns:
+             (batch_size * sequence_length, hidden_dim)
+         """
+         final_hidden_states = torch.zeros_like(hidden_states)
+         expert_mask = torch.nn.functional.one_hot(top_k_index, num_classes=self.num_experts).permute(2, 1, 0)
+
+         expert_hit = torch.greater(expert_mask.sum(dim=(-1, -2)), 0).nonzero()
+         for expert_idx in expert_hit:
+             idx, top_x = torch.where(expert_mask[expert_idx].squeeze(0))
+             current_state = hidden_states[None, top_x].reshape(-1, hidden_states.shape[-1])
+             current_hidden_states = self[expert_idx](current_state) * top_k_weights[top_x, idx, None]
+             final_hidden_states.index_add_(0, top_x, current_hidden_states.to(hidden_states.dtype))
+         return final_hidden_states
+
+
+ class MiniMaxM2SparseMoeBlock(nn.Module):
+     def __init__(self, config):
+         super().__init__()
+         self.top_k = config.num_experts_per_tok
+         self.jitter_noise = config.router_jitter_noise
+         self.gate = nn.Linear(config.hidden_size, config.num_local_experts, bias=False)
+         self.experts = MiniMaxM2Experts(config)
+         self.register_buffer("e_score_correction_bias", torch.zeros(config.num_local_experts))
+
+     def route_tokens_to_experts(self, router_logits):
+         routing_weights = torch.nn.functional.sigmoid(router_logits.float())
+         scores_for_choice = routing_weights + self.e_score_correction_bias
+         _, top_k_index = torch.topk(scores_for_choice, self.top_k, dim=-1, sorted=False)
+         top_k_weights = routing_weights.gather(1, top_k_index)
+         top_k_weights /= top_k_weights.sum(dim=-1, keepdim=True)
+         return top_k_index, top_k_weights.to(router_logits.dtype)
+
+     def forward(self, hidden_states: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
+         batch_size, sequence_length, hidden_dim = hidden_states.shape
+         if self.training and self.jitter_noise > 0:
+             hidden_states *= torch.empty_like(hidden_states).uniform_(1.0 - self.jitter_noise, 1.0 + self.jitter_noise)
+         hidden_states = hidden_states.view(-1, hidden_states.shape[-1])
+         router_logits = self.gate(hidden_states)
+         top_k_index, top_k_weights = self.route_tokens_to_experts(router_logits)
+         hidden_states = self.experts(hidden_states, top_k_index, top_k_weights.to(hidden_states.dtype))
+         hidden_states = hidden_states.reshape(batch_size, sequence_length, hidden_dim)
+         return hidden_states, router_logits
+
+
+ @use_kernel_forward_from_hub("RMSNorm")
+ class MiniMaxM2RMSNorm(nn.Module):
+     def __init__(self, hidden_size, eps=1e-6):
+         """
+         MiniMaxM2RMSNorm is equivalent to T5LayerNorm
+         """
+         super().__init__()
+         self.weight = nn.Parameter(torch.ones(hidden_size))
+         self.variance_epsilon = eps
+
+     def forward(self, hidden_states):
+         input_dtype = hidden_states.dtype
+         hidden_states = hidden_states.to(torch.float32)
+         variance = hidden_states.pow(2).mean(-1, keepdim=True)
+         hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
+         return self.weight * hidden_states.to(input_dtype)
+
+     def extra_repr(self):
+         return f"{tuple(self.weight.shape)}, eps={self.variance_epsilon}"
+
+
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
+     """
+     This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
+     num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
+     """
+     batch, num_key_value_heads, slen, head_dim = hidden_states.shape
+     if n_rep == 1:
+         return hidden_states
+     hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
+     return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
+
+
+ def eager_attention_forward(
+     module: nn.Module,
+     query: torch.Tensor,
+     key: torch.Tensor,
+     value: torch.Tensor,
+     attention_mask: Optional[torch.Tensor],
+     scaling: float,
+     dropout: float = 0.0,
+     **kwargs: Unpack[TransformersKwargs],
+ ):
+     key_states = repeat_kv(key, module.num_key_value_groups)
+     value_states = repeat_kv(value, module.num_key_value_groups)
+
+     attn_weights = torch.matmul(query, key_states.transpose(2, 3)) * scaling
+     if attention_mask is not None:
+         causal_mask = attention_mask[:, :, :, : key_states.shape[-2]]
+         attn_weights = attn_weights + causal_mask
+
+     attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
+     attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
+     attn_output = torch.matmul(attn_weights, value_states)
+     attn_output = attn_output.transpose(1, 2).contiguous()
+
+     return attn_output, attn_weights
+
+
+ def rotate_half(x):
+     """Rotates half the hidden dims of the input."""
+     x1 = x[..., : x.shape[-1] // 2]
+     x2 = x[..., x.shape[-1] // 2 :]
+     return torch.cat((-x2, x1), dim=-1)
+
+
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
+     """Applies Rotary Position Embedding to the query and key tensors.
+
+     Args:
+         q (`torch.Tensor`): The query tensor.
+         k (`torch.Tensor`): The key tensor.
+         cos (`torch.Tensor`): The cosine part of the rotary embedding.
+         sin (`torch.Tensor`): The sine part of the rotary embedding.
+         position_ids (`torch.Tensor`, *optional*):
+             Deprecated and unused.
+         unsqueeze_dim (`int`, *optional*, defaults to 1):
+             The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
+             sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
+             that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
+             k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
+             cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
+             the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
+     Returns:
+         `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
+     """
+     cos = cos.unsqueeze(unsqueeze_dim)
+     sin = sin.unsqueeze(unsqueeze_dim)
+
+     # Keep half or full tensor for later concatenation
+     rotary_dim = cos.shape[-1]
+     q_rot, q_pass = q[..., :rotary_dim], q[..., rotary_dim:]
+     k_rot, k_pass = k[..., :rotary_dim], k[..., rotary_dim:]
+
+     # Apply rotary embeddings on the first half or full tensor
+     q_embed = (q_rot * cos) + (rotate_half(q_rot) * sin)
+     k_embed = (k_rot * cos) + (rotate_half(k_rot) * sin)
+
+     # Concatenate back to full shape
+     q_embed = torch.cat([q_embed, q_pass], dim=-1)
+     k_embed = torch.cat([k_embed, k_pass], dim=-1)
+     return q_embed, k_embed
+
+
+ class MiniMaxM2Attention(nn.Module):
+     """Multi-headed attention from 'Attention Is All You Need' paper"""
+
+     def __init__(self, config: MiniMaxM2Config, layer_idx: int):
+         super().__init__()
+         self.config = config
+         self.layer_idx = layer_idx
+         self.head_dim = getattr(config, "head_dim", None) or config.hidden_size // config.num_attention_heads
+         self.num_key_value_groups = config.num_attention_heads // config.num_key_value_heads
+         self.scaling = self.head_dim**-0.5
+         self.attention_dropout = config.attention_dropout
+         self.is_causal = True
+         self.q_proj = nn.Linear(config.hidden_size, config.num_attention_heads * self.head_dim, bias=False)
+         self.k_proj = nn.Linear(config.hidden_size, config.num_key_value_heads * self.head_dim, bias=False)
+         self.v_proj = nn.Linear(config.hidden_size, config.num_key_value_heads * self.head_dim, bias=False)
+         self.o_proj = nn.Linear(config.num_attention_heads * self.head_dim, config.hidden_size, bias=False)
+
+         self.use_qk_norm = config.use_qk_norm
+         if self.use_qk_norm:
+             self.q_norm = MiniMaxM2RMSNorm(self.head_dim * config.num_attention_heads, eps=config.rms_norm_eps)
+             self.k_norm = MiniMaxM2RMSNorm(self.head_dim * config.num_key_value_heads, eps=config.rms_norm_eps)
+
+     @deprecate_kwarg("past_key_value", new_name="past_key_values", version="4.58")
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         position_embeddings: tuple[torch.Tensor, torch.Tensor],
+         attention_mask: Optional[torch.Tensor],
+         past_key_values: Optional[Cache] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+         **kwargs: Unpack[FlashAttentionKwargs],
+     ) -> tuple[torch.Tensor, Optional[torch.Tensor]]:
+         input_shape = hidden_states.shape[:-1]
+         hidden_shape = (*input_shape, -1, self.head_dim)
+
+         query_states = self.q_proj(hidden_states)
+         key_states = self.k_proj(hidden_states)
+         value_states = self.v_proj(hidden_states)
+
+         if self.use_qk_norm:  # main diff from Llama
+             query_states = self.q_norm(query_states)
+             key_states = self.k_norm(key_states)
+
+         key_states = key_states.view(hidden_shape)
+         query_states = query_states.view(hidden_shape)
+         value_states = value_states.view(hidden_shape)
+
+         query_states = query_states.transpose(1, 2)
+         key_states = key_states.transpose(1, 2)
+         value_states = value_states.transpose(1, 2)
+
+         cos, sin = position_embeddings
+         query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
+
+         if past_key_values is not None:
+             # sin and cos are specific to RoPE models; position_ids needed for the static cache
+             cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
+             key_states, value_states = past_key_values.update(key_states, value_states, self.layer_idx, cache_kwargs)
+
+         attention_interface: Callable = eager_attention_forward
+         if self.config._attn_implementation != "eager":
+             attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
+
+         attn_output, attn_weights = attention_interface(
+             self,
+             query_states,
+             key_states,
+             value_states,
+             attention_mask,
+             dropout=0.0 if not self.training else self.attention_dropout,
+             scaling=self.scaling,
+             **kwargs,
+         )
+
+         attn_output = attn_output.reshape(*input_shape, -1).contiguous()
+         attn_output = self.o_proj(attn_output)
+         return attn_output, attn_weights
+
+
+ class MiniMaxM2DecoderLayer(GradientCheckpointingLayer):
+     def __init__(self, config: MiniMaxM2Config, layer_idx: int):
+         super().__init__()
+         self.hidden_size = config.hidden_size
+
+         self.self_attn = MiniMaxM2Attention(config, layer_idx)
+
+         self.block_sparse_moe = MiniMaxM2SparseMoeBlock(config)
+         self.input_layernorm = MiniMaxM2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+         self.post_attention_layernorm = MiniMaxM2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+
+     @deprecate_kwarg("past_key_value", new_name="past_key_values", version="4.58")
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         position_embeddings: tuple[torch.Tensor, torch.Tensor],
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[Cache] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+         **kwargs: Unpack[TransformersKwargs],
+     ) -> torch.FloatTensor:
+         residual = hidden_states
+
+         hidden_states = self.input_layernorm(hidden_states)
+
+         # Self Attention
+         hidden_states, _ = self.self_attn(
+             hidden_states=hidden_states,
+             position_embeddings=position_embeddings,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             cache_position=cache_position,
+             **kwargs,
+         )
+         hidden_states = residual + hidden_states
+
+         # Fully Connected
+         residual = hidden_states
+         hidden_states = self.post_attention_layernorm(hidden_states)
+         hidden_states, _ = self.block_sparse_moe(hidden_states)
+         hidden_states = residual + hidden_states
+
+         return hidden_states
+
+
+ class MiniMaxM2RotaryEmbedding(nn.Module):
+     inv_freq: torch.Tensor  # fix linting for `register_buffer`
+
+     def __init__(self, config: MiniMaxM2Config, device=None):
+         super().__init__()
+         # BC: "rope_type" was originally "type"
+         if hasattr(config, "rope_scaling") and isinstance(config.rope_scaling, dict):
+             self.rope_type = config.rope_scaling.get("rope_type", config.rope_scaling.get("type"))
+         else:
+             self.rope_type = "default"
+         self.max_seq_len_cached = config.max_position_embeddings
+         self.original_max_seq_len = config.max_position_embeddings
+
+         self.config = config
+         self.rope_init_fn = ROPE_INIT_FUNCTIONS[self.rope_type]
+
+         inv_freq, self.attention_scaling = self.rope_init_fn(self.config, device)
+         self.register_buffer("inv_freq", inv_freq, persistent=False)
+         self.original_inv_freq = self.inv_freq
+
+     @torch.no_grad()
+     @dynamic_rope_update  # power user: used with advanced RoPE types (e.g. dynamic rope)
+     def forward(self, x, position_ids):
+         inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1).to(x.device)
+         position_ids_expanded = position_ids[:, None, :].float()
+
+         device_type = x.device.type if isinstance(x.device.type, str) and x.device.type != "mps" else "cpu"
+         with torch.autocast(device_type=device_type, enabled=False):  # Force float32
+             freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
+             emb = torch.cat((freqs, freqs), dim=-1)
+             cos = emb.cos() * self.attention_scaling
+             sin = emb.sin() * self.attention_scaling
+
+         return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
+
+
+ @auto_docstring
+ class MiniMaxM2PreTrainedModel(PreTrainedModel):
+     config: MiniMaxM2Config
+     base_model_prefix = "model"
+     supports_gradient_checkpointing = True
+     _no_split_modules = ["MiniMaxM2DecoderLayer"]
+     _skip_keys_device_placement = ["past_key_values"]
+     _supports_flash_attn = True
+     _supports_sdpa = True
+     _supports_flex_attn = True
+     _can_compile_fullgraph = False  # MoE models don't work with torch.compile (`torch.where(condition)` not supported)
+     _supports_attention_backend = True
+     _can_record_outputs = {
+         "router_logits": OutputRecorder(MiniMaxM2SparseMoeBlock, index=1),
+         "hidden_states": MiniMaxM2DecoderLayer,
+         "attentions": MiniMaxM2Attention,
+     }
+
+
+ @auto_docstring
+ class MiniMaxM2Model(MiniMaxM2PreTrainedModel):
+     def __init__(self, config: MiniMaxM2Config):
+         super().__init__(config)
+         self.padding_idx = config.pad_token_id
+         self.vocab_size = config.vocab_size
+
+         self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
+         self.layers = nn.ModuleList(
+             [MiniMaxM2DecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
+         )
+         self.norm = MiniMaxM2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+         self.rotary_emb = MiniMaxM2RotaryEmbedding(config=config)
+         self.gradient_checkpointing = False
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     @check_model_inputs
+     @auto_docstring
+     def forward(
+         self,
+         input_ids: Optional[torch.LongTensor] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[Cache] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         use_cache: Optional[bool] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+         **kwargs: Unpack[TransformersKwargs],
+     ) -> MoeModelOutputWithPast:
+         if (input_ids is None) ^ (inputs_embeds is not None):
+             raise ValueError("You must specify exactly one of input_ids or inputs_embeds")
+
+         if use_cache and past_key_values is None:
+             past_key_values = DynamicCache(config=self.config)
+
+         if inputs_embeds is None:
+             inputs_embeds = self.embed_tokens(input_ids)
+
+         if cache_position is None:
+             past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
+             cache_position = torch.arange(
+                 past_seen_tokens, past_seen_tokens + inputs_embeds.shape[1], device=inputs_embeds.device
+             )
+         if position_ids is None:
+             position_ids = cache_position.unsqueeze(0)
+
+         mask_function = create_causal_mask if self.config.sliding_window is None else create_sliding_window_causal_mask
+         causal_mask = mask_function(
+             config=self.config,
+             input_embeds=inputs_embeds,
+             attention_mask=attention_mask,
+             cache_position=cache_position,
+             past_key_values=past_key_values,
+             position_ids=position_ids,
+         )
+
+         hidden_states = inputs_embeds
+
+         # create position embeddings to be shared across the decoder layers
+         position_embeddings = self.rotary_emb(hidden_states, position_ids)
+
+         for decoder_layer in self.layers[: self.config.num_hidden_layers]:
+             hidden_states = decoder_layer(
+                 hidden_states,
+                 position_embeddings=position_embeddings,
+                 attention_mask=causal_mask,
+                 position_ids=position_ids,
+                 past_key_values=past_key_values,
+                 use_cache=use_cache,
+                 cache_position=cache_position,
+                 **kwargs,
+             )
+
+         hidden_states = self.norm(hidden_states)
+
+         return MoeModelOutputWithPast(  # only diff with Mistral is the output type, we need MoE
+             last_hidden_state=hidden_states,
+             past_key_values=past_key_values,
+         )
+
+
+ def load_balancing_loss_func(
+     gate_logits: Union[torch.Tensor, tuple[torch.Tensor], None],
+     num_experts: Optional[int] = None,
+     top_k=2,
+     attention_mask: Optional[torch.Tensor] = None,
+ ) -> Union[torch.Tensor, int]:
+     r"""
+     Computes auxiliary load balancing loss as in Switch Transformer - implemented in Pytorch.
+
+     See Switch Transformer (https://huggingface.co/papers/2101.03961) for more details. This function implements the loss
+     function presented in equations (4) - (6) of the paper. It aims at penalizing cases where the routing between
+     experts is too unbalanced.
+
+     Args:
+         gate_logits:
+             Logits from the `gate`, should be a tuple of model.config.num_hidden_layers tensors of
+             shape [batch_size X sequence_length, num_experts].
+         num_experts:
+             Number of experts
+         top_k:
+             The number of experts to route per-token, can be also interpreted as the `top-k` routing
+             parameter.
+         attention_mask (`torch.Tensor`, *optional*):
+             The attention_mask used in forward function
+             shape [batch_size X sequence_length] if not None.
+
+     Returns:
+         The auxiliary loss.
+     """
+     if gate_logits is None or not isinstance(gate_logits, tuple):
+         return 0
+
+     if isinstance(gate_logits, tuple):
+         compute_device = gate_logits[0].device
+         concatenated_gate_logits = torch.cat([layer_gate.to(compute_device) for layer_gate in gate_logits], dim=0)
+
+     routing_weights = torch.nn.functional.softmax(concatenated_gate_logits, dim=-1)
+
+     _, selected_experts = torch.topk(routing_weights, top_k, dim=-1)
+
+     expert_mask = torch.nn.functional.one_hot(selected_experts, num_experts)
+
+     if attention_mask is None:
+         # Compute the percentage of tokens routed to each experts
+         tokens_per_expert = torch.mean(expert_mask.float(), dim=0)
+
+         # Compute the average probability of routing to these experts
+         router_prob_per_expert = torch.mean(routing_weights, dim=0)
+     else:
+         batch_size, sequence_length = attention_mask.shape
+         num_hidden_layers = concatenated_gate_logits.shape[0] // (batch_size * sequence_length)
+
+         # Compute the mask that masks all padding tokens as 0 with the same shape of expert_mask
+         expert_attention_mask = (
+             attention_mask[None, :, :, None, None]
+             .expand((num_hidden_layers, batch_size, sequence_length, top_k, num_experts))
+             .reshape(-1, top_k, num_experts)
+             .to(compute_device)
+         )
+
+         # Compute the percentage of tokens routed to each experts
+         tokens_per_expert = torch.sum(expert_mask.float() * expert_attention_mask, dim=0) / torch.sum(
+             expert_attention_mask, dim=0
+         )
+
+         # Compute the mask that masks all padding tokens as 0 with the same shape of tokens_per_expert
+         router_per_expert_attention_mask = (
+             attention_mask[None, :, :, None]
+             .expand((num_hidden_layers, batch_size, sequence_length, num_experts))
+             .reshape(-1, num_experts)
+             .to(compute_device)
+         )
+
+         # Compute the average probability of routing to these experts
+         router_prob_per_expert = torch.sum(routing_weights * router_per_expert_attention_mask, dim=0) / torch.sum(
+             router_per_expert_attention_mask, dim=0
+         )
+
+     overall_loss = torch.sum(tokens_per_expert * router_prob_per_expert.unsqueeze(0))
+     return overall_loss * num_experts
+
+
+ @auto_docstring
+ class MiniMaxM2ForCausalLM(MiniMaxM2PreTrainedModel, GenerationMixin):
+     _tied_weights_keys = ["lm_head.weight"]
+     _tp_plan = {"lm_head": "colwise_rep"}
+     _pp_plan = {"lm_head": (["hidden_states"], ["logits"])}
+
+     def __init__(self, config):
+         super().__init__(config)
+         self.model = MiniMaxM2Model(config)
+         self.vocab_size = config.vocab_size
+         self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+         self.router_aux_loss_coef = config.router_aux_loss_coef
+         self.num_experts = config.num_local_experts
+         self.num_experts_per_tok = config.num_experts_per_tok
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     @can_return_tuple
+     @auto_docstring
602
+ def forward(
603
+ self,
604
+ input_ids: Optional[torch.LongTensor] = None,
605
+ attention_mask: Optional[torch.Tensor] = None,
606
+ position_ids: Optional[torch.LongTensor] = None,
607
+ past_key_values: Optional[Cache] = None,
608
+ inputs_embeds: Optional[torch.FloatTensor] = None,
609
+ labels: Optional[torch.LongTensor] = None,
610
+        use_cache: Optional[bool] = None,
+        output_router_logits: Optional[bool] = None,
+        cache_position: Optional[torch.LongTensor] = None,
+        logits_to_keep: Union[int, torch.Tensor] = 0,
+        **kwargs: Unpack[TransformersKwargs],
+    ) -> MoeCausalLMOutputWithPast:
+        r"""
+        labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+            Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
+            config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
+            (masked); the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
+
+        Example:
+
+        ```python
+        >>> from transformers import AutoTokenizer, MiniMaxM2ForCausalLM
+
+        >>> model = MiniMaxM2ForCausalLM.from_pretrained("MiniMaxAI/MiniMax-M2.7")
+        >>> tokenizer = AutoTokenizer.from_pretrained("MiniMaxAI/MiniMax-M2.7")
+
+        >>> prompt = "Hey, are you conscious? Can you talk to me?"
+        >>> inputs = tokenizer(prompt, return_tensors="pt")
+
+        >>> # Generate
+        >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
+        >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
+        "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
+        ```"""
+        output_router_logits = (
+            output_router_logits if output_router_logits is not None else self.config.output_router_logits
+        )
+
+        # decoder outputs consist of (dec_features, layer_state, dec_hidden, dec_attn)
+        outputs: MoeModelOutputWithPast = self.model(
+            input_ids=input_ids,
+            attention_mask=attention_mask,
+            position_ids=position_ids,
+            past_key_values=past_key_values,
+            inputs_embeds=inputs_embeds,
+            use_cache=use_cache,
+            output_router_logits=output_router_logits,
+            cache_position=cache_position,
+            **kwargs,
+        )
+
+        hidden_states = outputs.last_hidden_state
+        # Only compute necessary logits, and do not upcast them to float if we are not computing the loss
+        slice_indices = slice(-logits_to_keep, None) if isinstance(logits_to_keep, int) else logits_to_keep
+        logits = self.lm_head(hidden_states[:, slice_indices, :])
+
+        loss = None
+        if labels is not None:
+            loss = self.loss_function(logits, labels, self.vocab_size, **kwargs)
+
+        aux_loss = None
+        if output_router_logits:
+            aux_loss = load_balancing_loss_func(
+                outputs.router_logits,
+                self.num_experts,
+                self.num_experts_per_tok,
+                attention_mask,
+            )
+            if labels is not None:
+                loss += self.router_aux_loss_coef * aux_loss.to(loss.device)  # make sure aux_loss is on the same device as loss
+
+        return MoeCausalLMOutputWithPast(
+            loss=loss,
+            aux_loss=aux_loss,
+            logits=logits,
+            past_key_values=outputs.past_key_values,
+            hidden_states=outputs.hidden_states,
+            attentions=outputs.attentions,
+            router_logits=outputs.router_logits,
+        )
+
+
+class MiniMaxM2ForSequenceClassification(GenericForSequenceClassification, MiniMaxM2PreTrainedModel):
+    pass
+
+
+class MiniMaxM2ForTokenClassification(GenericForTokenClassification, MiniMaxM2PreTrainedModel):
+    pass
+
+
+class MiniMaxM2ForQuestionAnswering(GenericForQuestionAnswering, MiniMaxM2PreTrainedModel):
+    pass
+
+
+__all__ = [
+    "MiniMaxM2ForCausalLM",
+    "MiniMaxM2ForQuestionAnswering",
+    "MiniMaxM2Model",
+    "MiniMaxM2PreTrainedModel",
+    "MiniMaxM2ForSequenceClassification",
+    "MiniMaxM2ForTokenClassification",
+]
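In the forward pass above, the router auxiliary loss is folded into the total as `loss += self.router_aux_loss_coef * aux_loss`. As a hedged, dependency-free sketch of what a Switch-Transformer-style load-balancing term computes (the real `load_balancing_loss_func` in transformers operates on stacked router-logit tensors, supports top-k routing, and respects the attention mask; this toy version assumes top-1 routing over plain Python lists):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def load_balancing_loss(router_logits, num_experts):
    """Switch-style aux loss: num_experts * sum_e(f_e * P_e), where f_e is the
    fraction of tokens routed (top-1) to expert e and P_e is the mean router
    probability assigned to expert e. Equals 1.0 under perfectly balanced routing
    and grows toward num_experts as routing collapses onto one expert."""
    probs = [softmax(row) for row in router_logits]
    counts = [0] * num_experts
    for row in probs:
        counts[max(range(num_experts), key=lambda e: row[e])] += 1
    f = [c / len(probs) for c in counts]
    p = [sum(row[e] for row in probs) / len(probs) for e in range(num_experts)]
    return num_experts * sum(fe * pe for fe, pe in zip(f, p))

# Balanced routing: two tokens, each strongly preferring a different expert.
balanced = load_balancing_loss([[5.0, 0.0], [0.0, 5.0]], num_experts=2)  # ~1.0
# Collapsed routing: every token prefers expert 0.
collapsed = load_balancing_loss([[5.0, 0.0], [5.0, 0.0]], num_experts=2)  # ~2.0

# The aux term is then combined exactly as in the forward pass above:
router_aux_loss_coef = 0.001  # hypothetical coefficient
total_loss = 2.5 + router_aux_loss_coef * collapsed  # 2.5 stands in for the CE loss
```

Because the penalty is minimized when tokens spread evenly across experts, it discourages router collapse during training while adding only a small scaled term to the language-modeling loss.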
prune_manifest.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,495 @@
+{
+  "added_tokens_decoder": {
+    "200000": {
+      "content": "]!p~[",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200001": {
+      "content": "<fim_prefix>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200002": {
+      "content": "<fim_middle>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200003": {
+      "content": "<fim_suffix>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200004": {
+      "content": "<fim_pad>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200005": {
+      "content": "<reponame>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200006": {
+      "content": "<filename>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200007": {
+      "content": "<gh_stars>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200008": {
+      "content": "<issue_start>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200009": {
+      "content": "<issue_comment>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200010": {
+      "content": "<issue_closed>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200011": {
+      "content": "<jupyter_start>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200012": {
+      "content": "<jupyter_text>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200013": {
+      "content": "<jupyter_code>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200014": {
+      "content": "<jupyter_output>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200015": {
+      "content": "<empty_output>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200016": {
+      "content": "<commit_before>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200017": {
+      "content": "<commit_msg>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200018": {
+      "content": "<commit_after>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200019": {
+      "content": "]~b]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200020": {
+      "content": "[e~[",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200021": {
+      "content": "]!d~[",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200022": {
+      "content": "<function_call>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200023": {
+      "content": "<code_interpreter>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200024": {
+      "content": "]<]speech[>[",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200025": {
+      "content": "]<]image[>[",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200026": {
+      "content": "]<]video[>[",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200027": {
+      "content": "]<]start of speech[>[",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200028": {
+      "content": "]<]end of speech[>[",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200029": {
+      "content": "]<]start of image[>[",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200030": {
+      "content": "]<]end of image[>[",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200031": {
+      "content": "]<]start of video[>[",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200032": {
+      "content": "]<]end of video[>[",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200033": {
+      "content": "]<]vision pad[>[",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200034": {
+      "content": "]~!b[",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200035": {
+      "content": "<jupyter_error>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200036": {
+      "content": "<add_file>",
+      "single_word": false,
+      "lstrip": false,
+      "rstrip": false,
+      "normalized": false,
+      "special": true
+    },
+    "200037": {
+      "content": "<delete_file>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200038": {
+      "content": "<rename_file>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200039": {
+      "content": "<edit_file>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200040": {
+      "content": "<commit_message>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200041": {
+      "content": "<empty_source_file>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200042": {
+      "content": "<repo_struct>",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "200043": {
+      "content": "<code_context>",
+      "single_word": false,
+      "lstrip": false,
+      "rstrip": false,
+      "normalized": false,
+      "special": true
+    },
+    "200044": {
+      "content": "<file_content>",
+      "single_word": false,
+      "lstrip": false,
+      "rstrip": false,
+      "normalized": false,
+      "special": true
+    },
+    "200045": {
+      "content": "<source_files>",
+      "single_word": false,
+      "lstrip": false,
+      "rstrip": false,
+      "normalized": false,
+      "special": true
+    },
+    "200046": {
+      "content": "<pr_start>",
+      "single_word": false,
+      "lstrip": false,
+      "rstrip": false,
+      "normalized": false,
+      "special": true
+    },
+    "200047": {
+      "content": "<review_comment>",
+      "single_word": false,
+      "lstrip": false,
+      "rstrip": false,
+      "normalized": false,
+      "special": true
+    },
+    "200048": {
+      "content": "<filepath>",
+      "single_word": false,
+      "lstrip": false,
+      "rstrip": false,
+      "normalized": false,
+      "special": true
+    },
+    "200049": {
+      "content": "<file_sep>",
+      "single_word": false,
+      "lstrip": false,
+      "rstrip": false,
+      "normalized": false,
+      "special": true
+    },
+    "200050": {
+      "content": "<think>",
+      "single_word": false,
+      "lstrip": false,
+      "rstrip": false,
+      "normalized": false,
+      "special": false
+    },
+    "200051": {
+      "content": "</think>",
+      "single_word": false,
+      "lstrip": false,
+      "rstrip": false,
+      "normalized": false,
+      "special": false
+    },
+    "200052": {
+      "content": "<minimax:tool_call>",
+      "single_word": false,
+      "lstrip": false,
+      "rstrip": false,
+      "normalized": false,
+      "special": false
+    },
+    "200053": {
+      "content": "</minimax:tool_call>",
+      "single_word": false,
+      "lstrip": false,
+      "rstrip": false,
+      "normalized": false,
+      "special": false
+    }
+  },
+  "additional_special_tokens": [
+    "<code_interpreter>",
+    "<commit_after>",
+    "<commit_before>",
+    "<commit_msg>",
+    "<empty_output>",
+    "<filename>",
+    "<fim_middle>",
+    "<fim_pad>",
+    "<fim_prefix>",
+    "<fim_suffix>",
+    "<function_call>",
+    "<gh_stars>",
+    "]<]speech[>[",
+    "]<]image[>[",
+    "]<]video[>[",
+    "]<]start of speech[>[",
+    "]<]end of speech[>[",
+    "]<]start of image[>[",
+    "]<]end of image[>[",
+    "]<]start of video[>[",
+    "]<]end of video[>[",
+    "]<]vision pad[>[",
+    "]~!b[",
+    "<issue_closed>",
+    "<issue_comment>",
+    "<issue_start>",
+    "<jupyter_code>",
+    "<jupyter_output>",
+    "<jupyter_start>",
+    "<jupyter_text>",
+    "<reponame>",
+    "[e~[",
+    "]!d~[",
+    "]!p~[",
+    "]~b]",
+    "<jupyter_error>",
+    "<add_file>",
+    "<delete_file>",
+    "<rename_file>",
+    "<edit_file>",
+    "<commit_message>",
+    "<empty_source_file>",
+    "<repo_struct>",
+    "<code_context>",
+    "<file_content>",
+    "<source_files>",
+    "<pr_start>",
+    "<review_comment>",
+    "<filepath>",
+    "<file_sep>"
+  ],
+  "add_prefix_space": false,
+  "bos_token": "]~!b[",
+  "clean_up_tokenization_spaces": false,
+  "eos_token": "[e~[",
+  "model_max_length": 40960000,
+  "tokenizer_class": "GPT2Tokenizer",
+  "unk_token": "]!d~["
+}
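The config above uses MiniMax's unconventional bracket-style control tokens (`]~!b[` for BOS, `[e~[` for EOS, `]!d~[` for UNK) rather than the usual `<s>`/`</s>`. A small illustrative sketch (the table subset and helper below are lifted from the config above for demonstration, not a shipped API) of resolving those strings to their integer ids via `added_tokens_decoder`:

```python
# Subset of the added_tokens_decoder table from tokenizer_config.json above.
added_tokens_decoder = {
    "200019": {"content": "]~b]", "special": True},
    "200020": {"content": "[e~[", "special": True},   # eos_token
    "200034": {"content": "]~!b[", "special": True},  # bos_token
    "200050": {"content": "<think>", "special": False},
    "200051": {"content": "</think>", "special": False},
}

def token_id(content: str, table: dict) -> int:
    """Resolve a control-token string to its integer id."""
    for tid, meta in table.items():
        if meta["content"] == content:
            return int(tid)
    raise KeyError(f"unknown control token: {content!r}")

bos_id = token_id("]~!b[", added_tokens_decoder)  # 200034
eos_id = token_id("[e~[", added_tokens_decoder)   # 200020
```

Note that `<think>`, `</think>`, and the `<minimax:tool_call>` pair are registered with `"special": false`, so decoding with `skip_special_tokens=True` will not strip them from model output.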
vocab.json ADDED
The diff for this file is too large to render. See raw diff