Osaurus-AI committed on
Commit
272dc9f
·
verified ·
1 Parent(s): a9cf1eb

Upload folder using huggingface_hub

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. LICENSE +28 -0
  2. README.md +135 -0
  3. chat_template.jinja +159 -0
  4. config.json +105 -0
  5. configuration_minimax_m2.py +200 -0
  6. generation_config.json +9 -0
  7. jang_config.json +18 -0
  8. jangq-logo.png +0 -0
  9. jangreap_plan.json +0 -0
  10. merges.txt +0 -0
  11. model-00001-of-00039.safetensors +3 -0
  12. model-00002-of-00039.safetensors +3 -0
  13. model-00003-of-00039.safetensors +3 -0
  14. model-00004-of-00039.safetensors +3 -0
  15. model-00005-of-00039.safetensors +3 -0
  16. model-00006-of-00039.safetensors +3 -0
  17. model-00007-of-00039.safetensors +3 -0
  18. model-00008-of-00039.safetensors +3 -0
  19. model-00009-of-00039.safetensors +3 -0
  20. model-00010-of-00039.safetensors +3 -0
  21. model-00011-of-00039.safetensors +3 -0
  22. model-00012-of-00039.safetensors +3 -0
  23. model-00013-of-00039.safetensors +3 -0
  24. model-00014-of-00039.safetensors +3 -0
  25. model-00015-of-00039.safetensors +3 -0
  26. model-00016-of-00039.safetensors +3 -0
  27. model-00017-of-00039.safetensors +3 -0
  28. model-00018-of-00039.safetensors +3 -0
  29. model-00019-of-00039.safetensors +3 -0
  30. model-00020-of-00039.safetensors +3 -0
  31. model-00021-of-00039.safetensors +3 -0
  32. model-00022-of-00039.safetensors +3 -0
  33. model-00023-of-00039.safetensors +3 -0
  34. model-00024-of-00039.safetensors +3 -0
  35. model-00025-of-00039.safetensors +3 -0
  36. model-00026-of-00039.safetensors +3 -0
  37. model-00027-of-00039.safetensors +3 -0
  38. model-00028-of-00039.safetensors +3 -0
  39. model-00029-of-00039.safetensors +3 -0
  40. model-00030-of-00039.safetensors +3 -0
  41. model-00031-of-00039.safetensors +3 -0
  42. model-00032-of-00039.safetensors +3 -0
  43. model-00033-of-00039.safetensors +3 -0
  44. model-00034-of-00039.safetensors +3 -0
  45. model-00035-of-00039.safetensors +3 -0
  46. model-00036-of-00039.safetensors +3 -0
  47. model-00037-of-00039.safetensors +3 -0
  48. model-00038-of-00039.safetensors +3 -0
  49. model-00039-of-00039.safetensors +3 -0
  50. model.safetensors.index.json +0 -0
LICENSE ADDED
@@ -0,0 +1,28 @@
+ MIT License
+
+ Copyright 2025 MiniMax AI.
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+
+ Our only modification is that, if the Software (or any derivative works
+ thereof) is used for any of your commercial products or services that have
+ more than 100 million monthly active users, or more than 30 million US
+ dollars (or equivalent in other currencies) in annual recurring revenue,
+ you shall prominently display "MiniMax M2" on the user interface of such
+ product or service.
README.md ADDED
@@ -0,0 +1,135 @@
+ ---
+ license: other
+ license_name: modified-mit
+ tags:
+ - moe
+ - mixture-of-experts
+ - jangtq
+ - reap
+ - mlx
+ - minimax_m2
+ pipeline_tag: text-generation
+ library_name: mlx
+ ---
+
+ # MiniMax-M2.7-Small-JANGTQ
+
+ **A ~138B-total, ~10B-active (A10B) MoE**, down from MiniMax M2's 230B base:
+ a 40% routed-expert prune plus 2-bit JANGTQ quantization. Derived from
+ MiniMax M2 via our v3 calibration corpus, the REAP saliency observer, and
+ JANGTQ2 codebook quantization.
+
+ Runs on Apple Silicon via the JANG toolchain + MLX.
+ ## Pipeline
+
+ ```
+ MiniMax M2 (base)
+   ↓ v3 calibration corpus (24% code · 20% agentic · 20% general ·
+   ↓                        10% academic_mc · 8% science · 8% CN ·
+   ↓                        5% cyber · 3% systems · 2% longctx)
+
+ REAP saliency observer (62 layers × 256 experts → scoring)
+   ↓ MiniMax-M2.7-Small = 40% expert prune (154 of 256 kept per layer)
+
+ JANGTQ2 quantization
+   • 2-bit MXTQ on routed-expert weights (Hadamard-rotated codebook)
+   • 8-bit affine on attention + dense MLP + embed + lm_head
+   • 16-bit on norms and router weights
+ ```
+
+ ## Specs
+
+ | Spec | Value |
+ |---|---|
+ | Parameters | **~138B total, ~10B active per token** |
+ | Routed experts kept | 154 of 256 (60%) |
+ | Top-k active experts | 8 per token |
+ | Layers | 62 |
+ | Bundle size | 38 GB |
+ | Dtype | bfloat16 activations |
+ | Attention | Standard Q/K/V + GQA 6:1, head_dim=128, rope_theta=5M |
+ | Context | 196,608 tokens |
+
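The headline parameter counts can be roughly re-derived from the shipped `config.json` (hidden_size 3072, per-expert intermediate 1536, 62 layers, 154 experts, top-8 routing, 48/8 heads at head_dim 128, 200,064 vocab, untied embeddings). This back-of-envelope is ours, not the authors', and ignores norms and router weights, so expect it to land near, not exactly on, the stated ~138B total / ~10B active:

```python
# Rough parameter count from config.json values (sketch, not authoritative).
hidden, inter = 3072, 1536
layers, experts, topk = 62, 154, 8
q_heads, kv_heads, head_dim = 48, 8, 128
vocab = 200_064

expert_p = 3 * hidden * inter                              # w1/w2/w3 per expert
attn_p = hidden * head_dim * (2 * q_heads + 2 * kv_heads)  # q,o + k,v projections
embed_p = 2 * vocab * hidden                               # untied embed + lm_head

total = layers * (experts * expert_p + attn_p) + embed_p
active = layers * (topk * expert_p + attn_p) + embed_p
print(f"total = {total / 1e9:.1f}B, active = {active / 1e9:.1f}B")
```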
+ ## Use
+
+ ```python
+ from jang_tools.load_jangtq import load_jangtq_model
+ from mlx_lm import generate
+ from mlx_lm.sample_utils import make_sampler
+
+ model, tokenizer = load_jangtq_model("JANGQ-AI/MiniMax-M2.7-Small-JANGTQ")
+
+ messages = [{"role": "user", "content": "Write a Python function that…"}]
+ prompt = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, tokenize=False
+ )
+
+ # Interleaved-thinking / always-reasoning. Use MiniMax's
+ # official sampling: temp=1.0, top_p=0.95, top_k=40
+ out = generate(model, tokenizer, prompt=prompt, max_tokens=4096,
+                sampler=make_sampler(temp=1.0, top_p=0.95, top_k=40))
+ ```
+
+ ## Calibration corpus (v3 mix)
+
+ REAP saliency was computed over a 31,338-sample stratified English+CJK
+ mix (~8.6 M tokens), bucketed to approximate a typical JANGQ-AI workload:
+
+ | Bucket | Share | Source datasets |
+ |--------|-------|-----------------|
+ | Coding (22%) | 7% · 6% · 4% · 3% · 2% | `ise-uiuc/Magicoder-OSS-Instruct-75K` · `nvidia/OpenCodeReasoning` · `m-a-p/CodeFeedback-Filtered-Instruction` · `HuggingFaceH4/CodeAlpaca_20K` · `iamtarun/python_code_instructions_18k_alpaca` |
+ | Agentic (19%) | 7% · 5% · 3% · 2% · 2% | `NousResearch/hermes-function-calling-v1` · `glaiveai/glaive-function-calling-v2` · `lilacai/glaive-function-calling-v2-sharegpt` · `THUDM/AgentInstruct` (os) · `princeton-nlp/SWE-bench_oracle` |
+ | General (17%) | 7% · 4% · 3% · 3% | `allenai/tulu-3-sft-mixture` · `open-thoughts/OpenThoughts-114k` · `teknium/OpenHermes-2.5` · `HuggingFaceH4/ultrachat_200k` |
+ | Academic MC (11%) | 5% · 3% · 1% · 1% · 1% · 0.5% · 0.5% | `cais/mmlu` (all, auxiliary_train) · `TIGER-Lab/MMLU-Pro` · `allenai/ai2_arc` · `allenai/openbookqa` · `allenai/sciq` · `tau/commonsense_qa` · `bigbio/med_qa` |
+ | Science (10%) | 4% · 3% · 1.5% · 1.5% | `AI-MO/NuminaMath-CoT` · `ccdv/arxiv-summarization` · `qiaojin/PubMedQA` · `camel-ai/physics` |
+ | Chinese (9%) | 4% · 2.5% · 2.5% | `silk-road/alpaca-data-gpt4-chinese` · `wangrui6/Zhihu-KOL` · `YeungNLP/firefly-train-1.1M` |
+ | Cybersec (5%) | 3% · 2% | `CyberNative/Code_Vulnerability_Security_DPO` · `Trendyol/cybersecurity-instruction-datasets` |
+ | Long-context (3%) | 2% · 1% | `emozilla/pg19` · `ccdv/arxiv-summarization` (longer docs) |
+ | Systems (3%) | 1.5% · 1.5% | `b-mc2/sql-create-context` · `cognitivecomputations/dolphin-coder` |
+
+ Total ≈ 31,338 records ≈ 8.6 M tokens (GPT-4 tokenizer estimate).
+
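Recovering per-bucket record counts from the shares above is plain proportional allocation. A minimal sketch (bucket keys and the largest-remainder rounding are ours; note the published bucket shares sum to 99%, so we normalize before allocating — the actual sampler in the JANG toolchain may handle this differently):

```python
# Allocate 31,338 records across buckets by share, with
# largest-remainder rounding so the counts sum exactly.
shares = {
    "coding": 0.22, "agentic": 0.19, "general": 0.17, "academic_mc": 0.11,
    "science": 0.10, "chinese": 0.09, "cybersec": 0.05,
    "longctx": 0.03, "systems": 0.03,
}
total = 31_338

norm = sum(shares.values())                      # published shares sum to 0.99
exact = {k: v / norm * total for k, v in shares.items()}
counts = {k: int(x) for k, x in exact.items()}   # floor allocation first
# Hand leftover records to the largest fractional remainders.
leftover = total - sum(counts.values())
for k in sorted(exact, key=lambda k: exact[k] - counts[k], reverse=True)[:leftover]:
    counts[k] += 1

assert sum(counts.values()) == total
```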
+ ## Evaluation
+
+ ### HumanEval+ (code generation)
+
+ - **Dataset**: `evalplus/humanevalplus` test split (the same 164 prompts as
+   the original HumanEval, but with much harder test cases from EvalPlus).
+ - **Protocol**: greedy pass@1 baseline + pass@5 retry on failures.
+ - **Sampling for pass@5**: temp=1.0, top_p=0.95, top_k=40 (MiniMax's
+   official recommended sampling); k=5 samples per failed problem, early
+   stop on the first passing sample.
+ - **Max tokens**: 800 (pass@1), 1200 (pass@5 retry).
+ - **Grading**: each candidate is executed as a subprocess with a 20 s timeout;
+   a problem passes only if ALL EvalPlus tests pass.
+
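The grading and retry protocol described above can be sketched as follows. `run_candidate` and `pass_with_retry` are our stand-ins for illustration; the real harness lives in `jang_tools.kimi_prune`:

```python
import os
import subprocess
import sys
import tempfile

def run_candidate(solution_code, test_code, timeout=20):
    """Run candidate + tests in a subprocess; pass only on a clean exit."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(solution_code + "\n" + test_code)
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path], timeout=timeout,
                              capture_output=True)
        return proc.returncode == 0
    except subprocess.TimeoutExpired:
        return False            # infinite loops count as failures
    finally:
        os.unlink(path)

def pass_with_retry(generate_greedy, generate_sampled, test_code, k=5):
    """Greedy attempt first; on failure, up to k sampled retries, early stop."""
    if run_candidate(generate_greedy(), test_code):
        return True
    return any(run_candidate(generate_sampled(), test_code) for _ in range(k))

# Tiny smoke test of the grader itself.
ok = run_candidate("def add(a, b):\n    return a + b", "assert add(2, 2) == 4")
bad = run_candidate("def add(a, b):\n    return a - b", "assert add(2, 2) == 4")
```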
+ | Metric | Score |
+ |--------|-------|
+ | **pass@1 (greedy)** | **71.95%** (118/164) |
+ | **pass@5 (greedy + sampled retry of failures)** | **89.02%** (146/164) |
+
+ 28 of the 46 greedy failures were recovered via sampling (temp=1.0,
+ top_p=0.95, top_k=40); the remaining 18 are a mix of genuine logic
+ errors (AssertionError) and prompts where even 1200 tokens ran out
+ mid-reasoning (no_code_block).
+
+ *Eval harness code*: see `jang_tools.kimi_prune.bench_humaneval` (pass@1) and `jang_tools.kimi_prune.bench_humaneval_passk` (pass@k retry on failures) in the [JANG toolchain](https://github.com/jinho-jang/jang).
+
+ ## Variants
+
+ | Variant | Prune | Size | HF |
+ |---------|-------|------|-----|
+ | **MiniMax-M2.7-Small** | 40% | 38 GB | `JANGQ-AI/MiniMax-M2.7-Small-JANGTQ` |
+ | MiniMax-M2.7-Med | 25% | ~48 GB | `JANGQ-AI/MiniMax-M2.7-Med-JANGTQ` *(pending)* |
+ | MiniMax-M2.7-Large | 10% | ~57 GB | `JANGQ-AI/MiniMax-M2.7-Large-JANGTQ` *(pending)* |
+
+ ## Credits
+
+ Base: MiniMax M2.
+ Methodology: JANG toolchain — REAP saliency + JANGTQ codebook quantization.
+ Release: JANGQ-AI (eric@jangq.ai).
+
+ ## License
+
+ Modified MIT — inherited from MiniMax M2.
chat_template.jinja ADDED
@@ -0,0 +1,159 @@
+ {# ---------- special token variables ---------- #}
+ {%- set toolcall_begin_token = '<minimax:tool_call>' -%}
+ {%- set toolcall_end_token = '</minimax:tool_call>' -%}
+ {#- Tool Rendering Functions ============================================== -#}
+ {%- macro render_tool_namespace(namespace_name, tool_list) -%}
+ {%- for tool in tool_list -%}
+ <tool>{{ tool.function | tojson(ensure_ascii=False) }}</tool>
+ {% endfor -%}
+ {%- endmacro -%}
+ {%- macro visible_text(content) -%}
+ {%- if content is string -%}
+ {{ content }}
+ {%- elif content is iterable and content is not mapping -%}
+ {%- for item in content -%}
+ {%- if item is mapping and item.type == 'text' -%}
+ {{- item.text }}
+ {%- elif item is string -%}
+ {{- item }}
+ {%- endif -%}
+ {%- endfor -%}
+ {%- else -%}
+ {{- content }}
+ {%- endif -%}
+ {%- endmacro -%}
+ {#- System Message Construction ============================================ -#}
+ {%- macro build_system_message(system_message) -%}
+ {%- if system_message and system_message.content -%}
+ {{- visible_text(system_message.content) }}
+ {%- else -%}
+ {%- if model_identity is not defined -%}
+ {%- set model_identity = "You are a helpful assistant. Your name is MiniMax-M2.7 and is built by MiniMax." -%}
+ {%- endif -%}
+ {{- model_identity }}
+ {%- endif -%}
+
+ {#- Handle current_date -#}
+ {%- if system_message and system_message.current_date -%}
+ {{- '\n' ~ 'Current date: ' + system_message.current_date }}
+ {%- endif -%}
+ {#- Handle current_location -#}
+ {%- if system_message and system_message.current_location -%}
+ {{- '\n' ~ 'Current location: ' + system_message.current_location }}
+ {%- endif -%}
+ {%- endmacro -%}
+ {#- Main Template Logic ================================================= -#}
+ {#- Extract system message (only first message if it's system) -#}
+ {%- set system_message = none -%}
+ {%- set conversation_messages = messages -%}
+ {%- if messages and messages[0].role == "system" -%}
+ {%- set system_message = messages[0] -%}
+ {%- set conversation_messages = messages[1:] -%}
+ {%- endif -%}
+ {#- Get the last user message turn, for interleaved thinking -#}
+ {%- set ns = namespace(last_user_index=-1) %}
+ {% for m in conversation_messages %}
+ {%- if m.role == 'user' %}
+ {% set ns.last_user_index = loop.index0 -%}
+ {%- endif %}
+ {%- endfor %}
+ {#- Render system message -#}
+ {{- ']~!b[' ~ ']~b]system' ~ '\n' }}
+ {{- build_system_message(system_message) }}
+ {#- Render tools if available -#}
+ {%- if tools -%}
+ {{- '\n\n' ~ '# Tools' ~ '\n' ~ 'You may call one or more tools to assist with the user query.\nHere are the tools available in JSONSchema format:' ~ '\n' }}
+ {{- '\n' ~ '<tools>' ~ '\n' }}
+ {{- render_tool_namespace("functions", tools) }}
+ {{- '</tools>' ~ '\n\n' }}
+ {{- 'When making tool calls, use XML format to invoke tools and pass parameters:' ~ '\n' }}
+ {{- '\n' ~ toolcall_begin_token }}
+ <invoke name="tool-name-1">
+ <parameter name="param-key-1">param-value-1</parameter>
+ <parameter name="param-key-2">param-value-2</parameter>
+ ...
+ </invoke>
+ {{- '\n' ~ toolcall_end_token }}
+ {%- endif -%}
+ {{- '[e~[\n' }}
+
+ {#- Render messages -#}
+ {%- set last_tool_call = namespace(name=none) -%}
+ {%- for message in conversation_messages -%}
+ {%- if message.role == 'assistant' -%}
+ {#- Only render reasoning_content if no user message follows -#}
+ {{- ']~b]ai' ~ '\n' }}
+
+ {%- set reasoning_content = '' %}
+ {%- set content = visible_text(message.content) %}
+ {%- if message.reasoning_content is string %}
+ {%- set reasoning_content = message.reasoning_content %}
+ {%- else %}
+ {%- if '</think>' in content %}
+ {%- set reasoning_content = content.split('</think>')[0].strip('\n').split('<think>')[-1].strip('\n') %}
+ {%- set content = content.split('</think>')[-1].strip('\n') %}
+ {%- endif %}
+ {%- endif %}
+ {%- if reasoning_content and loop.index0 > ns.last_user_index -%}
+ {{- '<think>' ~ '\n' ~ reasoning_content ~ '\n' ~ '</think>' ~ '\n\n' }}
+ {%- endif -%}
+ {%- if content -%}
+ {{- content }}
+ {%- endif -%}
+ {%- if message.tool_calls -%}
+ {{- '\n' ~ toolcall_begin_token ~ '\n' }}
+
+ {%- for tool_call in message.tool_calls -%}
+ {%- if tool_call.function %}
+ {%- set tool_call = tool_call.function %}
+ {%- endif %}
+ {{- '<invoke name="' + tool_call.name + '">' }}
+ {% set _args = tool_call.arguments %}
+ {%- for k, v in _args.items() %}
+ {{- '<parameter name="' + k + '">' }}
+ {{- v | tojson(ensure_ascii=False) if v is not string else v }}
+ {{- '</parameter>' }}
+ {% endfor %}
+ {{- '</invoke>' ~ '\n' }}
+ {%- endfor -%}
+
+ {{- toolcall_end_token}}
+ {%- set last_tool_call.name = message.tool_calls[-1].name -%}
+ {%- else -%}
+ {%- set last_tool_call.name = none -%}
+ {%- endif -%}
+ {{- '[e~[' ~ '\n' }}
+
+ {%- elif message.role == 'tool' -%}
+ {%- if last_tool_call.name is none -%}
+ {{- raise_exception("Message has tool role, but there was no previous assistant message with a tool call!") }}
+ {%- endif -%}
+ {%- if loop.first or (conversation_messages[loop.index0 - 1].role != 'tool') -%}
+ {{- ']~b]tool' }}
+ {%- endif -%}
+ {%- if message.content is string -%}
+ {{- '\n<response>' }}
+ {{- message.content }}
+ {{- '</response>' }}
+ {%- else -%}
+ {%- for tr in message.content -%}
+ {{- '\n<response>' }}
+ {{- tr.output if tr.output is defined else (tr.text if tr.type == 'text' and tr.text is defined else tr) }}
+ {{- '\n</response>' }}
+ {%- endfor -%}
+ {%- endif -%}
+ {%- if loop.last or (conversation_messages[loop.index0 + 1].role != 'tool') -%}
+ {{- '[e~[\n' -}}
+ {%- endif -%}
+
+ {%- elif message.role == 'user' -%}
+ {{- ']~b]user' ~ '\n' }}
+ {{- visible_text(message.content) }}
+ {{- '[e~[' ~ '\n' }}
+ {%- endif -%}
+ {%- endfor -%}
+
+ {#- Generation prompt -#}
+ {%- if add_generation_prompt -%}
+ {{- ']~b]ai' ~ '\n' ~ '<think>' ~ '\n' }}
+ {%- endif -%}
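The template's `<think>`-splitting logic in the assistant branch (the `content.split('</think>')` lines) is also useful outside the template, e.g. when post-processing raw model output. A minimal plain-Python mirror of exactly that logic (function name ours):

```python
def split_reasoning(content: str):
    """Split content into (reasoning, answer), mirroring the chat template's
    content.split('</think>') / split('<think>') handling."""
    if "</think>" not in content:
        return "", content
    reasoning = (content.split("</think>")[0]
                 .strip("\n").split("<think>")[-1].strip("\n"))
    answer = content.split("</think>")[-1].strip("\n")
    return reasoning, answer
```

For example, `split_reasoning("<think>\nstep 1\n</think>\n\nfinal answer")` yields the reasoning `"step 1"` and the answer `"final answer"`.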
config.json ADDED
@@ -0,0 +1,105 @@
+ {
+   "architectures": [
+     "MiniMaxM2ForCausalLM"
+   ],
+   "attn_type_list": [
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1,
+     1
+   ],
+   "auto_map": {
+     "AutoConfig": "configuration_minimax_m2.MiniMaxM2Config",
+     "AutoModelForCausalLM": "modeling_minimax_m2.MiniMaxM2ForCausalLM"
+   },
+   "dtype": "bfloat16",
+   "head_dim": 128,
+   "hidden_act": "silu",
+   "hidden_size": 3072,
+   "intermediate_size": 1536,
+   "max_position_embeddings": 196608,
+   "model_type": "minimax_m2",
+   "mtp_transformer_layers": 0,
+   "num_attention_heads": 48,
+   "num_experts_per_tok": 8,
+   "num_hidden_layers": 62,
+   "num_key_value_heads": 8,
+   "num_local_experts": 154,
+   "num_mtp_modules": 0,
+   "qk_norm_type": "per_layer",
+   "rms_norm_eps": 1e-06,
+   "rope_theta": 5000000,
+   "rotary_dim": 64,
+   "scoring_func": "sigmoid",
+   "shared_intermediate_size": 0,
+   "tie_word_embeddings": false,
+   "transformers_version": "4.46.1",
+   "use_cache": true,
+   "use_mtp": false,
+   "use_qk_norm": true,
+   "use_routing_bias": true,
+   "vocab_size": 200064,
+   "_name_or_path": "MiniMax-M2.7-Small",
+   "quantization": {
+     "group_size": 64,
+     "bits": 2
+   }
+ }
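Several attention facts quoted in the README drop straight out of this config. A quick derivation sketch over a subset of the fields above (plain JSON parsing, nothing model-specific):

```python
import json

# Subset of the config.json above, inlined for illustration.
cfg = json.loads("""{
  "num_attention_heads": 48, "num_key_value_heads": 8,
  "head_dim": 128, "rotary_dim": 64,
  "num_local_experts": 154, "num_experts_per_tok": 8
}""")

gqa_ratio = cfg["num_attention_heads"] // cfg["num_key_value_heads"]  # 6:1 GQA
partial_rotary = cfg["rotary_dim"] / cfg["head_dim"]   # RoPE on half the head dims
kept_fraction = cfg["num_local_experts"] / 256         # experts kept vs. 256-expert base
```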
configuration_minimax_m2.py ADDED
@@ -0,0 +1,200 @@
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
+ # This file was automatically generated from src/transformers/models/minimax_m2/modular_minimax_m2.py.
+ # Do NOT edit this file manually as any edits will be overwritten by the generation of
+ # the file from the modular. If any change should be done, please apply the change to the
+ # modular_minimax_m2.py file directly. One of our CI enforces this.
+ # 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
+ # coding=utf-8
+ # Copyright 2025 the HuggingFace Team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+
+ from transformers.configuration_utils import PretrainedConfig
+
+
+ class MiniMaxM2Config(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`MiniMaxM2Model`]. It is used to instantiate a
+     MiniMaxM2 model according to the specified arguments, defining the model architecture. Instantiating a configuration
+     with the defaults will yield a similar configuration to that of the MiniMaxM2-7B-v0.1 or MiniMaxM2-7B-Instruct-v0.1.
+
+     [minimax_m2ai/MiniMaxM2-8x7B](https://huggingface.co/minimax_m2ai/MiniMaxM2-8x7B)
+     [minimax_m2ai/MiniMaxM2-7B-Instruct-v0.1](https://huggingface.co/minimax_m2ai/MiniMaxM2-7B-Instruct-v0.1)
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 32000):
+             Vocabulary size of the MiniMaxM2 model. Defines the number of different tokens that can be represented by the
+             `inputs_ids` passed when calling [`MiniMaxM2Model`]
+         hidden_size (`int`, *optional*, defaults to 4096):
+             Dimension of the hidden representations.
+         intermediate_size (`int`, *optional*, defaults to 14336):
+             Dimension of the MLP representations.
+         num_hidden_layers (`int`, *optional*, defaults to 32):
+             Number of hidden layers in the Transformer encoder.
+         num_attention_heads (`int`, *optional*, defaults to 32):
+             Number of attention heads for each attention layer in the Transformer encoder.
+         num_key_value_heads (`int`, *optional*, defaults to 8):
+             This is the number of key_value heads that should be used to implement Grouped Query Attention. If
+             `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
+             `num_key_value_heads=1` the model will use Multi Query Attention (MQA) otherwise GQA is used. When
+             converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
+             by meanpooling all the original heads within that group. For more details, check out [this
+             paper](https://huggingface.co/papers/2305.13245). If it is not specified, will default to `8`.
+         head_dim (`int`, *optional*, defaults to `hidden_size // num_attention_heads`):
+             The attention head dimension.
+         hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
+             The non-linear activation function (function or string) in the decoder.
+         max_position_embeddings (`int`, *optional*, defaults to `4096*32`):
+             The maximum sequence length that this model might ever be used with. MiniMaxM2's sliding window attention
+             allows sequence of up to 4096*32 tokens.
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         rms_norm_eps (`float`, *optional*, defaults to 1e-05):
+             The epsilon used by the rms normalization layers.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/values attentions (not used by all models). Only
+             relevant if `config.is_decoder=True`.
+         pad_token_id (`int`, *optional*):
+             The id of the padding token.
+         bos_token_id (`int`, *optional*, defaults to 1):
+             The id of the "beginning-of-sequence" token.
+         eos_token_id (`int`, *optional*, defaults to 2):
+             The id of the "end-of-sequence" token.
+         tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+             Whether the model's input and output word embeddings should be tied.
+         rope_theta (`float`, *optional*, defaults to 1000000.0):
+             The base period of the RoPE embeddings.
+         sliding_window (`int`, *optional*):
+             Sliding window attention window size. If not specified, will default to `4096`.
+         attention_dropout (`float`, *optional*, defaults to 0.0):
+             The dropout ratio for the attention probabilities.
+         num_experts_per_tok (`int`, *optional*, defaults to 2):
+             The number of experts to route per-token, can be also interpreted as the `top-k` routing
+             parameter
+         num_local_experts (`int`, *optional*, defaults to 8):
+             Number of experts per Sparse MLP layer.
+         output_router_logits (`bool`, *optional*, defaults to `False`):
+             Whether or not the router logits should be returned by the model. Enabling this will also
+             allow the model to output the auxiliary loss. See [here]() for more details
+         router_aux_loss_coef (`float`, *optional*, defaults to 0.001):
+             The aux loss factor for the total loss.
+         router_jitter_noise (`float`, *optional*, defaults to 0.0):
+             Amount of noise to add to the router.
+
+     ```python
+     >>> from transformers import MiniMaxM2Model, MiniMaxM2Config
+
+     >>> # Initializing a MiniMaxM2 7B style configuration
+     >>> configuration = MiniMaxM2Config()
+
+     >>> # Initializing a model from the MiniMaxM2 7B style configuration
+     >>> model = MiniMaxM2Model(configuration)
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```"""
+
+     model_type = "minimax_m2"
+     keys_to_ignore_at_inference = ["past_key_values"]
+     base_model_tp_plan = {
+         "layers.*.self_attn.q_proj": "colwise",
+         "layers.*.self_attn.k_proj": "colwise",
+         "layers.*.self_attn.v_proj": "colwise",
+         "layers.*.self_attn.o_proj": "rowwise",
+         "layers.*.block_sparse_moe.gate": "colwise_rep",  # we need to replicate here to correctly route experts
+         "layers.*.block_sparse_moe.experts.*.w1": "colwise",
+         "layers.*.block_sparse_moe.experts.*.w2": "rowwise",
+         "layers.*.block_sparse_moe.experts.*.w3": "colwise",
+     }
+     base_model_pp_plan = {
+         "embed_tokens": (["input_ids"], ["inputs_embeds"]),
+         "layers": (["hidden_states", "attention_mask"], ["hidden_states"]),
+         "norm": (["hidden_states"], ["hidden_states"]),
+     }
+
+     def __init__(
+         self,
+         vocab_size=32000,
+         hidden_size=4096,
+         intermediate_size=14336,
+         num_hidden_layers=32,
+         num_attention_heads=32,
+         num_key_value_heads=8,
+         head_dim=None,
+         hidden_act="silu",
+         max_position_embeddings=4096 * 32,
+         initializer_range=0.02,
+         rms_norm_eps=1e-5,
+         use_cache=True,
+         pad_token_id=None,
+         bos_token_id=1,
+         eos_token_id=2,
+         tie_word_embeddings=False,
+         rope_theta=1e6,
+         sliding_window=None,
+         attention_dropout=0.0,
+         num_experts_per_tok=2,
+         num_local_experts=8,
+         output_router_logits=False,
+         router_aux_loss_coef=0.001,
+         router_jitter_noise=0.0,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.max_position_embeddings = max_position_embeddings
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+         self.sliding_window = sliding_window
+
+         # for backward compatibility
+         if num_key_value_heads is None:
+             num_key_value_heads = num_attention_heads
+
+         self.num_key_value_heads = num_key_value_heads
+         self.hidden_act = hidden_act
+         self.initializer_range = initializer_range
+         self.rms_norm_eps = rms_norm_eps
+         self.use_cache = use_cache
+         self.rope_theta = rope_theta
+         self.attention_dropout = attention_dropout
+         self.head_dim = head_dim
+
+         self.num_experts_per_tok = num_experts_per_tok
+         self.num_local_experts = num_local_experts
+         self.output_router_logits = output_router_logits
+         self.router_aux_loss_coef = router_aux_loss_coef
+         self.router_jitter_noise = router_jitter_noise
+
+         self.use_qk_norm = kwargs.pop("use_qk_norm", False)
+         self.rotary_dim = kwargs.pop("rotary_dim", self.head_dim)
+         self.partial_rotary_factor = kwargs.pop("partial_rotary_factor", 1)
+         if self.head_dim is not None:
+             self.partial_rotary_factor = self.rotary_dim / self.head_dim
+
+         super().__init__(
+             pad_token_id=pad_token_id,
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
+
+
+ __all__ = ["MiniMaxM2Config"]
generation_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "bos_token_id": 200019,
+   "do_sample": true,
+   "eos_token_id": 200020,
+   "temperature": 1.0,
+   "top_p": 0.95,
+   "top_k": 40,
+   "transformers_version": "4.46.1"
+ }
jang_config.json ADDED
@@ -0,0 +1,18 @@
+ {
+   "weight_format": "mxtq",
+   "profile": "JANGTQ2",
+   "mxtq_seed": 42,
+   "source_model": "JANGQ-AI/MiniMax-M2.7-Small",
+   "source_config": {
+     "n_routed_experts": 154,
+     "num_hidden_layers": 62
+   },
+   "mxtq_bits": {
+     "routed_expert": 2,
+     "attention": 8,
+     "dense_mlp": 8,
+     "embed_tokens": 8,
+     "lm_head": 8,
+     "norms_router": 16
+   }
+ }
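Because the routed experts hold the overwhelming majority of the ~138B weights, the blended storage rate sits just above the 2-bit expert rate. A small sketch of that arithmetic; the 16-bit per-group scale width at group_size 64 is our assumption, not something stated in this config:

```python
# Effective bits/weight for the dominant tensor class (sketch; scale width assumed).
group_size, scale_bits = 64, 16   # assumed one fp16-width scale per 64-weight group
bits = {"routed_expert": 2, "attention": 8, "dense_mlp": 8,
        "embed_tokens": 8, "lm_head": 8, "norms_router": 16}

# Routed experts dominate the parameter count, so the blended rate is
# essentially the expert rate plus per-group scale overhead.
expert_rate = bits["routed_expert"] + scale_bits / group_size  # 2.25 bits/weight
```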
jangq-logo.png ADDED
jangreap_plan.json ADDED
The diff for this file is too large to render. See raw diff
 
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model-00001-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5d178f4a3b2dbdd1767cb9c86e2542fd3e2f33e18c1c2be68cec94bd1349291d
+ size 1000475508
model-00002-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:197f9f1c45042f4725d4bbcd489ddb719c8d6b07ea7813a975268129d451e991
+ size 1000973812
model-00003-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7c89d2c847b81ced36b81971be1bea6f292750fe9f2eb1dd8c5929fd086e79df
+ size 1001374012
model-00004-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:911ca4f1fa42bb836491224643fcdd563c6215d9aad5455d023e6b5f935d4ef7
+ size 1000577732
model-00005-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a1675164c3f3a47097a7f8d6819e4f706fa8ee56eaa3ed5cfbf2379b7ce45cad
+ size 1001370964
model-00006-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1b59facabba205773fd599b2509f77690ce07252d33228fbae71190e271a89ac
+ size 1000976660
model-00007-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9df94abb9fcf991fbf88cb71efba35c6b69f17322fc5947b7d550fe008c6ad6d
+ size 1001371312
model-00008-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f42087ca4b0776fa4366ce1c2477ccab1883ea56f18b11870347f60322fff839
+ size 1000979696
model-00009-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3728e00331a41663642bd4ac6f59d94ec591fe901c643a6c570cdcbb8eb83451
+ size 1003117616
model-00010-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:70c588e57ac2333433ed604871281529f05053449d7acb0403a2782e25021e39
+ size 1000418572
model-00011-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:89172f26bbc36a18e637689142ff6cb2becaedecd2e6a9d44ccff9a1350e477a
3
+ size 1000580048
model-00012-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:99dfc2bd4f933e1bbfaa0f70ab5436dcbb77d7e95725313bc7ad5b3855c883f4
3
+ size 1001375824
model-00013-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bc17f03e5a201723ffb6e936ddd32c10ca13a98014f2bfe41b4dac73d7672fe6
3
+ size 1000976624
model-00014-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:20b11d3810318db9192aa9691474176639eef9d90c59da5466a8f0eb41d5277f
3
+ size 1001376372
model-00015-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ef0fb32ea37548e5d9fe02776e499ee310b8fbed77bb97577a62a8ffc6592b71
3
+ size 1000975788
model-00016-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a1cfe9dba2d1aa6805cd4ea046e16e37a6b01164ed17125561d37a97d6c809a6
3
+ size 1001376256
model-00017-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:70d8545b02b500752b21e6b738ffdc02f9df96c5ca48d246fe015e9a40edce33
3
+ size 1000976608
model-00018-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9b90ca4ed76f6c46a4ed0c5329a3f66a534e0a59973e133b8cd52f5c1937c38d
3
+ size 1000976472
model-00019-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:38c4991f64bbf99f93f1f5844b2b3ee1b9687198601b1150fae52682af4e47ca
3
+ size 1000979664
model-00020-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2c0499737a23ba6d3704bb3b156267736527b807bbbe20fc1326e188cd123023
3
+ size 1001373468
model-00021-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6f3681e8f880cd3733c78b76dd809b14f524c64dc838b6298490454ed3697a75
3
+ size 1000978804
model-00022-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:37f0130fe380799f7a12bd012eb681992a598a67f48ee34129c3388ee11cf2a1
3
+ size 1001373628
model-00023-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:942902a36ca7253c33154bb90b85123ab49df35cfd97e598b6c73c8fe803bd14
3
+ size 1000979540
model-00024-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:85e871f30f14475ae9adf8ba402c63e29c3c68918b928a60ec52c87adb417b04
3
+ size 1000976216
model-00025-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:063893268ec4d98baa68ecc063c817d6d6e983eac4dabc57332ca8877fefef27
3
+ size 1000976640
model-00026-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e9bfec3d53cda61635d1b8d3e708d572a5f93070a21b62f4e80dbf0a7d2bd2d7
3
+ size 1000979576
model-00027-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2724a544a4be91e47ef7ca5ad758bf4b530beaadc68cb4b5efb0f70b349bcf6d
3
+ size 1001372440
model-00028-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:efcccce686618136a5edd55459d10e7207c5649a3bf8ab15801112844823a17a
3
+ size 1001376620
model-00029-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f98ffec74bc963d2b1aafbd29f5fb99a88690f581f952e199ba2fccbe88f0956
3
+ size 1000976524
model-00030-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:485cd7c87d7ea58b4f2dffdb5106ce861e2820676a91da23e88619950c3629e0
3
+ size 1000979376
model-00031-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:458976ab59c41f982c6250276eaebd67bbbe1293d873be36150ac44b5c605c11
3
+ size 1003117316
model-00032-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:170c64ee9c94d564cb6e0f058b5289b71d927892b9828d50a4012bb2fcbcccfb
3
+ size 1001205636
model-00033-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1cfc2b60045350a64f33edf7937ac3c7f482250463afbc492fe0b75de72b977c
3
+ size 1001376252
model-00034-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ab9d0727ddb08b120d0c439aa82d074ad01b8563e62588ae8f24f50befe0d6e2
3
+ size 1000976052
model-00035-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fede48feeaf95e68556e216412238b98d60b83444ea875ea8cde853267d183c5
3
+ size 1001376660
model-00036-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:39b719a912b1ac5e76c28285d95855f365c2ffe8030c1af3eee65c096bb27cc8
3
+ size 1000976036
model-00037-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:208b5e567b3c2cf90bdc0dc3bd1894720b925e00e1c2fab82fa03d2374b16c82
3
+ size 1001376500
model-00038-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:59b176479503f78ffdfd763d58ccf92bffd0f9f9f17057d6d4c65c38ac9db709
3
+ size 1000869364
model-00039-of-00039.safetensors ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:03b9972e893e1b7f8909064a641121ada4015dcdf5db0952745d27cd0432c67d
3
+ size 388401988
model.safetensors.index.json ADDED