r3lax lordx64 committed
Commit d75b1d2 · 0 Parent(s)

Duplicate from lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled

.gitattributes ADDED
@@ -0,0 +1,36 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,158 @@
+ ---
+ license: apache-2.0
+ language:
+ - en
+ library_name: transformers
+ pipeline_tag: text-generation
+ base_model: Qwen/Qwen3.6-35B-A3B
+ datasets:
+ - lordx64/reasoning-distill-opus-4-7-max-sft
+ tags:
+ - text-generation
+ - reasoning
+ - distillation
+ - chain-of-thought
+ - qwen
+ - qwen3.6
+ - mixture-of-experts
+ - moe
+ - lora
+ - unsloth
+ model-index:
+ - name: Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled
+   results: []
+ ---
+
+ # Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled
+
+ A reasoning-distilled variant of **Qwen3.6-35B-A3B**, taught to imitate the chain-of-thought style of **Claude Opus 4.7**, the frontier reasoning model from Anthropic. The goal: port Claude-grade reasoning behavior into a permissively licensed Mixture-of-Experts model that an individual can actually run.
+
+ ## Why this model
+
+ - **Claude-style reasoning, open weights.** Claude Opus 4.7 is one of the strongest reasoning models available, but only via a proprietary API. This model has been fine-tuned on ~8k high-quality reasoning traces produced by Opus 4.7, teaching the base to *think* before answering, with explicit `<think>…</think>` blocks, in Claude's structure and cadence.
+ - **Sparse activation, dense knowledge.** The base is a 35B-parameter MoE with **256 experts, 8 routed + 1 shared**, of which only about **3B parameters are active** per token (a back-of-envelope sketch follows this list). You get the capacity of a 35B model at the inference cost of a small dense model. Full-quality bf16 inference runs on a single 80GB A100 or H100.
+ - **Long thinking supported.** 64k token context. The model routinely emits 5–30k tokens of `<think>` reasoning on hard problems before giving the final answer; that is the whole point of reasoning models, and why this one was trained end-to-end against a teacher that also reasons explicitly.
+ - **Clean base to build on.** The LoRA adapter is also published separately (`…-adapter`), so you can apply the distillation to other checkpoints of the same base, or stack further fine-tunes.
+
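+ A back-of-envelope check of the sparse-activation claim, computed from the values in this repo's `config.json` (approximate; it ignores router gates, norms, and the attention/linear-attention projections, which supply the remainder toward ~3B):
+
+ ```python
+ # Values from config.json: hidden_size, moe_intermediate_size, num_hidden_layers,
+ # num_experts_per_tok (+1 shared expert), vocab_size (untied embeddings).
+ hidden, moe_inter, layers = 2048, 512, 40
+ active_experts = 8 + 1
+ vocab = 248320
+
+ expert_ffn = 3 * hidden * moe_inter               # gate/up/down projections per expert
+ moe_active = layers * active_experts * expert_ffn
+ embeddings = 2 * vocab * hidden                   # input embeddings + lm_head
+ print(f"~{(moe_active + embeddings) / 1e9:.1f}B")  # ~2.1B before attention parameters
+ ```
+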
+ ## Intended use
+
+ Built for hard reasoning: graduate-level STEM, competition math (AIME / MATH), code reasoning with explicit walk-throughs, multi-step logic puzzles, and agentic planning where explicit `<think>` helps correctness.
+
+ For short-turn, latency-sensitive conversational workloads, the thinking budget can be a real cost: cap `max_new_tokens`, or post-process to strip `<think>…</think>` blocks if you only want final answers in production.
+
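+ For example, a minimal post-processing sketch (the `strip_think` helper below is illustrative, not part of this repo):
+
+ ```python
+ import re
+
+ def strip_think(text: str) -> str:
+     """Drop <think>...</think> reasoning blocks, keeping only the final answer."""
+     # DOTALL lets the pattern span the newlines inside a reasoning block.
+     return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).lstrip()
+
+ print(strip_think("<think>\n2 + 2 = 4\n</think>\n\nThe answer is 4."))  # -> The answer is 4.
+ ```
+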
+ ## How to use
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import torch
+
+ repo = "lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled"
+ tok = AutoTokenizer.from_pretrained(repo)
+ model = AutoModelForCausalLM.from_pretrained(
+     repo, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True,
+ )
+
+ messages = [{"role": "user", "content": "How many positive integers less than 1000 have digits that sum to 20?"}]
+ inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+ # Greedy decoding with a large budget: the model may spend thousands of tokens in <think> first.
+ out = model.generate(inputs, max_new_tokens=32768, do_sample=False)
+ print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```
+
+ Recommended backend: **vLLM** for serving; the MoE routing and KV cache benefit significantly from continuous batching.
+
+ ```bash
+ vllm serve lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled \
+   --dtype bfloat16 --max-model-len 65536 --gpu-memory-utilization 0.9
+ ```
+
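+ The server exposes an OpenAI-compatible API; a minimal client sketch, assuming vLLM's default port (8000):
+
+ ```python
+ from openai import OpenAI
+
+ # vLLM ignores the API key, but the client library requires one.
+ client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
+ resp = client.chat.completions.create(
+     model="lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled",
+     messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
+     max_tokens=16384,
+ )
+ print(resp.choices[0].message.content)
+ ```
+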
+ ### GGUF (LM Studio / llama.cpp)
+
+ Quantized GGUF weights are available for `llama.cpp` and LM Studio:
+
+ - [**IQ4_XS** (18.9 GB)](https://huggingface.co/lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled-IQ4_XS-GGUF): fits in ~24 GB RAM/VRAM, the default pick for LM Studio
+
+ Search for `lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled` in LM Studio's model browser once HF has indexed the GGUF repo (usually within an hour of publication). More quant levels (`Q4_K_M`, `Q5_K_M`, `Q8_0`) can be added on request.
+
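+ With `llama.cpp` directly, something like the following should work (flags vary by `llama.cpp` version; `-hf` pulls the quant straight from the Hub):
+
+ ```bash
+ llama-cli -hf lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled-IQ4_XS-GGUF \
+   -c 32768
+ ```
+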
+ ## Training
+
+ | | |
+ |---|---|
+ | Base model | `Qwen/Qwen3.6-35B-A3B` (loaded via `unsloth/Qwen3.6-35B-A3B` for faster finetuning) |
+ | Teacher | Claude Opus 4.7 (Anthropic) |
+ | Training dataset | [`lordx64/reasoning-distill-opus-4-7-max-sft`](https://huggingface.co/datasets/lordx64/reasoning-distill-opus-4-7-max-sft): reasoning traces from Claude Opus 4.7 reformatted into SFT conversations |
+ | Source dataset | [`lordx64/reasoning-distill-claude-opus-4-7-max`](https://huggingface.co/datasets/lordx64/reasoning-distill-claude-opus-4-7-max): raw teacher traces (pre-SFT formatting) |
+ | Dataset size | ~7,800 full conversations, assistant side trained including `<think>…</think>` |
+ | Method | SFT with Unsloth + TRL `SFTTrainer` + `train_on_responses_only` (loss only on assistant tokens) |
+ | LoRA config | `r=16, alpha=16, dropout=0.0, targets=["q_proj","k_proj","v_proj","o_proj"]` (attention-only) |
+ | Hyperparameters | `lr=2e-5`, cosine schedule, `warmup_ratio=0.03`, `weight_decay=0.01`, optimizer `adamw_8bit` |
+ | Batch | `per_device=1, grad_accum=16, effective=16`, 2 epochs = 978 steps |
+ | Sequence length | 4096 tokens during training (64k usable at inference; the base supports it natively) |
+ | Precision | bf16 on 1× H200 141GB (HF Inference Endpoint, custom container) |
+ | Trainable params | 3.44M of 35.1B (0.01%) |
+
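+ The LoRA setup above corresponds roughly to this Unsloth sketch (a reconstruction from the table, not the actual training script):
+
+ ```python
+ from unsloth import FastLanguageModel
+
+ model, tok = FastLanguageModel.from_pretrained(
+     "unsloth/Qwen3.6-35B-A3B", max_seq_length=4096, dtype=None,
+ )
+ model = FastLanguageModel.get_peft_model(
+     model,
+     r=16, lora_alpha=16, lora_dropout=0.0,
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention-only, no expert FFNs
+ )
+ ```
+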
+ ### Why attention-only LoRA on a MoE
+
+ The initial plan was full LoRA including the MoE expert FFNs (`gate_proj/up_proj/down_proj`). In the course of this project I filed and upstreamed a shape-mismatch fix to unsloth-zoo's MoE+LoRA grouped-mm path ([unslothai/unsloth-zoo#601](https://github.com/unslothai/unsloth-zoo/pull/601)); without it, the expert-LoRA forward pass crashes on Qwen3.6's 256-expert layout. Even with that fix, single-GPU memory made expert-LoRA impractical for this run. Attention-only LoRA captures most of the signal for *style* distillation (the point of this model) while leaving the expert FFNs' learned knowledge intact. A v2 training run with expert LoRA on multi-GPU is the natural next step if the style-only signal isn't enough.
+
+ ## Evaluation
+
+ Evaluated with `lm-evaluation-harness` (v0.4.9) on the vLLM backend at 64k context, bf16. The custom eval path strips `<think>…</think>` from generations before the filter pipeline, uses each task's conventional few-shot count, and runs with `fewshot_as_multiturn=True` so few-shot examples become proper chat turns rather than concatenated prompt text. Raw results JSON is public: [lordx64/qwen3-6-distill-evals](https://huggingface.co/datasets/lordx64/qwen3-6-distill-evals).
+
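+ A representative invocation (a sketch of the setup described above; exact task names and flags can differ by harness version):
+
+ ```bash
+ lm_eval --model vllm \
+   --model_args pretrained=lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled,dtype=bfloat16,max_model_len=65536 \
+   --tasks gsm8k_cot --num_fewshot 8 --limit 300 \
+   --apply_chat_template --fewshot_as_multiturn
+ ```
+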
+ | Benchmark | Setup | Score |
+ |---|---|---|
+ | **GSM8K CoT** | 8-shot multiturn, limit 300 | **84.3%** (flexible-extract) / 76.7% (strict-match) |
+ | **MMLU-Pro** | 5-shot multiturn, limit 500 | **74.9%** |
+ | AIME 2024 | 0-shot, full (30) | _extraction fix in progress: the model generates answers, but not in a format the AIME extractor recognizes (`\boxed{}` vs. plain prose)_ |
+ | AIME 2025 | 0-shot, full (30) | _same; pending_ |
+ | GPQA Diamond | 0-shot CoT, full (198) | _same; pending_ |
+ | MATH-500 | 0-shot, limit 100 | _rerun pending (missing `sympy` / `math_verify` dependency in the first run)_ |
+
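+ For context, the kind of permissive extractor these tasks need might look like this (an illustrative sketch, not the harness's actual filter):
+
+ ```python
+ import re
+
+ def extract_final_answer(text: str) -> str | None:
+     """Prefer \\boxed{...}; fall back to the last number in plain prose."""
+     boxed = re.findall(r"\\boxed\{([^{}]+)\}", text)  # nested braces not handled
+     if boxed:
+         return boxed[-1].strip()
+     nums = re.findall(r"-?\d+(?:\.\d+)?", text)
+     return nums[-1] if nums else None
+ ```
+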
+ ### MMLU-Pro subject breakdown
+
+ The standard reasoning-model profile: strong on STEM, weaker on law and engineering. All subjects evaluated at limit 500, 5-shot multiturn.
+
+ | Subject | Acc | Subject | Acc |
+ |---|---:|---|---:|
+ | Biology | 86.0% | Chemistry | 78.8% |
+ | Psychology | 83.4% | Health | 73.8% |
+ | Math | 83.6% | Business | 74.4% |
+ | Economics | 83.0% | Other | 72.6% |
+ | Physics | 81.0% | Philosophy | 71.3% |
+ | Computer Science | 79.0% | History | 70.9% |
+ | | | **Engineering** | **54.8%** |
+ | | | **Law** | **55.6%** |
+
+ Full per-task JSON with stderr, filter configs, and timings lives in the [evals dataset](https://huggingface.co/datasets/lordx64/qwen3-6-distill-evals/tree/main/reasoning/lordx64__Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled). The remaining tasks will be added to this table once a diagnostic rerun identifies why AIME/GPQA extraction returns no match on the generated outputs.
+
+ ## Limitations
+
+ - **Reasoning ≠ knowledge.** Distillation transfers *how to reason*, not new facts. Anything the base Qwen3.6-35B-A3B doesn't already know, this model still doesn't know.
+ - **Attention-only LoRA.** Expert FFNs are untouched from the base; domains where Claude and Qwen3.6 diverge in factual priors may see uneven improvement.
+ - **Long generations.** The model will genuinely use tens of thousands of tokens on hard problems. Budget your `max_new_tokens` accordingly, and provide `max_model_len ≥ 32k` at inference.
+ - **Distillation provenance.** Training data was generated with Anthropic's Claude Opus 4.7 via API. Downstream users should confirm compliance with Anthropic's [usage policies](https://www.anthropic.com/legal/usage-policy) for their specific use case.
+
+ ## Citation
+
+ If you use this model, please cite both the base model and the distillation:
+
+ ```bibtex
+ @misc{qwen36_a3b_2026,
+   title        = {Qwen3.6-35B-A3B},
+   author       = {{Qwen Team}},
+   year         = {2026},
+   howpublished = {\url{https://huggingface.co/Qwen/Qwen3.6-35B-A3B}},
+ }
+
+ @misc{lordx64_qwen36_distill_2026,
+   title        = {Qwen3.6-35B-A3B distilled from Claude Opus 4.7 reasoning},
+   author       = {lordx64},
+   year         = {2026},
+   howpublished = {\url{https://huggingface.co/lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled}},
+ }
+ ```
+
+ ## Acknowledgements
+
+ - **Unsloth**: 2× faster training of large MoE LoRA; the bug we hit and fixed was in their `unsloth-zoo` patches (credit for the rapid review of PR #601).
+ - **Anthropic**: for the teacher model.
+ - **Qwen team**: for releasing Qwen3.6 under the permissive Apache-2.0 license, enabling work like this.
+ - **lm-evaluation-harness (EleutherAI)**: evaluation methodology.
chat_template.jinja ADDED
@@ -0,0 +1,158 @@
+ {%- set image_count = namespace(value=0) %}
+ {%- set video_count = namespace(value=0) %}
+ {%- macro render_content(content, do_vision_count, is_system_content=false) %}
+ {%- if content is string %}
+ {{- content }}
+ {%- elif content is iterable and content is not mapping %}
+ {%- for item in content %}
+ {%- if 'image' in item or 'image_url' in item or item.type == 'image' %}
+ {%- if is_system_content %}
+ {{- raise_exception('System message cannot contain images.') }}
+ {%- endif %}
+ {%- if do_vision_count %}
+ {%- set image_count.value = image_count.value + 1 %}
+ {%- endif %}
+ {%- if add_vision_id %}
+ {{- 'Picture ' ~ image_count.value ~ ': ' }}
+ {%- endif %}
+ {{- '<|vision_start|><|image_pad|><|vision_end|>' }}
+ {%- elif 'video' in item or item.type == 'video' %}
+ {%- if is_system_content %}
+ {{- raise_exception('System message cannot contain videos.') }}
+ {%- endif %}
+ {%- if do_vision_count %}
+ {%- set video_count.value = video_count.value + 1 %}
+ {%- endif %}
+ {%- if add_vision_id %}
+ {{- 'Video ' ~ video_count.value ~ ': ' }}
+ {%- endif %}
+ {{- '<|vision_start|><|video_pad|><|vision_end|>' }}
+ {%- elif 'text' in item %}
+ {{- item.text }}
+ {%- else %}
+ {{- raise_exception('Unexpected item type in content.') }}
+ {%- endif %}
+ {%- endfor %}
+ {%- elif content is none or content is undefined %}
+ {{- '' }}
+ {%- else %}
+ {{- raise_exception('Unexpected content type.') }}
+ {%- endif %}
+ {%- endmacro %}
+ {%- if not messages %}
+ {{- raise_exception('No messages provided.') }}
+ {%- endif %}
+ {%- set num_sys = 0 %}
+ {%- set merged_system = '' %}
+ {%- if messages[0].role == 'system' or messages[0].role == 'developer' %}
+ {%- set first = render_content(messages[0].content, false, true)|trim %}
+ {%- if messages|length > 1 and (messages[1].role == 'system' or messages[1].role == 'developer') %}
+ {%- set second = render_content(messages[1].content, false, true)|trim %}
+ {%- set merged_system = first + '\n' + second %}
+ {%- set num_sys = 2 %}
+ {%- else %}
+ {%- set merged_system = first %}
+ {%- set num_sys = 1 %}
+ {%- endif %}
+ {%- endif %}
+ {%- if tools and tools is iterable and tools is not mapping %}
+ {{- '<|im_start|>system\n' }}
+ {{- "# Tools\n\nYou have access to the following functions:\n\n<tools>" }}
+ {%- for tool in tools %}
+ {{- "\n" }}
+ {{- tool | tojson }}
+ {%- endfor %}
+ {{- "\n</tools>" }}
+ {{- '\n\nIf you choose to call a function ONLY reply in the following format with NO suffix:\n\n<tool_call>\n<function=example_function_name>\n<parameter=example_parameter_1>\nvalue_1\n</parameter>\n<parameter=example_parameter_2>\nThis is the value for the second parameter\nthat can span\nmultiple lines\n</parameter>\n</function>\n</tool_call>\n\n<IMPORTANT>\nReminder:\n- Function calls MUST follow the specified format: an inner <function=...></function> block must be nested within <tool_call></tool_call> XML tags\n- Required parameters MUST be specified\n- You may provide optional reasoning for your function call in natural language BEFORE the function call, but NOT after\n- If there is no function call available, answer the question like normal with your current knowledge and do not tell the user about function calls\n</IMPORTANT>' }}
+ {%- if merged_system %}
+ {{- '\n\n' + merged_system }}
+ {%- endif %}
+ {{- '<|im_end|>\n' }}
+ {%- else %}
+ {%- if merged_system %}
+ {{- '<|im_start|>system\n' + merged_system + '<|im_end|>\n' }}
+ {%- endif %}
+ {%- endif %}
+ {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
+ {%- for message in messages[::-1] %}
+ {%- set index = (messages|length - 1) - loop.index0 %}
+ {%- if ns.multi_step_tool and message.role == "user" %}
+ {%- set content = render_content(message.content, false)|trim %}
+ {%- if not(content.startswith('<tool_response>') and content.endswith('</tool_response>')) %}
+ {%- set ns.multi_step_tool = false %}
+ {%- set ns.last_query_index = index %}
+ {%- endif %}
+ {%- endif %}
+ {%- endfor %}
+ {%- for message in messages %}
+ {%- if loop.index0 >= num_sys and message.role != "system" and message.role != "developer" %}
+ {%- set content = render_content(message.content, true)|trim %}
+ {%- if message.role == "user" %}
+ {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
+ {%- elif message.role == "assistant" %}
+ {%- set reasoning_content = '' %}
+ {%- if message.reasoning_content is string %}
+ {%- set reasoning_content = message.reasoning_content %}
+ {%- else %}
+ {%- if '</think>' in content %}
+ {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
+ {%- set content = content.split('</think>')[-1].lstrip('\n') %}
+ {%- endif %}
+ {%- endif %}
+ {%- set reasoning_content = reasoning_content|trim %}
+ {%- if (preserve_thinking is defined and preserve_thinking is true) or (loop.index0 > ns.last_query_index) %}
+ {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content + '\n</think>\n\n' + content }}
+ {%- else %}
+ {{- '<|im_start|>' + message.role + '\n' + content }}
+ {%- endif %}
+ {%- if message.tool_calls and message.tool_calls is iterable and message.tool_calls is not mapping %}
+ {%- for tool_call in message.tool_calls %}
+ {%- if tool_call.function is defined %}
+ {%- set tool_call = tool_call.function %}
+ {%- endif %}
+ {%- if loop.first %}
+ {%- if content|trim %}
+ {{- '\n\n<tool_call>\n<function=' + tool_call.name + '>\n' }}
+ {%- else %}
+ {{- '<tool_call>\n<function=' + tool_call.name + '>\n' }}
+ {%- endif %}
+ {%- else %}
+ {{- '\n<tool_call>\n<function=' + tool_call.name + '>\n' }}
+ {%- endif %}
+ {%- if tool_call.arguments is mapping %}
+ {%- for args_name in tool_call.arguments %}
+ {%- set args_value = tool_call.arguments[args_name] %}
+ {{- '<parameter=' + args_name + '>\n' }}
+ {%- set args_value = args_value | tojson | safe if args_value is mapping or (args_value is sequence and args_value is not string) else args_value | string %}
+ {{- args_value }}
+ {{- '\n</parameter>\n' }}
+ {%- endfor %}
+ {%- endif %}
+ {{- '</function>\n</tool_call>' }}
+ {%- endfor %}
+ {%- endif %}
+ {{- '<|im_end|>\n' }}
+ {%- elif message.role == "tool" %}
+ {%- if loop.previtem and loop.previtem.role != "tool" %}
+ {{- '<|im_start|>user' }}
+ {%- endif %}
+ {{- '\n<tool_response>\n' }}
+ {{- content }}
+ {{- '\n</tool_response>' }}
+ {%- if not loop.last and loop.nextitem.role != "tool" %}
+ {{- '<|im_end|>\n' }}
+ {%- elif loop.last %}
+ {{- '<|im_end|>\n' }}
+ {%- endif %}
+ {%- endif %}
+ {%- endif %}
+ {%- endfor %}
+ {%- if add_generation_prompt %}
+ {{- '<|im_start|>assistant\n' }}
+ {%- if enable_thinking is defined and enable_thinking is false %}
+ {{- '<think>\n\n</think>\n\n' }}
+ {%- else %}
+ {{- '<think>\n' }}
+ {%- endif %}
+ {%- endif %}
+ {#- Unsloth fixes - developer role, tool calling #}
config.json ADDED
@@ -0,0 +1,126 @@
+ {
+   "architectures": [
+     "Qwen3_5MoeForConditionalGeneration"
+   ],
+   "bos_token_id": null,
+   "torch_dtype": "bfloat16",
+   "eos_token_id": 248046,
+   "image_token_id": 248056,
+   "model_name": "unsloth/Qwen3.6-35B-A3B",
+   "model_type": "qwen3_5_moe",
+   "pad_token_id": 248055,
+   "text_config": {
+     "attention_bias": false,
+     "attention_dropout": 0.0,
+     "attn_output_gate": true,
+     "bos_token_id": 248044,
+     "torch_dtype": "bfloat16",
+     "eos_token_id": 248044,
+     "full_attention_interval": 4,
+     "head_dim": 256,
+     "hidden_act": "silu",
+     "hidden_size": 2048,
+     "initializer_range": 0.02,
+     "layer_types": [
+       "linear_attention",
+       "linear_attention",
+       "linear_attention",
+       "full_attention",
+       "linear_attention",
+       "linear_attention",
+       "linear_attention",
+       "full_attention",
+       "linear_attention",
+       "linear_attention",
+       "linear_attention",
+       "full_attention",
+       "linear_attention",
+       "linear_attention",
+       "linear_attention",
+       "full_attention",
+       "linear_attention",
+       "linear_attention",
+       "linear_attention",
+       "full_attention",
+       "linear_attention",
+       "linear_attention",
+       "linear_attention",
+       "full_attention",
+       "linear_attention",
+       "linear_attention",
+       "linear_attention",
+       "full_attention",
+       "linear_attention",
+       "linear_attention",
+       "linear_attention",
+       "full_attention",
+       "linear_attention",
+       "linear_attention",
+       "linear_attention",
+       "full_attention",
+       "linear_attention",
+       "linear_attention",
+       "linear_attention",
+       "full_attention"
+     ],
+     "linear_conv_kernel_dim": 4,
+     "linear_key_head_dim": 128,
+     "linear_num_key_heads": 16,
+     "linear_num_value_heads": 32,
+     "linear_value_head_dim": 128,
+     "mamba_ssm_dtype": "float32",
+     "max_position_embeddings": 262144,
+     "model_type": "qwen3_5_moe_text",
+     "moe_intermediate_size": 512,
+     "mtp_num_hidden_layers": 1,
+     "mtp_use_dedicated_embeddings": false,
+     "num_attention_heads": 16,
+     "num_experts": 256,
+     "num_experts_per_tok": 8,
+     "num_hidden_layers": 40,
+     "num_key_value_heads": 2,
+     "output_router_logits": false,
+     "pad_token_id": null,
+     "partial_rotary_factor": 0.25,
+     "rms_norm_eps": 1e-06,
+     "rope_parameters": {
+       "mrope_interleaved": true,
+       "mrope_section": [
+         11,
+         11,
+         10
+       ],
+       "partial_rotary_factor": 0.25,
+       "rope_theta": 10000000,
+       "rope_type": "default"
+     },
+     "router_aux_loss_coef": 0.001,
+     "shared_expert_intermediate_size": 512,
+     "tie_word_embeddings": false,
+     "use_cache": true,
+     "vocab_size": 248320
+   },
+   "tie_word_embeddings": false,
+   "unsloth_version": "2026.4.1",
+   "use_cache": false,
+   "video_token_id": 248057,
+   "vision_config": {
+     "deepstack_visual_indexes": [],
+     "depth": 27,
+     "torch_dtype": "bfloat16",
+     "hidden_act": "gelu_pytorch_tanh",
+     "hidden_size": 1152,
+     "in_channels": 3,
+     "initializer_range": 0.02,
+     "intermediate_size": 4304,
+     "model_type": "qwen3_5_moe",
+     "num_heads": 16,
+     "num_position_embeddings": 2304,
+     "out_hidden_size": 2048,
+     "patch_size": 16,
+     "spatial_merge_size": 2,
+     "temporal_patch_size": 2
+   },
+   "vision_end_token_id": 248054,
+   "vision_start_token_id": 248053
+ }
model-00001-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:adee7bcb930aed22e0677e58d4873b48dadb1ed8001cb5c6a0487286eadb3478
+ size 3996199712
model-00002-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:88f2dfd2b9e73e4b70be533dbf61bcfa3c9a0003758900fcbc9d9b96f5751d4b
+ size 1284907696
model-00003-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4bcbe6420a532c2d2a30657b4ff3f0f4c6d40843aaebfbd50d68be3818e11294
+ size 3357898360
model-00004-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:12d7db38689ba3c8af74b23ef8523eca41e0cd95db870583d0663a3ee8a6bd60
+ size 3370808712
model-00005-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a4fcf07b8954e56eda92cf1103d7ef5727484bdfc323f5eef8bf5763b0609db1
+ size 3357898360
model-00006-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c9080d718e9c5f9e337443225aa417d4c24d00ae7995d76ee3f1cc296b557d15
+ size 3959424904
model-00007-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e8c05e23131b1dd45a455ec38cfac7db14667358268623c3938d00cf3e959a68
+ size 1096788232
model-00008-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:062546a6aac87eed6d5e18438845ae711d9178db0d6eb27929249a7d9104b480
+ size 3946842008
model-00009-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a31a954bb72d1c714e751bf0aabf2ff533f5a509693ebf7dd22ad6e90be46f67
+ size 1096460848
model-00010-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:02b560690e2c05c1e07ec52902aabd0cd64480a85e29cbc717abde29821fcc92
+ size 3946841992
model-00011-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7180392817fe3ecb3a27a1da43b7ff22c1a94806bac49975f9f122c3126df675
+ size 1096460752
model-00012-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2b47c5a65265b8b1b1f4d550a11587d67f66e39c98f7798e56c502c3ef30a67f
+ size 3409971080
model-00013-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:33a20fb20a21379bf43c84a43105f9c0cc35bd50d740b1c302dcbe4b700f5425
+ size 1633331664
model-00014-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:be823e33c5cb6120ad3769d081f34a2449dc2358041fca7c29d636c1ba19130d
+ size 3422553872
model-00015-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a89d547c6f9d0b535ee5ea2f2478f163089539f3f0dd330cb23d278a19d76123
+ size 1633659224
model-00016-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f71c93f4987f42636aebe49df96fe307ab3834fbe8ebf39a4ec5e4533fc9ea4
+ size 3946842136
model-00017-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e356e3943cf3852b76bb8992e674f3256013e27d54b78e8250514151cdc29637
+ size 1096460608
model-00018-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b45f0474bee313ce40f36cd986c026bf00ec5f521d84cbd03b3a4599b9b5e0d0
+ size 3946841992
model-00019-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:708644ad34f1de727bf484f396944d8ec628645d52c183e9a992e65671685e21
+ size 1096460808
model-00020-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef3969e9e20e86e8e10e963dc1ace1a6cdcbef3d31a2750d53857c9163a49b5e
+ size 3409971072
model-00021-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ada4ae48f3d48fe01b4c53f2f82bce25e798a9631fd33959c881156fef2ccbce
+ size 1633331744
model-00022-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:def207fb42d7db31efb512755557763c23233c6e4d4c433027cb5102a7bce2f7
+ size 3370808752
model-00023-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0fa5fba3bcd66378c5e8bebd0ba1f13aec4a11137aedd0e09903e399709d0741
+ size 3357898392
model-00024-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:391acd27420cdce5935ff18152423c70620d19dac3c39a5ef1a81d369f82d737
+ size 3370808752
model-00025-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:778e7f76602f05042b69ba7f3ec91f1fdffef390540b16074041c258fb81d154
+ size 3832888256
model-00026-of-00026.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:690fc9b473621944ebb86cba5808f7c835206a50ed581b6d5175a936d038c51d
+ size 2231416848
model.safetensors.index.json ADDED
@@ -0,0 +1,1052 @@
+ {
+   "metadata": {
+     "total_size": 71903645408.0
+   },
+   "weight_map": {
+     "lm_head.weight": "model-00026-of-00026.safetensors",
+     "model.language_model.embed_tokens.weight": "model-00001-of-00026.safetensors",
+     "model.language_model.layers.0.input_layernorm.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.0.linear_attn.A_log": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.0.linear_attn.conv1d.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.0.linear_attn.dt_bias": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.0.linear_attn.in_proj_a.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.0.linear_attn.in_proj_b.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.0.linear_attn.in_proj_qkv.weight": "model-00001-of-00026.safetensors",
+     "model.language_model.layers.0.linear_attn.in_proj_z.weight": "model-00001-of-00026.safetensors",
+     "model.language_model.layers.0.linear_attn.norm.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.0.linear_attn.out_proj.weight": "model-00001-of-00026.safetensors",
+     "model.language_model.layers.0.mlp.experts.down_proj": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.0.mlp.experts.gate_up_proj": "model-00001-of-00026.safetensors",
+     "model.language_model.layers.0.mlp.gate.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.0.mlp.shared_expert.down_proj.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.0.mlp.shared_expert.gate_proj.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.0.mlp.shared_expert.up_proj.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.0.mlp.shared_expert_gate.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.0.post_attention_layernorm.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.1.input_layernorm.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.1.linear_attn.A_log": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.1.linear_attn.conv1d.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.1.linear_attn.dt_bias": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.1.linear_attn.in_proj_a.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.1.linear_attn.in_proj_b.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.1.linear_attn.in_proj_qkv.weight": "model-00001-of-00026.safetensors",
+     "model.language_model.layers.1.linear_attn.in_proj_z.weight": "model-00001-of-00026.safetensors",
+     "model.language_model.layers.1.linear_attn.norm.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.1.linear_attn.out_proj.weight": "model-00001-of-00026.safetensors",
+     "model.language_model.layers.1.mlp.experts.down_proj": "model-00001-of-00026.safetensors",
+     "model.language_model.layers.1.mlp.experts.gate_up_proj": "model-00001-of-00026.safetensors",
+     "model.language_model.layers.1.mlp.gate.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.1.mlp.shared_expert.down_proj.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.1.mlp.shared_expert.gate_proj.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.1.mlp.shared_expert.up_proj.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.1.mlp.shared_expert_gate.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.1.post_attention_layernorm.weight": "model-00002-of-00026.safetensors",
+     "model.language_model.layers.10.input_layernorm.weight": "model-00007-of-00026.safetensors",
+     "model.language_model.layers.10.linear_attn.A_log": "model-00007-of-00026.safetensors",
+     "model.language_model.layers.10.linear_attn.conv1d.weight": "model-00007-of-00026.safetensors",
+     "model.language_model.layers.10.linear_attn.dt_bias": "model-00007-of-00026.safetensors",
+     "model.language_model.layers.10.linear_attn.in_proj_a.weight": "model-00007-of-00026.safetensors",
+     "model.language_model.layers.10.linear_attn.in_proj_b.weight": "model-00007-of-00026.safetensors",
+     "model.language_model.layers.10.linear_attn.in_proj_qkv.weight": "model-00006-of-00026.safetensors",
+     "model.language_model.layers.10.linear_attn.in_proj_z.weight": "model-00006-of-00026.safetensors",
+     "model.language_model.layers.10.linear_attn.norm.weight": "model-00007-of-00026.safetensors",
+     "model.language_model.layers.10.linear_attn.out_proj.weight": "model-00006-of-00026.safetensors",
+     "model.language_model.layers.10.mlp.experts.down_proj": "model-00006-of-00026.safetensors",
+     "model.language_model.layers.10.mlp.experts.gate_up_proj": "model-00007-of-00026.safetensors",
+     "model.language_model.layers.10.mlp.gate.weight": "model-00007-of-00026.safetensors",
+     "model.language_model.layers.10.mlp.shared_expert.down_proj.weight": "model-00007-of-00026.safetensors",
+     "model.language_model.layers.10.mlp.shared_expert.gate_proj.weight": "model-00007-of-00026.safetensors",
+     "model.language_model.layers.10.mlp.shared_expert.up_proj.weight": "model-00007-of-00026.safetensors",
+     "model.language_model.layers.10.mlp.shared_expert_gate.weight": "model-00007-of-00026.safetensors",
+     "model.language_model.layers.10.post_attention_layernorm.weight": "model-00007-of-00026.safetensors",
+     "model.language_model.layers.11.input_layernorm.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.11.mlp.experts.down_proj": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.11.mlp.experts.gate_up_proj": "model-00008-of-00026.safetensors",
+     "model.language_model.layers.11.mlp.gate.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.11.mlp.shared_expert.down_proj.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.11.mlp.shared_expert.gate_proj.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.11.mlp.shared_expert.up_proj.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.11.mlp.shared_expert_gate.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.11.post_attention_layernorm.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.11.self_attn.k_norm.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.11.self_attn.k_proj.weight": "model-00008-of-00026.safetensors",
+     "model.language_model.layers.11.self_attn.o_proj.weight": "model-00008-of-00026.safetensors",
+     "model.language_model.layers.11.self_attn.q_norm.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.11.self_attn.q_proj.weight": "model-00008-of-00026.safetensors",
+     "model.language_model.layers.11.self_attn.v_proj.weight": "model-00008-of-00026.safetensors",
+     "model.language_model.layers.12.input_layernorm.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.12.linear_attn.A_log": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.12.linear_attn.conv1d.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.12.linear_attn.dt_bias": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.12.linear_attn.in_proj_a.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.12.linear_attn.in_proj_b.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.12.linear_attn.in_proj_qkv.weight": "model-00008-of-00026.safetensors",
+     "model.language_model.layers.12.linear_attn.in_proj_z.weight": "model-00008-of-00026.safetensors",
+     "model.language_model.layers.12.linear_attn.norm.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.12.linear_attn.out_proj.weight": "model-00008-of-00026.safetensors",
+     "model.language_model.layers.12.mlp.experts.down_proj": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.12.mlp.experts.gate_up_proj": "model-00008-of-00026.safetensors",
+     "model.language_model.layers.12.mlp.gate.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.12.mlp.shared_expert.down_proj.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.12.mlp.shared_expert.gate_proj.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.12.mlp.shared_expert.up_proj.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.12.mlp.shared_expert_gate.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.12.post_attention_layernorm.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.13.input_layernorm.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.13.linear_attn.A_log": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.13.linear_attn.conv1d.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.13.linear_attn.dt_bias": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.13.linear_attn.in_proj_a.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.13.linear_attn.in_proj_b.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.13.linear_attn.in_proj_qkv.weight": "model-00008-of-00026.safetensors",
+     "model.language_model.layers.13.linear_attn.in_proj_z.weight": "model-00008-of-00026.safetensors",
+     "model.language_model.layers.13.linear_attn.norm.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.13.linear_attn.out_proj.weight": "model-00008-of-00026.safetensors",
+     "model.language_model.layers.13.mlp.experts.down_proj": "model-00008-of-00026.safetensors",
+     "model.language_model.layers.13.mlp.experts.gate_up_proj": "model-00008-of-00026.safetensors",
+     "model.language_model.layers.13.mlp.gate.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.13.mlp.shared_expert.down_proj.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.13.mlp.shared_expert.gate_proj.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.13.mlp.shared_expert.up_proj.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.13.mlp.shared_expert_gate.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.13.post_attention_layernorm.weight": "model-00009-of-00026.safetensors",
+     "model.language_model.layers.14.input_layernorm.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.14.linear_attn.A_log": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.14.linear_attn.conv1d.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.14.linear_attn.dt_bias": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.14.linear_attn.in_proj_a.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.14.linear_attn.in_proj_b.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.14.linear_attn.in_proj_qkv.weight": "model-00010-of-00026.safetensors",
+     "model.language_model.layers.14.linear_attn.in_proj_z.weight": "model-00010-of-00026.safetensors",
+     "model.language_model.layers.14.linear_attn.norm.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.14.linear_attn.out_proj.weight": "model-00010-of-00026.safetensors",
+     "model.language_model.layers.14.mlp.experts.down_proj": "model-00010-of-00026.safetensors",
+     "model.language_model.layers.14.mlp.experts.gate_up_proj": "model-00010-of-00026.safetensors",
+     "model.language_model.layers.14.mlp.gate.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.14.mlp.shared_expert.down_proj.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.14.mlp.shared_expert.gate_proj.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.14.mlp.shared_expert.up_proj.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.14.mlp.shared_expert_gate.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.14.post_attention_layernorm.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.15.input_layernorm.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.15.mlp.experts.down_proj": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.15.mlp.experts.gate_up_proj": "model-00010-of-00026.safetensors",
+     "model.language_model.layers.15.mlp.gate.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.15.mlp.shared_expert.down_proj.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.15.mlp.shared_expert.gate_proj.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.15.mlp.shared_expert.up_proj.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.15.mlp.shared_expert_gate.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.15.post_attention_layernorm.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.15.self_attn.k_norm.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.15.self_attn.k_proj.weight": "model-00010-of-00026.safetensors",
+     "model.language_model.layers.15.self_attn.o_proj.weight": "model-00010-of-00026.safetensors",
+     "model.language_model.layers.15.self_attn.q_norm.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.15.self_attn.q_proj.weight": "model-00010-of-00026.safetensors",
+     "model.language_model.layers.15.self_attn.v_proj.weight": "model-00010-of-00026.safetensors",
+     "model.language_model.layers.16.input_layernorm.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.16.linear_attn.A_log": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.16.linear_attn.conv1d.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.16.linear_attn.dt_bias": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.16.linear_attn.in_proj_a.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.16.linear_attn.in_proj_b.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.16.linear_attn.in_proj_qkv.weight": "model-00010-of-00026.safetensors",
+     "model.language_model.layers.16.linear_attn.in_proj_z.weight": "model-00010-of-00026.safetensors",
+     "model.language_model.layers.16.linear_attn.norm.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.16.linear_attn.out_proj.weight": "model-00010-of-00026.safetensors",
+     "model.language_model.layers.16.mlp.experts.down_proj": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.16.mlp.experts.gate_up_proj": "model-00010-of-00026.safetensors",
+     "model.language_model.layers.16.mlp.gate.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.16.mlp.shared_expert.down_proj.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.16.mlp.shared_expert.gate_proj.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.16.mlp.shared_expert.up_proj.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.16.mlp.shared_expert_gate.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.16.post_attention_layernorm.weight": "model-00011-of-00026.safetensors",
+     "model.language_model.layers.17.input_layernorm.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.17.linear_attn.A_log": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.17.linear_attn.conv1d.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.17.linear_attn.dt_bias": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.17.linear_attn.in_proj_a.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.17.linear_attn.in_proj_b.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.17.linear_attn.in_proj_qkv.weight": "model-00012-of-00026.safetensors",
+     "model.language_model.layers.17.linear_attn.in_proj_z.weight": "model-00012-of-00026.safetensors",
+     "model.language_model.layers.17.linear_attn.norm.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.17.linear_attn.out_proj.weight": "model-00012-of-00026.safetensors",
+     "model.language_model.layers.17.mlp.experts.down_proj": "model-00012-of-00026.safetensors",
+     "model.language_model.layers.17.mlp.experts.gate_up_proj": "model-00012-of-00026.safetensors",
+     "model.language_model.layers.17.mlp.gate.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.17.mlp.shared_expert.down_proj.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.17.mlp.shared_expert.gate_proj.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.17.mlp.shared_expert.up_proj.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.17.mlp.shared_expert_gate.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.17.post_attention_layernorm.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.18.input_layernorm.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.18.linear_attn.A_log": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.18.linear_attn.conv1d.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.18.linear_attn.dt_bias": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.18.linear_attn.in_proj_a.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.18.linear_attn.in_proj_b.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.18.linear_attn.in_proj_qkv.weight": "model-00012-of-00026.safetensors",
+     "model.language_model.layers.18.linear_attn.in_proj_z.weight": "model-00012-of-00026.safetensors",
+     "model.language_model.layers.18.linear_attn.norm.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.18.linear_attn.out_proj.weight": "model-00012-of-00026.safetensors",
+     "model.language_model.layers.18.mlp.experts.down_proj": "model-00012-of-00026.safetensors",
+     "model.language_model.layers.18.mlp.experts.gate_up_proj": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.18.mlp.gate.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.18.mlp.shared_expert.down_proj.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.18.mlp.shared_expert.gate_proj.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.18.mlp.shared_expert.up_proj.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.18.mlp.shared_expert_gate.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.18.post_attention_layernorm.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.19.input_layernorm.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.19.mlp.experts.down_proj": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.19.mlp.experts.gate_up_proj": "model-00012-of-00026.safetensors",
+     "model.language_model.layers.19.mlp.gate.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.19.mlp.shared_expert.down_proj.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.19.mlp.shared_expert.gate_proj.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.19.mlp.shared_expert.up_proj.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.19.mlp.shared_expert_gate.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.19.post_attention_layernorm.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.19.self_attn.k_norm.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.19.self_attn.k_proj.weight": "model-00012-of-00026.safetensors",
+     "model.language_model.layers.19.self_attn.o_proj.weight": "model-00012-of-00026.safetensors",
+     "model.language_model.layers.19.self_attn.q_norm.weight": "model-00013-of-00026.safetensors",
+     "model.language_model.layers.19.self_attn.q_proj.weight": "model-00012-of-00026.safetensors",
+     "model.language_model.layers.19.self_attn.v_proj.weight": "model-00012-of-00026.safetensors",
+     "model.language_model.layers.2.input_layernorm.weight": "model-00003-of-00026.safetensors",
+     "model.language_model.layers.2.linear_attn.A_log": "model-00003-of-00026.safetensors",
+     "model.language_model.layers.2.linear_attn.conv1d.weight": "model-00003-of-00026.safetensors",
+     "model.language_model.layers.2.linear_attn.dt_bias": "model-00003-of-00026.safetensors",
+     "model.language_model.layers.2.linear_attn.in_proj_a.weight": "model-00003-of-00026.safetensors",
+     "model.language_model.layers.2.linear_attn.in_proj_b.weight": "model-00003-of-00026.safetensors",
+     "model.language_model.layers.2.linear_attn.in_proj_qkv.weight": "model-00003-of-00026.safetensors",
+     "model.language_model.layers.2.linear_attn.in_proj_z.weight": "model-00003-of-00026.safetensors",
+     "model.language_model.layers.2.linear_attn.norm.weight": "model-00003-of-00026.safetensors",
+     "model.language_model.layers.2.linear_attn.out_proj.weight": "model-00003-of-00026.safetensors",
+     "model.language_model.layers.2.mlp.experts.down_proj": "model-00003-of-00026.safetensors",
+     "model.language_model.layers.2.mlp.experts.gate_up_proj": "model-00003-of-00026.safetensors",
+     "model.language_model.layers.2.mlp.gate.weight": "model-00003-of-00026.safetensors",
+     "model.language_model.layers.2.mlp.shared_expert.down_proj.weight": "model-00003-of-00026.safetensors",
+     "model.language_model.layers.2.mlp.shared_expert.gate_proj.weight": "model-00003-of-00026.safetensors",
+     "model.language_model.layers.2.mlp.shared_expert.up_proj.weight": "model-00003-of-00026.safetensors",
+     "model.language_model.layers.2.mlp.shared_expert_gate.weight": "model-00003-of-00026.safetensors",
+     "model.language_model.layers.2.post_attention_layernorm.weight": "model-00003-of-00026.safetensors",
+     "model.language_model.layers.20.input_layernorm.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.20.linear_attn.A_log": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.20.linear_attn.conv1d.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.20.linear_attn.dt_bias": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.20.linear_attn.in_proj_a.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.20.linear_attn.in_proj_b.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.20.linear_attn.in_proj_qkv.weight": "model-00014-of-00026.safetensors",
+     "model.language_model.layers.20.linear_attn.in_proj_z.weight": "model-00014-of-00026.safetensors",
+     "model.language_model.layers.20.linear_attn.norm.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.20.linear_attn.out_proj.weight": "model-00014-of-00026.safetensors",
+     "model.language_model.layers.20.mlp.experts.down_proj": "model-00014-of-00026.safetensors",
+     "model.language_model.layers.20.mlp.experts.gate_up_proj": "model-00014-of-00026.safetensors",
+     "model.language_model.layers.20.mlp.gate.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.20.mlp.shared_expert.down_proj.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.20.mlp.shared_expert.gate_proj.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.20.mlp.shared_expert.up_proj.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.20.mlp.shared_expert_gate.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.20.post_attention_layernorm.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.21.input_layernorm.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.21.linear_attn.A_log": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.21.linear_attn.conv1d.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.21.linear_attn.dt_bias": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.21.linear_attn.in_proj_a.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.21.linear_attn.in_proj_b.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.21.linear_attn.in_proj_qkv.weight": "model-00014-of-00026.safetensors",
+     "model.language_model.layers.21.linear_attn.in_proj_z.weight": "model-00014-of-00026.safetensors",
+     "model.language_model.layers.21.linear_attn.norm.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.21.linear_attn.out_proj.weight": "model-00014-of-00026.safetensors",
+     "model.language_model.layers.21.mlp.experts.down_proj": "model-00014-of-00026.safetensors",
+     "model.language_model.layers.21.mlp.experts.gate_up_proj": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.21.mlp.gate.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.21.mlp.shared_expert.down_proj.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.21.mlp.shared_expert.gate_proj.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.21.mlp.shared_expert.up_proj.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.21.mlp.shared_expert_gate.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.21.post_attention_layernorm.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.22.input_layernorm.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.22.linear_attn.A_log": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.22.linear_attn.conv1d.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.22.linear_attn.dt_bias": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.22.linear_attn.in_proj_a.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.22.linear_attn.in_proj_b.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.22.linear_attn.in_proj_qkv.weight": "model-00014-of-00026.safetensors",
+     "model.language_model.layers.22.linear_attn.in_proj_z.weight": "model-00014-of-00026.safetensors",
+     "model.language_model.layers.22.linear_attn.norm.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.22.linear_attn.out_proj.weight": "model-00014-of-00026.safetensors",
+     "model.language_model.layers.22.mlp.experts.down_proj": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.22.mlp.experts.gate_up_proj": "model-00014-of-00026.safetensors",
+     "model.language_model.layers.22.mlp.gate.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.22.mlp.shared_expert.down_proj.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.22.mlp.shared_expert.gate_proj.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.22.mlp.shared_expert.up_proj.weight": "model-00015-of-00026.safetensors",
+     "model.language_model.layers.22.mlp.shared_expert_gate.weight": "model-00015-of-00026.safetensors",
286
+ "model.language_model.layers.22.post_attention_layernorm.weight": "model-00015-of-00026.safetensors",
287
+ "model.language_model.layers.23.input_layernorm.weight": "model-00017-of-00026.safetensors",
288
+ "model.language_model.layers.23.mlp.experts.down_proj": "model-00016-of-00026.safetensors",
289
+ "model.language_model.layers.23.mlp.experts.gate_up_proj": "model-00016-of-00026.safetensors",
290
+ "model.language_model.layers.23.mlp.gate.weight": "model-00017-of-00026.safetensors",
291
+ "model.language_model.layers.23.mlp.shared_expert.down_proj.weight": "model-00017-of-00026.safetensors",
292
+ "model.language_model.layers.23.mlp.shared_expert.gate_proj.weight": "model-00017-of-00026.safetensors",
293
+ "model.language_model.layers.23.mlp.shared_expert.up_proj.weight": "model-00017-of-00026.safetensors",
294
+ "model.language_model.layers.23.mlp.shared_expert_gate.weight": "model-00017-of-00026.safetensors",
295
+ "model.language_model.layers.23.post_attention_layernorm.weight": "model-00017-of-00026.safetensors",
296
+ "model.language_model.layers.23.self_attn.k_norm.weight": "model-00017-of-00026.safetensors",
297
+ "model.language_model.layers.23.self_attn.k_proj.weight": "model-00016-of-00026.safetensors",
298
+ "model.language_model.layers.23.self_attn.o_proj.weight": "model-00016-of-00026.safetensors",
299
+ "model.language_model.layers.23.self_attn.q_norm.weight": "model-00017-of-00026.safetensors",
300
+ "model.language_model.layers.23.self_attn.q_proj.weight": "model-00016-of-00026.safetensors",
301
+ "model.language_model.layers.23.self_attn.v_proj.weight": "model-00016-of-00026.safetensors",
302
+ "model.language_model.layers.24.input_layernorm.weight": "model-00017-of-00026.safetensors",
303
+ "model.language_model.layers.24.linear_attn.A_log": "model-00017-of-00026.safetensors",
304
+ "model.language_model.layers.24.linear_attn.conv1d.weight": "model-00017-of-00026.safetensors",
305
+ "model.language_model.layers.24.linear_attn.dt_bias": "model-00017-of-00026.safetensors",
306
+ "model.language_model.layers.24.linear_attn.in_proj_a.weight": "model-00017-of-00026.safetensors",
307
+ "model.language_model.layers.24.linear_attn.in_proj_b.weight": "model-00017-of-00026.safetensors",
308
+ "model.language_model.layers.24.linear_attn.in_proj_qkv.weight": "model-00016-of-00026.safetensors",
309
+ "model.language_model.layers.24.linear_attn.in_proj_z.weight": "model-00016-of-00026.safetensors",
310
+ "model.language_model.layers.24.linear_attn.norm.weight": "model-00017-of-00026.safetensors",
311
+ "model.language_model.layers.24.linear_attn.out_proj.weight": "model-00016-of-00026.safetensors",
312
+ "model.language_model.layers.24.mlp.experts.down_proj": "model-00016-of-00026.safetensors",
313
+ "model.language_model.layers.24.mlp.experts.gate_up_proj": "model-00016-of-00026.safetensors",
314
+ "model.language_model.layers.24.mlp.gate.weight": "model-00017-of-00026.safetensors",
315
+ "model.language_model.layers.24.mlp.shared_expert.down_proj.weight": "model-00017-of-00026.safetensors",
316
+ "model.language_model.layers.24.mlp.shared_expert.gate_proj.weight": "model-00017-of-00026.safetensors",
317
+ "model.language_model.layers.24.mlp.shared_expert.up_proj.weight": "model-00017-of-00026.safetensors",
318
+ "model.language_model.layers.24.mlp.shared_expert_gate.weight": "model-00017-of-00026.safetensors",
319
+ "model.language_model.layers.24.post_attention_layernorm.weight": "model-00017-of-00026.safetensors",
320
+ "model.language_model.layers.25.input_layernorm.weight": "model-00017-of-00026.safetensors",
321
+ "model.language_model.layers.25.linear_attn.A_log": "model-00017-of-00026.safetensors",
322
+ "model.language_model.layers.25.linear_attn.conv1d.weight": "model-00017-of-00026.safetensors",
323
+ "model.language_model.layers.25.linear_attn.dt_bias": "model-00017-of-00026.safetensors",
324
+ "model.language_model.layers.25.linear_attn.in_proj_a.weight": "model-00017-of-00026.safetensors",
325
+ "model.language_model.layers.25.linear_attn.in_proj_b.weight": "model-00017-of-00026.safetensors",
326
+ "model.language_model.layers.25.linear_attn.in_proj_qkv.weight": "model-00016-of-00026.safetensors",
327
+ "model.language_model.layers.25.linear_attn.in_proj_z.weight": "model-00016-of-00026.safetensors",
328
+ "model.language_model.layers.25.linear_attn.norm.weight": "model-00017-of-00026.safetensors",
329
+ "model.language_model.layers.25.linear_attn.out_proj.weight": "model-00016-of-00026.safetensors",
330
+ "model.language_model.layers.25.mlp.experts.down_proj": "model-00016-of-00026.safetensors",
331
+ "model.language_model.layers.25.mlp.experts.gate_up_proj": "model-00017-of-00026.safetensors",
332
+ "model.language_model.layers.25.mlp.gate.weight": "model-00017-of-00026.safetensors",
333
+ "model.language_model.layers.25.mlp.shared_expert.down_proj.weight": "model-00017-of-00026.safetensors",
334
+ "model.language_model.layers.25.mlp.shared_expert.gate_proj.weight": "model-00017-of-00026.safetensors",
335
+ "model.language_model.layers.25.mlp.shared_expert.up_proj.weight": "model-00017-of-00026.safetensors",
336
+ "model.language_model.layers.25.mlp.shared_expert_gate.weight": "model-00017-of-00026.safetensors",
337
+ "model.language_model.layers.25.post_attention_layernorm.weight": "model-00017-of-00026.safetensors",
338
+ "model.language_model.layers.26.input_layernorm.weight": "model-00019-of-00026.safetensors",
339
+ "model.language_model.layers.26.linear_attn.A_log": "model-00019-of-00026.safetensors",
340
+ "model.language_model.layers.26.linear_attn.conv1d.weight": "model-00019-of-00026.safetensors",
341
+ "model.language_model.layers.26.linear_attn.dt_bias": "model-00019-of-00026.safetensors",
342
+ "model.language_model.layers.26.linear_attn.in_proj_a.weight": "model-00019-of-00026.safetensors",
343
+ "model.language_model.layers.26.linear_attn.in_proj_b.weight": "model-00019-of-00026.safetensors",
344
+ "model.language_model.layers.26.linear_attn.in_proj_qkv.weight": "model-00018-of-00026.safetensors",
345
+ "model.language_model.layers.26.linear_attn.in_proj_z.weight": "model-00018-of-00026.safetensors",
346
+ "model.language_model.layers.26.linear_attn.norm.weight": "model-00019-of-00026.safetensors",
347
+ "model.language_model.layers.26.linear_attn.out_proj.weight": "model-00018-of-00026.safetensors",
348
+ "model.language_model.layers.26.mlp.experts.down_proj": "model-00019-of-00026.safetensors",
349
+ "model.language_model.layers.26.mlp.experts.gate_up_proj": "model-00018-of-00026.safetensors",
350
+ "model.language_model.layers.26.mlp.gate.weight": "model-00019-of-00026.safetensors",
351
+ "model.language_model.layers.26.mlp.shared_expert.down_proj.weight": "model-00019-of-00026.safetensors",
352
+ "model.language_model.layers.26.mlp.shared_expert.gate_proj.weight": "model-00019-of-00026.safetensors",
353
+ "model.language_model.layers.26.mlp.shared_expert.up_proj.weight": "model-00019-of-00026.safetensors",
354
+ "model.language_model.layers.26.mlp.shared_expert_gate.weight": "model-00019-of-00026.safetensors",
355
+ "model.language_model.layers.26.post_attention_layernorm.weight": "model-00019-of-00026.safetensors",
356
+ "model.language_model.layers.27.input_layernorm.weight": "model-00019-of-00026.safetensors",
357
+ "model.language_model.layers.27.mlp.experts.down_proj": "model-00019-of-00026.safetensors",
358
+ "model.language_model.layers.27.mlp.experts.gate_up_proj": "model-00018-of-00026.safetensors",
359
+ "model.language_model.layers.27.mlp.gate.weight": "model-00019-of-00026.safetensors",
360
+ "model.language_model.layers.27.mlp.shared_expert.down_proj.weight": "model-00019-of-00026.safetensors",
361
+ "model.language_model.layers.27.mlp.shared_expert.gate_proj.weight": "model-00019-of-00026.safetensors",
362
+ "model.language_model.layers.27.mlp.shared_expert.up_proj.weight": "model-00019-of-00026.safetensors",
363
+ "model.language_model.layers.27.mlp.shared_expert_gate.weight": "model-00019-of-00026.safetensors",
364
+ "model.language_model.layers.27.post_attention_layernorm.weight": "model-00019-of-00026.safetensors",
365
+ "model.language_model.layers.27.self_attn.k_norm.weight": "model-00019-of-00026.safetensors",
366
+ "model.language_model.layers.27.self_attn.k_proj.weight": "model-00018-of-00026.safetensors",
367
+ "model.language_model.layers.27.self_attn.o_proj.weight": "model-00018-of-00026.safetensors",
368
+ "model.language_model.layers.27.self_attn.q_norm.weight": "model-00019-of-00026.safetensors",
369
+ "model.language_model.layers.27.self_attn.q_proj.weight": "model-00018-of-00026.safetensors",
370
+ "model.language_model.layers.27.self_attn.v_proj.weight": "model-00018-of-00026.safetensors",
371
+ "model.language_model.layers.28.input_layernorm.weight": "model-00019-of-00026.safetensors",
372
+ "model.language_model.layers.28.linear_attn.A_log": "model-00019-of-00026.safetensors",
373
+ "model.language_model.layers.28.linear_attn.conv1d.weight": "model-00019-of-00026.safetensors",
374
+ "model.language_model.layers.28.linear_attn.dt_bias": "model-00019-of-00026.safetensors",
375
+ "model.language_model.layers.28.linear_attn.in_proj_a.weight": "model-00019-of-00026.safetensors",
376
+ "model.language_model.layers.28.linear_attn.in_proj_b.weight": "model-00019-of-00026.safetensors",
377
+ "model.language_model.layers.28.linear_attn.in_proj_qkv.weight": "model-00018-of-00026.safetensors",
378
+ "model.language_model.layers.28.linear_attn.in_proj_z.weight": "model-00018-of-00026.safetensors",
379
+ "model.language_model.layers.28.linear_attn.norm.weight": "model-00019-of-00026.safetensors",
380
+ "model.language_model.layers.28.linear_attn.out_proj.weight": "model-00018-of-00026.safetensors",
381
+ "model.language_model.layers.28.mlp.experts.down_proj": "model-00018-of-00026.safetensors",
382
+ "model.language_model.layers.28.mlp.experts.gate_up_proj": "model-00018-of-00026.safetensors",
383
+ "model.language_model.layers.28.mlp.gate.weight": "model-00019-of-00026.safetensors",
384
+ "model.language_model.layers.28.mlp.shared_expert.down_proj.weight": "model-00019-of-00026.safetensors",
385
+ "model.language_model.layers.28.mlp.shared_expert.gate_proj.weight": "model-00019-of-00026.safetensors",
386
+ "model.language_model.layers.28.mlp.shared_expert.up_proj.weight": "model-00019-of-00026.safetensors",
387
+ "model.language_model.layers.28.mlp.shared_expert_gate.weight": "model-00019-of-00026.safetensors",
388
+ "model.language_model.layers.28.post_attention_layernorm.weight": "model-00019-of-00026.safetensors",
389
+ "model.language_model.layers.29.input_layernorm.weight": "model-00021-of-00026.safetensors",
390
+ "model.language_model.layers.29.linear_attn.A_log": "model-00021-of-00026.safetensors",
391
+ "model.language_model.layers.29.linear_attn.conv1d.weight": "model-00021-of-00026.safetensors",
392
+ "model.language_model.layers.29.linear_attn.dt_bias": "model-00021-of-00026.safetensors",
393
+ "model.language_model.layers.29.linear_attn.in_proj_a.weight": "model-00021-of-00026.safetensors",
394
+ "model.language_model.layers.29.linear_attn.in_proj_b.weight": "model-00021-of-00026.safetensors",
395
+ "model.language_model.layers.29.linear_attn.in_proj_qkv.weight": "model-00020-of-00026.safetensors",
396
+ "model.language_model.layers.29.linear_attn.in_proj_z.weight": "model-00020-of-00026.safetensors",
397
+ "model.language_model.layers.29.linear_attn.norm.weight": "model-00021-of-00026.safetensors",
398
+ "model.language_model.layers.29.linear_attn.out_proj.weight": "model-00020-of-00026.safetensors",
399
+ "model.language_model.layers.29.mlp.experts.down_proj": "model-00020-of-00026.safetensors",
400
+ "model.language_model.layers.29.mlp.experts.gate_up_proj": "model-00021-of-00026.safetensors",
401
+ "model.language_model.layers.29.mlp.gate.weight": "model-00021-of-00026.safetensors",
402
+ "model.language_model.layers.29.mlp.shared_expert.down_proj.weight": "model-00021-of-00026.safetensors",
403
+ "model.language_model.layers.29.mlp.shared_expert.gate_proj.weight": "model-00021-of-00026.safetensors",
404
+ "model.language_model.layers.29.mlp.shared_expert.up_proj.weight": "model-00021-of-00026.safetensors",
405
+ "model.language_model.layers.29.mlp.shared_expert_gate.weight": "model-00021-of-00026.safetensors",
406
+ "model.language_model.layers.29.post_attention_layernorm.weight": "model-00021-of-00026.safetensors",
407
+ "model.language_model.layers.3.input_layernorm.weight": "model-00003-of-00026.safetensors",
408
+ "model.language_model.layers.3.mlp.experts.down_proj": "model-00003-of-00026.safetensors",
409
+ "model.language_model.layers.3.mlp.experts.gate_up_proj": "model-00003-of-00026.safetensors",
410
+ "model.language_model.layers.3.mlp.gate.weight": "model-00003-of-00026.safetensors",
411
+ "model.language_model.layers.3.mlp.shared_expert.down_proj.weight": "model-00003-of-00026.safetensors",
412
+ "model.language_model.layers.3.mlp.shared_expert.gate_proj.weight": "model-00003-of-00026.safetensors",
413
+ "model.language_model.layers.3.mlp.shared_expert.up_proj.weight": "model-00003-of-00026.safetensors",
414
+ "model.language_model.layers.3.mlp.shared_expert_gate.weight": "model-00003-of-00026.safetensors",
415
+ "model.language_model.layers.3.post_attention_layernorm.weight": "model-00003-of-00026.safetensors",
416
+ "model.language_model.layers.3.self_attn.k_norm.weight": "model-00003-of-00026.safetensors",
417
+ "model.language_model.layers.3.self_attn.k_proj.weight": "model-00003-of-00026.safetensors",
418
+ "model.language_model.layers.3.self_attn.o_proj.weight": "model-00003-of-00026.safetensors",
419
+ "model.language_model.layers.3.self_attn.q_norm.weight": "model-00003-of-00026.safetensors",
420
+ "model.language_model.layers.3.self_attn.q_proj.weight": "model-00003-of-00026.safetensors",
421
+ "model.language_model.layers.3.self_attn.v_proj.weight": "model-00003-of-00026.safetensors",
422
+ "model.language_model.layers.30.input_layernorm.weight": "model-00021-of-00026.safetensors",
423
+ "model.language_model.layers.30.linear_attn.A_log": "model-00021-of-00026.safetensors",
424
+ "model.language_model.layers.30.linear_attn.conv1d.weight": "model-00021-of-00026.safetensors",
425
+ "model.language_model.layers.30.linear_attn.dt_bias": "model-00021-of-00026.safetensors",
426
+ "model.language_model.layers.30.linear_attn.in_proj_a.weight": "model-00021-of-00026.safetensors",
427
+ "model.language_model.layers.30.linear_attn.in_proj_b.weight": "model-00021-of-00026.safetensors",
428
+ "model.language_model.layers.30.linear_attn.in_proj_qkv.weight": "model-00020-of-00026.safetensors",
429
+ "model.language_model.layers.30.linear_attn.in_proj_z.weight": "model-00020-of-00026.safetensors",
430
+ "model.language_model.layers.30.linear_attn.norm.weight": "model-00021-of-00026.safetensors",
431
+ "model.language_model.layers.30.linear_attn.out_proj.weight": "model-00020-of-00026.safetensors",
432
+ "model.language_model.layers.30.mlp.experts.down_proj": "model-00021-of-00026.safetensors",
433
+ "model.language_model.layers.30.mlp.experts.gate_up_proj": "model-00020-of-00026.safetensors",
434
+ "model.language_model.layers.30.mlp.gate.weight": "model-00021-of-00026.safetensors",
435
+ "model.language_model.layers.30.mlp.shared_expert.down_proj.weight": "model-00021-of-00026.safetensors",
436
+ "model.language_model.layers.30.mlp.shared_expert.gate_proj.weight": "model-00021-of-00026.safetensors",
437
+ "model.language_model.layers.30.mlp.shared_expert.up_proj.weight": "model-00021-of-00026.safetensors",
438
+ "model.language_model.layers.30.mlp.shared_expert_gate.weight": "model-00021-of-00026.safetensors",
439
+ "model.language_model.layers.30.post_attention_layernorm.weight": "model-00021-of-00026.safetensors",
440
+ "model.language_model.layers.31.input_layernorm.weight": "model-00021-of-00026.safetensors",
441
+ "model.language_model.layers.31.mlp.experts.down_proj": "model-00020-of-00026.safetensors",
442
+ "model.language_model.layers.31.mlp.experts.gate_up_proj": "model-00020-of-00026.safetensors",
443
+ "model.language_model.layers.31.mlp.gate.weight": "model-00021-of-00026.safetensors",
444
+ "model.language_model.layers.31.mlp.shared_expert.down_proj.weight": "model-00021-of-00026.safetensors",
445
+ "model.language_model.layers.31.mlp.shared_expert.gate_proj.weight": "model-00021-of-00026.safetensors",
446
+ "model.language_model.layers.31.mlp.shared_expert.up_proj.weight": "model-00021-of-00026.safetensors",
447
+ "model.language_model.layers.31.mlp.shared_expert_gate.weight": "model-00021-of-00026.safetensors",
448
+ "model.language_model.layers.31.post_attention_layernorm.weight": "model-00021-of-00026.safetensors",
449
+ "model.language_model.layers.31.self_attn.k_norm.weight": "model-00021-of-00026.safetensors",
450
+ "model.language_model.layers.31.self_attn.k_proj.weight": "model-00020-of-00026.safetensors",
451
+ "model.language_model.layers.31.self_attn.o_proj.weight": "model-00020-of-00026.safetensors",
452
+ "model.language_model.layers.31.self_attn.q_norm.weight": "model-00021-of-00026.safetensors",
453
+ "model.language_model.layers.31.self_attn.q_proj.weight": "model-00020-of-00026.safetensors",
454
+ "model.language_model.layers.31.self_attn.v_proj.weight": "model-00020-of-00026.safetensors",
455
+ "model.language_model.layers.32.input_layernorm.weight": "model-00022-of-00026.safetensors",
456
+ "model.language_model.layers.32.linear_attn.A_log": "model-00022-of-00026.safetensors",
457
+ "model.language_model.layers.32.linear_attn.conv1d.weight": "model-00022-of-00026.safetensors",
458
+ "model.language_model.layers.32.linear_attn.dt_bias": "model-00022-of-00026.safetensors",
459
+ "model.language_model.layers.32.linear_attn.in_proj_a.weight": "model-00022-of-00026.safetensors",
460
+ "model.language_model.layers.32.linear_attn.in_proj_b.weight": "model-00022-of-00026.safetensors",
461
+ "model.language_model.layers.32.linear_attn.in_proj_qkv.weight": "model-00022-of-00026.safetensors",
462
+ "model.language_model.layers.32.linear_attn.in_proj_z.weight": "model-00022-of-00026.safetensors",
463
+ "model.language_model.layers.32.linear_attn.norm.weight": "model-00022-of-00026.safetensors",
464
+ "model.language_model.layers.32.linear_attn.out_proj.weight": "model-00022-of-00026.safetensors",
465
+ "model.language_model.layers.32.mlp.experts.down_proj": "model-00022-of-00026.safetensors",
466
+ "model.language_model.layers.32.mlp.experts.gate_up_proj": "model-00022-of-00026.safetensors",
467
+ "model.language_model.layers.32.mlp.gate.weight": "model-00022-of-00026.safetensors",
468
+ "model.language_model.layers.32.mlp.shared_expert.down_proj.weight": "model-00022-of-00026.safetensors",
469
+ "model.language_model.layers.32.mlp.shared_expert.gate_proj.weight": "model-00022-of-00026.safetensors",
470
+ "model.language_model.layers.32.mlp.shared_expert.up_proj.weight": "model-00022-of-00026.safetensors",
471
+ "model.language_model.layers.32.mlp.shared_expert_gate.weight": "model-00022-of-00026.safetensors",
472
+ "model.language_model.layers.32.post_attention_layernorm.weight": "model-00022-of-00026.safetensors",
473
+ "model.language_model.layers.33.input_layernorm.weight": "model-00022-of-00026.safetensors",
474
+ "model.language_model.layers.33.linear_attn.A_log": "model-00022-of-00026.safetensors",
475
+ "model.language_model.layers.33.linear_attn.conv1d.weight": "model-00022-of-00026.safetensors",
476
+ "model.language_model.layers.33.linear_attn.dt_bias": "model-00022-of-00026.safetensors",
477
+ "model.language_model.layers.33.linear_attn.in_proj_a.weight": "model-00022-of-00026.safetensors",
478
+ "model.language_model.layers.33.linear_attn.in_proj_b.weight": "model-00022-of-00026.safetensors",
479
+ "model.language_model.layers.33.linear_attn.in_proj_qkv.weight": "model-00022-of-00026.safetensors",
480
+ "model.language_model.layers.33.linear_attn.in_proj_z.weight": "model-00022-of-00026.safetensors",
481
+ "model.language_model.layers.33.linear_attn.norm.weight": "model-00022-of-00026.safetensors",
482
+ "model.language_model.layers.33.linear_attn.out_proj.weight": "model-00022-of-00026.safetensors",
483
+ "model.language_model.layers.33.mlp.experts.down_proj": "model-00022-of-00026.safetensors",
484
+ "model.language_model.layers.33.mlp.experts.gate_up_proj": "model-00022-of-00026.safetensors",
485
+ "model.language_model.layers.33.mlp.gate.weight": "model-00022-of-00026.safetensors",
486
+ "model.language_model.layers.33.mlp.shared_expert.down_proj.weight": "model-00022-of-00026.safetensors",
487
+ "model.language_model.layers.33.mlp.shared_expert.gate_proj.weight": "model-00022-of-00026.safetensors",
488
+ "model.language_model.layers.33.mlp.shared_expert.up_proj.weight": "model-00022-of-00026.safetensors",
489
+ "model.language_model.layers.33.mlp.shared_expert_gate.weight": "model-00022-of-00026.safetensors",
490
+ "model.language_model.layers.33.post_attention_layernorm.weight": "model-00022-of-00026.safetensors",
491
+ "model.language_model.layers.34.input_layernorm.weight": "model-00023-of-00026.safetensors",
492
+ "model.language_model.layers.34.linear_attn.A_log": "model-00023-of-00026.safetensors",
493
+ "model.language_model.layers.34.linear_attn.conv1d.weight": "model-00023-of-00026.safetensors",
494
+ "model.language_model.layers.34.linear_attn.dt_bias": "model-00023-of-00026.safetensors",
495
+ "model.language_model.layers.34.linear_attn.in_proj_a.weight": "model-00023-of-00026.safetensors",
496
+ "model.language_model.layers.34.linear_attn.in_proj_b.weight": "model-00023-of-00026.safetensors",
497
+ "model.language_model.layers.34.linear_attn.in_proj_qkv.weight": "model-00023-of-00026.safetensors",
498
+ "model.language_model.layers.34.linear_attn.in_proj_z.weight": "model-00023-of-00026.safetensors",
499
+ "model.language_model.layers.34.linear_attn.norm.weight": "model-00023-of-00026.safetensors",
500
+ "model.language_model.layers.34.linear_attn.out_proj.weight": "model-00023-of-00026.safetensors",
501
+ "model.language_model.layers.34.mlp.experts.down_proj": "model-00023-of-00026.safetensors",
502
+ "model.language_model.layers.34.mlp.experts.gate_up_proj": "model-00023-of-00026.safetensors",
503
+ "model.language_model.layers.34.mlp.gate.weight": "model-00023-of-00026.safetensors",
504
+ "model.language_model.layers.34.mlp.shared_expert.down_proj.weight": "model-00023-of-00026.safetensors",
505
+ "model.language_model.layers.34.mlp.shared_expert.gate_proj.weight": "model-00023-of-00026.safetensors",
506
+ "model.language_model.layers.34.mlp.shared_expert.up_proj.weight": "model-00023-of-00026.safetensors",
507
+ "model.language_model.layers.34.mlp.shared_expert_gate.weight": "model-00023-of-00026.safetensors",
508
+ "model.language_model.layers.34.post_attention_layernorm.weight": "model-00023-of-00026.safetensors",
509
+ "model.language_model.layers.35.input_layernorm.weight": "model-00023-of-00026.safetensors",
510
+ "model.language_model.layers.35.mlp.experts.down_proj": "model-00023-of-00026.safetensors",
511
+ "model.language_model.layers.35.mlp.experts.gate_up_proj": "model-00023-of-00026.safetensors",
512
+ "model.language_model.layers.35.mlp.gate.weight": "model-00023-of-00026.safetensors",
513
+ "model.language_model.layers.35.mlp.shared_expert.down_proj.weight": "model-00023-of-00026.safetensors",
514
+ "model.language_model.layers.35.mlp.shared_expert.gate_proj.weight": "model-00023-of-00026.safetensors",
515
+ "model.language_model.layers.35.mlp.shared_expert.up_proj.weight": "model-00023-of-00026.safetensors",
516
+ "model.language_model.layers.35.mlp.shared_expert_gate.weight": "model-00023-of-00026.safetensors",
517
+ "model.language_model.layers.35.post_attention_layernorm.weight": "model-00023-of-00026.safetensors",
518
+ "model.language_model.layers.35.self_attn.k_norm.weight": "model-00023-of-00026.safetensors",
519
+ "model.language_model.layers.35.self_attn.k_proj.weight": "model-00023-of-00026.safetensors",
520
+ "model.language_model.layers.35.self_attn.o_proj.weight": "model-00023-of-00026.safetensors",
521
+ "model.language_model.layers.35.self_attn.q_norm.weight": "model-00023-of-00026.safetensors",
522
+ "model.language_model.layers.35.self_attn.q_proj.weight": "model-00023-of-00026.safetensors",
523
+ "model.language_model.layers.35.self_attn.v_proj.weight": "model-00023-of-00026.safetensors",
524
+ "model.language_model.layers.36.input_layernorm.weight": "model-00024-of-00026.safetensors",
525
+ "model.language_model.layers.36.linear_attn.A_log": "model-00024-of-00026.safetensors",
526
+ "model.language_model.layers.36.linear_attn.conv1d.weight": "model-00024-of-00026.safetensors",
527
+ "model.language_model.layers.36.linear_attn.dt_bias": "model-00024-of-00026.safetensors",
528
+ "model.language_model.layers.36.linear_attn.in_proj_a.weight": "model-00024-of-00026.safetensors",
529
+ "model.language_model.layers.36.linear_attn.in_proj_b.weight": "model-00024-of-00026.safetensors",
530
+ "model.language_model.layers.36.linear_attn.in_proj_qkv.weight": "model-00024-of-00026.safetensors",
531
+ "model.language_model.layers.36.linear_attn.in_proj_z.weight": "model-00024-of-00026.safetensors",
532
+ "model.language_model.layers.36.linear_attn.norm.weight": "model-00024-of-00026.safetensors",
533
+ "model.language_model.layers.36.linear_attn.out_proj.weight": "model-00024-of-00026.safetensors",
534
+ "model.language_model.layers.36.mlp.experts.down_proj": "model-00024-of-00026.safetensors",
535
+ "model.language_model.layers.36.mlp.experts.gate_up_proj": "model-00024-of-00026.safetensors",
536
+ "model.language_model.layers.36.mlp.gate.weight": "model-00024-of-00026.safetensors",
537
+ "model.language_model.layers.36.mlp.shared_expert.down_proj.weight": "model-00024-of-00026.safetensors",
538
+ "model.language_model.layers.36.mlp.shared_expert.gate_proj.weight": "model-00024-of-00026.safetensors",
539
+ "model.language_model.layers.36.mlp.shared_expert.up_proj.weight": "model-00024-of-00026.safetensors",
540
+ "model.language_model.layers.36.mlp.shared_expert_gate.weight": "model-00024-of-00026.safetensors",
541
+ "model.language_model.layers.36.post_attention_layernorm.weight": "model-00024-of-00026.safetensors",
542
+ "model.language_model.layers.37.input_layernorm.weight": "model-00024-of-00026.safetensors",
543
+ "model.language_model.layers.37.linear_attn.A_log": "model-00024-of-00026.safetensors",
544
+ "model.language_model.layers.37.linear_attn.conv1d.weight": "model-00024-of-00026.safetensors",
545
+ "model.language_model.layers.37.linear_attn.dt_bias": "model-00024-of-00026.safetensors",
546
+ "model.language_model.layers.37.linear_attn.in_proj_a.weight": "model-00024-of-00026.safetensors",
547
+ "model.language_model.layers.37.linear_attn.in_proj_b.weight": "model-00024-of-00026.safetensors",
548
+ "model.language_model.layers.37.linear_attn.in_proj_qkv.weight": "model-00024-of-00026.safetensors",
549
+ "model.language_model.layers.37.linear_attn.in_proj_z.weight": "model-00024-of-00026.safetensors",
550
+ "model.language_model.layers.37.linear_attn.norm.weight": "model-00024-of-00026.safetensors",
551
+ "model.language_model.layers.37.linear_attn.out_proj.weight": "model-00024-of-00026.safetensors",
552
+ "model.language_model.layers.37.mlp.experts.down_proj": "model-00024-of-00026.safetensors",
553
+ "model.language_model.layers.37.mlp.experts.gate_up_proj": "model-00024-of-00026.safetensors",
554
+ "model.language_model.layers.37.mlp.gate.weight": "model-00024-of-00026.safetensors",
555
+ "model.language_model.layers.37.mlp.shared_expert.down_proj.weight": "model-00024-of-00026.safetensors",
556
+ "model.language_model.layers.37.mlp.shared_expert.gate_proj.weight": "model-00024-of-00026.safetensors",
557
+ "model.language_model.layers.37.mlp.shared_expert.up_proj.weight": "model-00024-of-00026.safetensors",
558
+ "model.language_model.layers.37.mlp.shared_expert_gate.weight": "model-00024-of-00026.safetensors",
559
+ "model.language_model.layers.37.post_attention_layernorm.weight": "model-00024-of-00026.safetensors",
560
+ "model.language_model.layers.38.input_layernorm.weight": "model-00025-of-00026.safetensors",
561
+ "model.language_model.layers.38.linear_attn.A_log": "model-00025-of-00026.safetensors",
562
+ "model.language_model.layers.38.linear_attn.conv1d.weight": "model-00025-of-00026.safetensors",
563
+ "model.language_model.layers.38.linear_attn.dt_bias": "model-00025-of-00026.safetensors",
564
+ "model.language_model.layers.38.linear_attn.in_proj_a.weight": "model-00025-of-00026.safetensors",
565
+ "model.language_model.layers.38.linear_attn.in_proj_b.weight": "model-00025-of-00026.safetensors",
566
+ "model.language_model.layers.38.linear_attn.in_proj_qkv.weight": "model-00025-of-00026.safetensors",
567
+ "model.language_model.layers.38.linear_attn.in_proj_z.weight": "model-00025-of-00026.safetensors",
568
+ "model.language_model.layers.38.linear_attn.norm.weight": "model-00025-of-00026.safetensors",
569
+ "model.language_model.layers.38.linear_attn.out_proj.weight": "model-00025-of-00026.safetensors",
570
+ "model.language_model.layers.38.mlp.experts.down_proj": "model-00025-of-00026.safetensors",
571
+ "model.language_model.layers.38.mlp.experts.gate_up_proj": "model-00025-of-00026.safetensors",
572
+ "model.language_model.layers.38.mlp.gate.weight": "model-00025-of-00026.safetensors",
573
+ "model.language_model.layers.38.mlp.shared_expert.down_proj.weight": "model-00025-of-00026.safetensors",
574
+ "model.language_model.layers.38.mlp.shared_expert.gate_proj.weight": "model-00025-of-00026.safetensors",
575
+ "model.language_model.layers.38.mlp.shared_expert.up_proj.weight": "model-00025-of-00026.safetensors",
576
+ "model.language_model.layers.38.mlp.shared_expert_gate.weight": "model-00025-of-00026.safetensors",
577
+ "model.language_model.layers.38.post_attention_layernorm.weight": "model-00025-of-00026.safetensors",
578
+ "model.language_model.layers.39.input_layernorm.weight": "model-00026-of-00026.safetensors",
579
+ "model.language_model.layers.39.mlp.experts.down_proj": "model-00026-of-00026.safetensors",
580
+ "model.language_model.layers.39.mlp.experts.gate_up_proj": "model-00025-of-00026.safetensors",
581
+ "model.language_model.layers.39.mlp.gate.weight": "model-00026-of-00026.safetensors",
582
+ "model.language_model.layers.39.mlp.shared_expert.down_proj.weight": "model-00026-of-00026.safetensors",
583
+ "model.language_model.layers.39.mlp.shared_expert.gate_proj.weight": "model-00026-of-00026.safetensors",
584
+ "model.language_model.layers.39.mlp.shared_expert.up_proj.weight": "model-00026-of-00026.safetensors",
585
+ "model.language_model.layers.39.mlp.shared_expert_gate.weight": "model-00026-of-00026.safetensors",
586
+ "model.language_model.layers.39.post_attention_layernorm.weight": "model-00026-of-00026.safetensors",
587
+ "model.language_model.layers.39.self_attn.k_norm.weight": "model-00026-of-00026.safetensors",
588
+ "model.language_model.layers.39.self_attn.k_proj.weight": "model-00026-of-00026.safetensors",
589
+ "model.language_model.layers.39.self_attn.o_proj.weight": "model-00026-of-00026.safetensors",
590
+ "model.language_model.layers.39.self_attn.q_norm.weight": "model-00026-of-00026.safetensors",
591
+ "model.language_model.layers.39.self_attn.q_proj.weight": "model-00026-of-00026.safetensors",
592
+ "model.language_model.layers.39.self_attn.v_proj.weight": "model-00026-of-00026.safetensors",
593
+ "model.language_model.layers.4.input_layernorm.weight": "model-00004-of-00026.safetensors",
594
+ "model.language_model.layers.4.linear_attn.A_log": "model-00004-of-00026.safetensors",
595
+ "model.language_model.layers.4.linear_attn.conv1d.weight": "model-00004-of-00026.safetensors",
596
+ "model.language_model.layers.4.linear_attn.dt_bias": "model-00004-of-00026.safetensors",
597
+ "model.language_model.layers.4.linear_attn.in_proj_a.weight": "model-00004-of-00026.safetensors",
598
+ "model.language_model.layers.4.linear_attn.in_proj_b.weight": "model-00004-of-00026.safetensors",
599
+ "model.language_model.layers.4.linear_attn.in_proj_qkv.weight": "model-00004-of-00026.safetensors",
600
+ "model.language_model.layers.4.linear_attn.in_proj_z.weight": "model-00004-of-00026.safetensors",
601
+ "model.language_model.layers.4.linear_attn.norm.weight": "model-00004-of-00026.safetensors",
602
+ "model.language_model.layers.4.linear_attn.out_proj.weight": "model-00004-of-00026.safetensors",
603
+ "model.language_model.layers.4.mlp.experts.down_proj": "model-00004-of-00026.safetensors",
604
+ "model.language_model.layers.4.mlp.experts.gate_up_proj": "model-00004-of-00026.safetensors",
605
+ "model.language_model.layers.4.mlp.gate.weight": "model-00004-of-00026.safetensors",
606
+ "model.language_model.layers.4.mlp.shared_expert.down_proj.weight": "model-00004-of-00026.safetensors",
607
+ "model.language_model.layers.4.mlp.shared_expert.gate_proj.weight": "model-00004-of-00026.safetensors",
608
+ "model.language_model.layers.4.mlp.shared_expert.up_proj.weight": "model-00004-of-00026.safetensors",
609
+ "model.language_model.layers.4.mlp.shared_expert_gate.weight": "model-00004-of-00026.safetensors",
610
+ "model.language_model.layers.4.post_attention_layernorm.weight": "model-00004-of-00026.safetensors",
611
+ "model.language_model.layers.5.input_layernorm.weight": "model-00004-of-00026.safetensors",
612
+ "model.language_model.layers.5.linear_attn.A_log": "model-00004-of-00026.safetensors",
613
+ "model.language_model.layers.5.linear_attn.conv1d.weight": "model-00004-of-00026.safetensors",
614
+ "model.language_model.layers.5.linear_attn.dt_bias": "model-00004-of-00026.safetensors",
615
+ "model.language_model.layers.5.linear_attn.in_proj_a.weight": "model-00004-of-00026.safetensors",
616
+ "model.language_model.layers.5.linear_attn.in_proj_b.weight": "model-00004-of-00026.safetensors",
617
+ "model.language_model.layers.5.linear_attn.in_proj_qkv.weight": "model-00004-of-00026.safetensors",
618
+ "model.language_model.layers.5.linear_attn.in_proj_z.weight": "model-00004-of-00026.safetensors",
619
+ "model.language_model.layers.5.linear_attn.norm.weight": "model-00004-of-00026.safetensors",
620
+ "model.language_model.layers.5.linear_attn.out_proj.weight": "model-00004-of-00026.safetensors",
621
+ "model.language_model.layers.5.mlp.experts.down_proj": "model-00004-of-00026.safetensors",
622
+ "model.language_model.layers.5.mlp.experts.gate_up_proj": "model-00004-of-00026.safetensors",
623
+ "model.language_model.layers.5.mlp.gate.weight": "model-00004-of-00026.safetensors",
624
+ "model.language_model.layers.5.mlp.shared_expert.down_proj.weight": "model-00004-of-00026.safetensors",
625
+ "model.language_model.layers.5.mlp.shared_expert.gate_proj.weight": "model-00004-of-00026.safetensors",
626
+ "model.language_model.layers.5.mlp.shared_expert.up_proj.weight": "model-00004-of-00026.safetensors",
627
+ "model.language_model.layers.5.mlp.shared_expert_gate.weight": "model-00004-of-00026.safetensors",
628
+ "model.language_model.layers.5.post_attention_layernorm.weight": "model-00004-of-00026.safetensors",
629
+ "model.language_model.layers.6.input_layernorm.weight": "model-00005-of-00026.safetensors",
630
+ "model.language_model.layers.6.linear_attn.A_log": "model-00005-of-00026.safetensors",
631
+ "model.language_model.layers.6.linear_attn.conv1d.weight": "model-00005-of-00026.safetensors",
632
+ "model.language_model.layers.6.linear_attn.dt_bias": "model-00005-of-00026.safetensors",
633
+ "model.language_model.layers.6.linear_attn.in_proj_a.weight": "model-00005-of-00026.safetensors",
634
+ "model.language_model.layers.6.linear_attn.in_proj_b.weight": "model-00005-of-00026.safetensors",
635
+ "model.language_model.layers.6.linear_attn.in_proj_qkv.weight": "model-00005-of-00026.safetensors",
636
+ "model.language_model.layers.6.linear_attn.in_proj_z.weight": "model-00005-of-00026.safetensors",
637
+ "model.language_model.layers.6.linear_attn.norm.weight": "model-00005-of-00026.safetensors",
638
+ "model.language_model.layers.6.linear_attn.out_proj.weight": "model-00005-of-00026.safetensors",
639
+ "model.language_model.layers.6.mlp.experts.down_proj": "model-00005-of-00026.safetensors",
640
+ "model.language_model.layers.6.mlp.experts.gate_up_proj": "model-00005-of-00026.safetensors",
641
+ "model.language_model.layers.6.mlp.gate.weight": "model-00005-of-00026.safetensors",
642
+ "model.language_model.layers.6.mlp.shared_expert.down_proj.weight": "model-00005-of-00026.safetensors",
643
+ "model.language_model.layers.6.mlp.shared_expert.gate_proj.weight": "model-00005-of-00026.safetensors",
644
+ "model.language_model.layers.6.mlp.shared_expert.up_proj.weight": "model-00005-of-00026.safetensors",
645
+ "model.language_model.layers.6.mlp.shared_expert_gate.weight": "model-00005-of-00026.safetensors",
646
+ "model.language_model.layers.6.post_attention_layernorm.weight": "model-00005-of-00026.safetensors",
647
+ "model.language_model.layers.7.input_layernorm.weight": "model-00005-of-00026.safetensors",
648
+ "model.language_model.layers.7.mlp.experts.down_proj": "model-00005-of-00026.safetensors",
649
+ "model.language_model.layers.7.mlp.experts.gate_up_proj": "model-00005-of-00026.safetensors",
650
+ "model.language_model.layers.7.mlp.gate.weight": "model-00005-of-00026.safetensors",
651
+ "model.language_model.layers.7.mlp.shared_expert.down_proj.weight": "model-00005-of-00026.safetensors",
652
+ "model.language_model.layers.7.mlp.shared_expert.gate_proj.weight": "model-00005-of-00026.safetensors",
653
+ "model.language_model.layers.7.mlp.shared_expert.up_proj.weight": "model-00005-of-00026.safetensors",
654
+ "model.language_model.layers.7.mlp.shared_expert_gate.weight": "model-00005-of-00026.safetensors",
655
+ "model.language_model.layers.7.post_attention_layernorm.weight": "model-00005-of-00026.safetensors",
656
+ "model.language_model.layers.7.self_attn.k_norm.weight": "model-00005-of-00026.safetensors",
657
+ "model.language_model.layers.7.self_attn.k_proj.weight": "model-00005-of-00026.safetensors",
658
+ "model.language_model.layers.7.self_attn.o_proj.weight": "model-00005-of-00026.safetensors",
659
+ "model.language_model.layers.7.self_attn.q_norm.weight": "model-00005-of-00026.safetensors",
660
+ "model.language_model.layers.7.self_attn.q_proj.weight": "model-00005-of-00026.safetensors",
661
+ "model.language_model.layers.7.self_attn.v_proj.weight": "model-00005-of-00026.safetensors",
662
+ "model.language_model.layers.8.input_layernorm.weight": "model-00007-of-00026.safetensors",
663
+ "model.language_model.layers.8.linear_attn.A_log": "model-00007-of-00026.safetensors",
664
+ "model.language_model.layers.8.linear_attn.conv1d.weight": "model-00007-of-00026.safetensors",
665
+ "model.language_model.layers.8.linear_attn.dt_bias": "model-00007-of-00026.safetensors",
666
+ "model.language_model.layers.8.linear_attn.in_proj_a.weight": "model-00007-of-00026.safetensors",
667
+ "model.language_model.layers.8.linear_attn.in_proj_b.weight": "model-00007-of-00026.safetensors",
668
+ "model.language_model.layers.8.linear_attn.in_proj_qkv.weight": "model-00006-of-00026.safetensors",
669
+ "model.language_model.layers.8.linear_attn.in_proj_z.weight": "model-00006-of-00026.safetensors",
670
+ "model.language_model.layers.8.linear_attn.norm.weight": "model-00007-of-00026.safetensors",
671
+ "model.language_model.layers.8.linear_attn.out_proj.weight": "model-00006-of-00026.safetensors",
672
+ "model.language_model.layers.8.mlp.experts.down_proj": "model-00006-of-00026.safetensors",
673
+ "model.language_model.layers.8.mlp.experts.gate_up_proj": "model-00006-of-00026.safetensors",
674
+ "model.language_model.layers.8.mlp.gate.weight": "model-00007-of-00026.safetensors",
675
+ "model.language_model.layers.8.mlp.shared_expert.down_proj.weight": "model-00007-of-00026.safetensors",
676
+ "model.language_model.layers.8.mlp.shared_expert.gate_proj.weight": "model-00007-of-00026.safetensors",
677
+ "model.language_model.layers.8.mlp.shared_expert.up_proj.weight": "model-00007-of-00026.safetensors",
678
+ "model.language_model.layers.8.mlp.shared_expert_gate.weight": "model-00007-of-00026.safetensors",
679
+ "model.language_model.layers.8.post_attention_layernorm.weight": "model-00007-of-00026.safetensors",
680
+ "model.language_model.layers.9.input_layernorm.weight": "model-00007-of-00026.safetensors",
681
+ "model.language_model.layers.9.linear_attn.A_log": "model-00007-of-00026.safetensors",
682
+ "model.language_model.layers.9.linear_attn.conv1d.weight": "model-00007-of-00026.safetensors",
683
+ "model.language_model.layers.9.linear_attn.dt_bias": "model-00007-of-00026.safetensors",
684
+ "model.language_model.layers.9.linear_attn.in_proj_a.weight": "model-00007-of-00026.safetensors",
685
+ "model.language_model.layers.9.linear_attn.in_proj_b.weight": "model-00007-of-00026.safetensors",
686
+ "model.language_model.layers.9.linear_attn.in_proj_qkv.weight": "model-00006-of-00026.safetensors",
687
+ "model.language_model.layers.9.linear_attn.in_proj_z.weight": "model-00006-of-00026.safetensors",
688
+ "model.language_model.layers.9.linear_attn.norm.weight": "model-00007-of-00026.safetensors",
689
+ "model.language_model.layers.9.linear_attn.out_proj.weight": "model-00006-of-00026.safetensors",
690
+ "model.language_model.layers.9.mlp.experts.down_proj": "model-00006-of-00026.safetensors",
691
+ "model.language_model.layers.9.mlp.experts.gate_up_proj": "model-00006-of-00026.safetensors",
692
+ "model.language_model.layers.9.mlp.gate.weight": "model-00007-of-00026.safetensors",
693
+ "model.language_model.layers.9.mlp.shared_expert.down_proj.weight": "model-00007-of-00026.safetensors",
694
+ "model.language_model.layers.9.mlp.shared_expert.gate_proj.weight": "model-00007-of-00026.safetensors",
695
+ "model.language_model.layers.9.mlp.shared_expert.up_proj.weight": "model-00007-of-00026.safetensors",
696
+ "model.language_model.layers.9.mlp.shared_expert_gate.weight": "model-00007-of-00026.safetensors",
697
+ "model.language_model.layers.9.post_attention_layernorm.weight": "model-00007-of-00026.safetensors",
698
+ "model.language_model.norm.weight": "model-00026-of-00026.safetensors",
699
+ "model.visual.blocks.0.attn.proj.bias": "model-00002-of-00026.safetensors",
700
+ "model.visual.blocks.0.attn.proj.weight": "model-00002-of-00026.safetensors",
701
+ "model.visual.blocks.0.attn.qkv.bias": "model-00002-of-00026.safetensors",
702
+ "model.visual.blocks.0.attn.qkv.weight": "model-00002-of-00026.safetensors",
703
+ "model.visual.blocks.0.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
704
+ "model.visual.blocks.0.mlp.linear_fc1.weight": "model-00001-of-00026.safetensors",
705
+ "model.visual.blocks.0.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
706
+ "model.visual.blocks.0.mlp.linear_fc2.weight": "model-00001-of-00026.safetensors",
707
+ "model.visual.blocks.0.norm1.bias": "model-00002-of-00026.safetensors",
708
+ "model.visual.blocks.0.norm1.weight": "model-00002-of-00026.safetensors",
709
+ "model.visual.blocks.0.norm2.bias": "model-00002-of-00026.safetensors",
710
+ "model.visual.blocks.0.norm2.weight": "model-00002-of-00026.safetensors",
711
+ "model.visual.blocks.1.attn.proj.bias": "model-00002-of-00026.safetensors",
712
+ "model.visual.blocks.1.attn.proj.weight": "model-00002-of-00026.safetensors",
713
+ "model.visual.blocks.1.attn.qkv.bias": "model-00002-of-00026.safetensors",
714
+ "model.visual.blocks.1.attn.qkv.weight": "model-00002-of-00026.safetensors",
715
+ "model.visual.blocks.1.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
716
+ "model.visual.blocks.1.mlp.linear_fc1.weight": "model-00001-of-00026.safetensors",
717
+ "model.visual.blocks.1.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
718
+ "model.visual.blocks.1.mlp.linear_fc2.weight": "model-00001-of-00026.safetensors",
719
+ "model.visual.blocks.1.norm1.bias": "model-00002-of-00026.safetensors",
720
+ "model.visual.blocks.1.norm1.weight": "model-00002-of-00026.safetensors",
721
+ "model.visual.blocks.1.norm2.bias": "model-00002-of-00026.safetensors",
722
+ "model.visual.blocks.1.norm2.weight": "model-00002-of-00026.safetensors",
723
+ "model.visual.blocks.10.attn.proj.bias": "model-00002-of-00026.safetensors",
724
+ "model.visual.blocks.10.attn.proj.weight": "model-00002-of-00026.safetensors",
725
+ "model.visual.blocks.10.attn.qkv.bias": "model-00002-of-00026.safetensors",
726
+ "model.visual.blocks.10.attn.qkv.weight": "model-00002-of-00026.safetensors",
727
+ "model.visual.blocks.10.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
728
+ "model.visual.blocks.10.mlp.linear_fc1.weight": "model-00001-of-00026.safetensors",
729
+ "model.visual.blocks.10.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
730
+ "model.visual.blocks.10.mlp.linear_fc2.weight": "model-00001-of-00026.safetensors",
731
+ "model.visual.blocks.10.norm1.bias": "model-00002-of-00026.safetensors",
732
+ "model.visual.blocks.10.norm1.weight": "model-00002-of-00026.safetensors",
733
+ "model.visual.blocks.10.norm2.bias": "model-00002-of-00026.safetensors",
734
+ "model.visual.blocks.10.norm2.weight": "model-00002-of-00026.safetensors",
735
+ "model.visual.blocks.11.attn.proj.bias": "model-00002-of-00026.safetensors",
736
+ "model.visual.blocks.11.attn.proj.weight": "model-00002-of-00026.safetensors",
737
+ "model.visual.blocks.11.attn.qkv.bias": "model-00002-of-00026.safetensors",
738
+ "model.visual.blocks.11.attn.qkv.weight": "model-00002-of-00026.safetensors",
739
+ "model.visual.blocks.11.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
740
+ "model.visual.blocks.11.mlp.linear_fc1.weight": "model-00001-of-00026.safetensors",
741
+ "model.visual.blocks.11.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
742
+ "model.visual.blocks.11.mlp.linear_fc2.weight": "model-00001-of-00026.safetensors",
743
+ "model.visual.blocks.11.norm1.bias": "model-00002-of-00026.safetensors",
744
+ "model.visual.blocks.11.norm1.weight": "model-00002-of-00026.safetensors",
745
+ "model.visual.blocks.11.norm2.bias": "model-00002-of-00026.safetensors",
746
+ "model.visual.blocks.11.norm2.weight": "model-00002-of-00026.safetensors",
747
+ "model.visual.blocks.12.attn.proj.bias": "model-00002-of-00026.safetensors",
748
+ "model.visual.blocks.12.attn.proj.weight": "model-00002-of-00026.safetensors",
749
+ "model.visual.blocks.12.attn.qkv.bias": "model-00002-of-00026.safetensors",
750
+ "model.visual.blocks.12.attn.qkv.weight": "model-00002-of-00026.safetensors",
751
+ "model.visual.blocks.12.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
752
+ "model.visual.blocks.12.mlp.linear_fc1.weight": "model-00001-of-00026.safetensors",
753
+ "model.visual.blocks.12.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
754
+ "model.visual.blocks.12.mlp.linear_fc2.weight": "model-00001-of-00026.safetensors",
755
+ "model.visual.blocks.12.norm1.bias": "model-00002-of-00026.safetensors",
756
+ "model.visual.blocks.12.norm1.weight": "model-00002-of-00026.safetensors",
757
+ "model.visual.blocks.12.norm2.bias": "model-00002-of-00026.safetensors",
758
+ "model.visual.blocks.12.norm2.weight": "model-00002-of-00026.safetensors",
759
+ "model.visual.blocks.13.attn.proj.bias": "model-00002-of-00026.safetensors",
760
+ "model.visual.blocks.13.attn.proj.weight": "model-00002-of-00026.safetensors",
761
+ "model.visual.blocks.13.attn.qkv.bias": "model-00002-of-00026.safetensors",
762
+ "model.visual.blocks.13.attn.qkv.weight": "model-00002-of-00026.safetensors",
763
+ "model.visual.blocks.13.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
764
+ "model.visual.blocks.13.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
765
+ "model.visual.blocks.13.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
766
+ "model.visual.blocks.13.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
767
+ "model.visual.blocks.13.norm1.bias": "model-00002-of-00026.safetensors",
768
+ "model.visual.blocks.13.norm1.weight": "model-00002-of-00026.safetensors",
769
+ "model.visual.blocks.13.norm2.bias": "model-00002-of-00026.safetensors",
770
+ "model.visual.blocks.13.norm2.weight": "model-00002-of-00026.safetensors",
771
+ "model.visual.blocks.14.attn.proj.bias": "model-00002-of-00026.safetensors",
772
+ "model.visual.blocks.14.attn.proj.weight": "model-00002-of-00026.safetensors",
773
+ "model.visual.blocks.14.attn.qkv.bias": "model-00002-of-00026.safetensors",
774
+ "model.visual.blocks.14.attn.qkv.weight": "model-00002-of-00026.safetensors",
775
+ "model.visual.blocks.14.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
776
+ "model.visual.blocks.14.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
777
+ "model.visual.blocks.14.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
778
+ "model.visual.blocks.14.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
779
+ "model.visual.blocks.14.norm1.bias": "model-00002-of-00026.safetensors",
780
+ "model.visual.blocks.14.norm1.weight": "model-00002-of-00026.safetensors",
781
+ "model.visual.blocks.14.norm2.bias": "model-00002-of-00026.safetensors",
782
+ "model.visual.blocks.14.norm2.weight": "model-00002-of-00026.safetensors",
783
+ "model.visual.blocks.15.attn.proj.bias": "model-00002-of-00026.safetensors",
784
+ "model.visual.blocks.15.attn.proj.weight": "model-00002-of-00026.safetensors",
785
+ "model.visual.blocks.15.attn.qkv.bias": "model-00002-of-00026.safetensors",
786
+ "model.visual.blocks.15.attn.qkv.weight": "model-00002-of-00026.safetensors",
787
+ "model.visual.blocks.15.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
788
+ "model.visual.blocks.15.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
789
+ "model.visual.blocks.15.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
790
+ "model.visual.blocks.15.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
791
+ "model.visual.blocks.15.norm1.bias": "model-00002-of-00026.safetensors",
792
+ "model.visual.blocks.15.norm1.weight": "model-00002-of-00026.safetensors",
793
+ "model.visual.blocks.15.norm2.bias": "model-00002-of-00026.safetensors",
794
+ "model.visual.blocks.15.norm2.weight": "model-00002-of-00026.safetensors",
795
+ "model.visual.blocks.16.attn.proj.bias": "model-00002-of-00026.safetensors",
796
+ "model.visual.blocks.16.attn.proj.weight": "model-00002-of-00026.safetensors",
797
+ "model.visual.blocks.16.attn.qkv.bias": "model-00002-of-00026.safetensors",
798
+ "model.visual.blocks.16.attn.qkv.weight": "model-00002-of-00026.safetensors",
799
+ "model.visual.blocks.16.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
800
+ "model.visual.blocks.16.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
801
+ "model.visual.blocks.16.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
802
+ "model.visual.blocks.16.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
803
+ "model.visual.blocks.16.norm1.bias": "model-00002-of-00026.safetensors",
804
+ "model.visual.blocks.16.norm1.weight": "model-00002-of-00026.safetensors",
805
+ "model.visual.blocks.16.norm2.bias": "model-00002-of-00026.safetensors",
806
+ "model.visual.blocks.16.norm2.weight": "model-00002-of-00026.safetensors",
807
+ "model.visual.blocks.17.attn.proj.bias": "model-00002-of-00026.safetensors",
808
+ "model.visual.blocks.17.attn.proj.weight": "model-00002-of-00026.safetensors",
809
+ "model.visual.blocks.17.attn.qkv.bias": "model-00002-of-00026.safetensors",
810
+ "model.visual.blocks.17.attn.qkv.weight": "model-00002-of-00026.safetensors",
811
+ "model.visual.blocks.17.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
812
+ "model.visual.blocks.17.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
813
+ "model.visual.blocks.17.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
814
+ "model.visual.blocks.17.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
815
+ "model.visual.blocks.17.norm1.bias": "model-00002-of-00026.safetensors",
816
+ "model.visual.blocks.17.norm1.weight": "model-00002-of-00026.safetensors",
817
+ "model.visual.blocks.17.norm2.bias": "model-00002-of-00026.safetensors",
818
+ "model.visual.blocks.17.norm2.weight": "model-00002-of-00026.safetensors",
819
+ "model.visual.blocks.18.attn.proj.bias": "model-00002-of-00026.safetensors",
820
+ "model.visual.blocks.18.attn.proj.weight": "model-00002-of-00026.safetensors",
821
+ "model.visual.blocks.18.attn.qkv.bias": "model-00002-of-00026.safetensors",
822
+ "model.visual.blocks.18.attn.qkv.weight": "model-00002-of-00026.safetensors",
823
+ "model.visual.blocks.18.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
824
+ "model.visual.blocks.18.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
825
+ "model.visual.blocks.18.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
826
+ "model.visual.blocks.18.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
827
+ "model.visual.blocks.18.norm1.bias": "model-00002-of-00026.safetensors",
828
+ "model.visual.blocks.18.norm1.weight": "model-00002-of-00026.safetensors",
829
+ "model.visual.blocks.18.norm2.bias": "model-00002-of-00026.safetensors",
830
+ "model.visual.blocks.18.norm2.weight": "model-00002-of-00026.safetensors",
831
+ "model.visual.blocks.19.attn.proj.bias": "model-00002-of-00026.safetensors",
832
+ "model.visual.blocks.19.attn.proj.weight": "model-00002-of-00026.safetensors",
833
+ "model.visual.blocks.19.attn.qkv.bias": "model-00002-of-00026.safetensors",
834
+ "model.visual.blocks.19.attn.qkv.weight": "model-00002-of-00026.safetensors",
835
+ "model.visual.blocks.19.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
836
+ "model.visual.blocks.19.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
837
+ "model.visual.blocks.19.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
838
+ "model.visual.blocks.19.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
839
+ "model.visual.blocks.19.norm1.bias": "model-00002-of-00026.safetensors",
840
+ "model.visual.blocks.19.norm1.weight": "model-00002-of-00026.safetensors",
841
+ "model.visual.blocks.19.norm2.bias": "model-00002-of-00026.safetensors",
842
+ "model.visual.blocks.19.norm2.weight": "model-00002-of-00026.safetensors",
843
+ "model.visual.blocks.2.attn.proj.bias": "model-00002-of-00026.safetensors",
844
+ "model.visual.blocks.2.attn.proj.weight": "model-00002-of-00026.safetensors",
845
+ "model.visual.blocks.2.attn.qkv.bias": "model-00002-of-00026.safetensors",
846
+ "model.visual.blocks.2.attn.qkv.weight": "model-00002-of-00026.safetensors",
847
+ "model.visual.blocks.2.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
848
+ "model.visual.blocks.2.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
849
+ "model.visual.blocks.2.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
850
+ "model.visual.blocks.2.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
851
+ "model.visual.blocks.2.norm1.bias": "model-00002-of-00026.safetensors",
852
+ "model.visual.blocks.2.norm1.weight": "model-00002-of-00026.safetensors",
853
+ "model.visual.blocks.2.norm2.bias": "model-00002-of-00026.safetensors",
854
+ "model.visual.blocks.2.norm2.weight": "model-00002-of-00026.safetensors",
855
+ "model.visual.blocks.20.attn.proj.bias": "model-00002-of-00026.safetensors",
856
+ "model.visual.blocks.20.attn.proj.weight": "model-00002-of-00026.safetensors",
857
+ "model.visual.blocks.20.attn.qkv.bias": "model-00002-of-00026.safetensors",
858
+ "model.visual.blocks.20.attn.qkv.weight": "model-00002-of-00026.safetensors",
859
+ "model.visual.blocks.20.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
860
+ "model.visual.blocks.20.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
861
+ "model.visual.blocks.20.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
862
+ "model.visual.blocks.20.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
863
+ "model.visual.blocks.20.norm1.bias": "model-00002-of-00026.safetensors",
864
+ "model.visual.blocks.20.norm1.weight": "model-00002-of-00026.safetensors",
865
+ "model.visual.blocks.20.norm2.bias": "model-00002-of-00026.safetensors",
866
+ "model.visual.blocks.20.norm2.weight": "model-00002-of-00026.safetensors",
867
+ "model.visual.blocks.21.attn.proj.bias": "model-00002-of-00026.safetensors",
868
+ "model.visual.blocks.21.attn.proj.weight": "model-00002-of-00026.safetensors",
869
+ "model.visual.blocks.21.attn.qkv.bias": "model-00002-of-00026.safetensors",
870
+ "model.visual.blocks.21.attn.qkv.weight": "model-00002-of-00026.safetensors",
871
+ "model.visual.blocks.21.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
872
+ "model.visual.blocks.21.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
873
+ "model.visual.blocks.21.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
874
+ "model.visual.blocks.21.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
875
+ "model.visual.blocks.21.norm1.bias": "model-00002-of-00026.safetensors",
876
+ "model.visual.blocks.21.norm1.weight": "model-00002-of-00026.safetensors",
877
+ "model.visual.blocks.21.norm2.bias": "model-00002-of-00026.safetensors",
878
+ "model.visual.blocks.21.norm2.weight": "model-00002-of-00026.safetensors",
879
+ "model.visual.blocks.22.attn.proj.bias": "model-00002-of-00026.safetensors",
880
+ "model.visual.blocks.22.attn.proj.weight": "model-00002-of-00026.safetensors",
881
+ "model.visual.blocks.22.attn.qkv.bias": "model-00002-of-00026.safetensors",
882
+ "model.visual.blocks.22.attn.qkv.weight": "model-00002-of-00026.safetensors",
883
+ "model.visual.blocks.22.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
884
+ "model.visual.blocks.22.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
885
+ "model.visual.blocks.22.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
886
+ "model.visual.blocks.22.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
887
+ "model.visual.blocks.22.norm1.bias": "model-00002-of-00026.safetensors",
888
+ "model.visual.blocks.22.norm1.weight": "model-00002-of-00026.safetensors",
889
+ "model.visual.blocks.22.norm2.bias": "model-00002-of-00026.safetensors",
890
+ "model.visual.blocks.22.norm2.weight": "model-00002-of-00026.safetensors",
891
+ "model.visual.blocks.23.attn.proj.bias": "model-00002-of-00026.safetensors",
892
+ "model.visual.blocks.23.attn.proj.weight": "model-00002-of-00026.safetensors",
893
+ "model.visual.blocks.23.attn.qkv.bias": "model-00002-of-00026.safetensors",
894
+ "model.visual.blocks.23.attn.qkv.weight": "model-00002-of-00026.safetensors",
895
+ "model.visual.blocks.23.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
896
+ "model.visual.blocks.23.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
897
+ "model.visual.blocks.23.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
898
+ "model.visual.blocks.23.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
899
+ "model.visual.blocks.23.norm1.bias": "model-00002-of-00026.safetensors",
900
+ "model.visual.blocks.23.norm1.weight": "model-00002-of-00026.safetensors",
901
+ "model.visual.blocks.23.norm2.bias": "model-00002-of-00026.safetensors",
902
+ "model.visual.blocks.23.norm2.weight": "model-00002-of-00026.safetensors",
903
+ "model.visual.blocks.24.attn.proj.bias": "model-00002-of-00026.safetensors",
904
+ "model.visual.blocks.24.attn.proj.weight": "model-00002-of-00026.safetensors",
905
+ "model.visual.blocks.24.attn.qkv.bias": "model-00002-of-00026.safetensors",
906
+ "model.visual.blocks.24.attn.qkv.weight": "model-00002-of-00026.safetensors",
907
+ "model.visual.blocks.24.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
908
+ "model.visual.blocks.24.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
909
+ "model.visual.blocks.24.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
910
+ "model.visual.blocks.24.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
911
+ "model.visual.blocks.24.norm1.bias": "model-00002-of-00026.safetensors",
912
+ "model.visual.blocks.24.norm1.weight": "model-00002-of-00026.safetensors",
913
+ "model.visual.blocks.24.norm2.bias": "model-00002-of-00026.safetensors",
914
+ "model.visual.blocks.24.norm2.weight": "model-00002-of-00026.safetensors",
915
+ "model.visual.blocks.25.attn.proj.bias": "model-00002-of-00026.safetensors",
916
+ "model.visual.blocks.25.attn.proj.weight": "model-00002-of-00026.safetensors",
917
+ "model.visual.blocks.25.attn.qkv.bias": "model-00002-of-00026.safetensors",
918
+ "model.visual.blocks.25.attn.qkv.weight": "model-00002-of-00026.safetensors",
919
+ "model.visual.blocks.25.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
920
+ "model.visual.blocks.25.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
921
+ "model.visual.blocks.25.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
922
+ "model.visual.blocks.25.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
923
+ "model.visual.blocks.25.norm1.bias": "model-00002-of-00026.safetensors",
924
+ "model.visual.blocks.25.norm1.weight": "model-00002-of-00026.safetensors",
925
+ "model.visual.blocks.25.norm2.bias": "model-00002-of-00026.safetensors",
926
+ "model.visual.blocks.25.norm2.weight": "model-00002-of-00026.safetensors",
927
+ "model.visual.blocks.26.attn.proj.bias": "model-00002-of-00026.safetensors",
928
+ "model.visual.blocks.26.attn.proj.weight": "model-00002-of-00026.safetensors",
929
+ "model.visual.blocks.26.attn.qkv.bias": "model-00002-of-00026.safetensors",
930
+ "model.visual.blocks.26.attn.qkv.weight": "model-00002-of-00026.safetensors",
931
+ "model.visual.blocks.26.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
932
+ "model.visual.blocks.26.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
933
+ "model.visual.blocks.26.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
934
+ "model.visual.blocks.26.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
935
+ "model.visual.blocks.26.norm1.bias": "model-00002-of-00026.safetensors",
936
+ "model.visual.blocks.26.norm1.weight": "model-00002-of-00026.safetensors",
937
+ "model.visual.blocks.26.norm2.bias": "model-00002-of-00026.safetensors",
938
+ "model.visual.blocks.26.norm2.weight": "model-00002-of-00026.safetensors",
939
+ "model.visual.blocks.3.attn.proj.bias": "model-00002-of-00026.safetensors",
940
+ "model.visual.blocks.3.attn.proj.weight": "model-00002-of-00026.safetensors",
941
+ "model.visual.blocks.3.attn.qkv.bias": "model-00002-of-00026.safetensors",
942
+ "model.visual.blocks.3.attn.qkv.weight": "model-00002-of-00026.safetensors",
943
+ "model.visual.blocks.3.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
944
+ "model.visual.blocks.3.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
945
+ "model.visual.blocks.3.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
946
+ "model.visual.blocks.3.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
947
+ "model.visual.blocks.3.norm1.bias": "model-00002-of-00026.safetensors",
948
+ "model.visual.blocks.3.norm1.weight": "model-00002-of-00026.safetensors",
949
+ "model.visual.blocks.3.norm2.bias": "model-00002-of-00026.safetensors",
950
+ "model.visual.blocks.3.norm2.weight": "model-00002-of-00026.safetensors",
951
+ "model.visual.blocks.4.attn.proj.bias": "model-00002-of-00026.safetensors",
952
+ "model.visual.blocks.4.attn.proj.weight": "model-00002-of-00026.safetensors",
953
+ "model.visual.blocks.4.attn.qkv.bias": "model-00002-of-00026.safetensors",
954
+ "model.visual.blocks.4.attn.qkv.weight": "model-00002-of-00026.safetensors",
955
+ "model.visual.blocks.4.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
956
+ "model.visual.blocks.4.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
957
+ "model.visual.blocks.4.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
958
+ "model.visual.blocks.4.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
959
+ "model.visual.blocks.4.norm1.bias": "model-00002-of-00026.safetensors",
960
+ "model.visual.blocks.4.norm1.weight": "model-00002-of-00026.safetensors",
961
+ "model.visual.blocks.4.norm2.bias": "model-00002-of-00026.safetensors",
962
+ "model.visual.blocks.4.norm2.weight": "model-00002-of-00026.safetensors",
963
+ "model.visual.blocks.5.attn.proj.bias": "model-00002-of-00026.safetensors",
964
+ "model.visual.blocks.5.attn.proj.weight": "model-00002-of-00026.safetensors",
965
+ "model.visual.blocks.5.attn.qkv.bias": "model-00002-of-00026.safetensors",
966
+ "model.visual.blocks.5.attn.qkv.weight": "model-00002-of-00026.safetensors",
967
+ "model.visual.blocks.5.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
968
+ "model.visual.blocks.5.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
969
+ "model.visual.blocks.5.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
970
+ "model.visual.blocks.5.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
971
+ "model.visual.blocks.5.norm1.bias": "model-00002-of-00026.safetensors",
972
+ "model.visual.blocks.5.norm1.weight": "model-00002-of-00026.safetensors",
973
+ "model.visual.blocks.5.norm2.bias": "model-00002-of-00026.safetensors",
974
+ "model.visual.blocks.5.norm2.weight": "model-00002-of-00026.safetensors",
975
+ "model.visual.blocks.6.attn.proj.bias": "model-00002-of-00026.safetensors",
976
+ "model.visual.blocks.6.attn.proj.weight": "model-00002-of-00026.safetensors",
977
+ "model.visual.blocks.6.attn.qkv.bias": "model-00002-of-00026.safetensors",
978
+ "model.visual.blocks.6.attn.qkv.weight": "model-00002-of-00026.safetensors",
979
+ "model.visual.blocks.6.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
980
+ "model.visual.blocks.6.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
981
+ "model.visual.blocks.6.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
982
+ "model.visual.blocks.6.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
983
+ "model.visual.blocks.6.norm1.bias": "model-00002-of-00026.safetensors",
984
+ "model.visual.blocks.6.norm1.weight": "model-00002-of-00026.safetensors",
985
+ "model.visual.blocks.6.norm2.bias": "model-00002-of-00026.safetensors",
986
+ "model.visual.blocks.6.norm2.weight": "model-00002-of-00026.safetensors",
987
+ "model.visual.blocks.7.attn.proj.bias": "model-00002-of-00026.safetensors",
988
+ "model.visual.blocks.7.attn.proj.weight": "model-00002-of-00026.safetensors",
989
+ "model.visual.blocks.7.attn.qkv.bias": "model-00002-of-00026.safetensors",
990
+ "model.visual.blocks.7.attn.qkv.weight": "model-00002-of-00026.safetensors",
991
+ "model.visual.blocks.7.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
992
+ "model.visual.blocks.7.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
993
+ "model.visual.blocks.7.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
994
+ "model.visual.blocks.7.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
995
+ "model.visual.blocks.7.norm1.bias": "model-00002-of-00026.safetensors",
996
+ "model.visual.blocks.7.norm1.weight": "model-00002-of-00026.safetensors",
997
+ "model.visual.blocks.7.norm2.bias": "model-00002-of-00026.safetensors",
998
+ "model.visual.blocks.7.norm2.weight": "model-00002-of-00026.safetensors",
999
+ "model.visual.blocks.8.attn.proj.bias": "model-00002-of-00026.safetensors",
1000
+ "model.visual.blocks.8.attn.proj.weight": "model-00002-of-00026.safetensors",
1001
+ "model.visual.blocks.8.attn.qkv.bias": "model-00002-of-00026.safetensors",
1002
+ "model.visual.blocks.8.attn.qkv.weight": "model-00002-of-00026.safetensors",
1003
+ "model.visual.blocks.8.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
1004
+ "model.visual.blocks.8.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
1005
+ "model.visual.blocks.8.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
1006
+ "model.visual.blocks.8.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
1007
+ "model.visual.blocks.8.norm1.bias": "model-00002-of-00026.safetensors",
1008
+ "model.visual.blocks.8.norm1.weight": "model-00002-of-00026.safetensors",
1009
+ "model.visual.blocks.8.norm2.bias": "model-00002-of-00026.safetensors",
1010
+ "model.visual.blocks.8.norm2.weight": "model-00002-of-00026.safetensors",
1011
+ "model.visual.blocks.9.attn.proj.bias": "model-00002-of-00026.safetensors",
1012
+ "model.visual.blocks.9.attn.proj.weight": "model-00002-of-00026.safetensors",
1013
+ "model.visual.blocks.9.attn.qkv.bias": "model-00002-of-00026.safetensors",
1014
+ "model.visual.blocks.9.attn.qkv.weight": "model-00002-of-00026.safetensors",
1015
+ "model.visual.blocks.9.mlp.linear_fc1.bias": "model-00002-of-00026.safetensors",
1016
+ "model.visual.blocks.9.mlp.linear_fc1.weight": "model-00002-of-00026.safetensors",
1017
+ "model.visual.blocks.9.mlp.linear_fc2.bias": "model-00002-of-00026.safetensors",
1018
+ "model.visual.blocks.9.mlp.linear_fc2.weight": "model-00002-of-00026.safetensors",
1019
+ "model.visual.blocks.9.norm1.bias": "model-00002-of-00026.safetensors",
1020
+ "model.visual.blocks.9.norm1.weight": "model-00002-of-00026.safetensors",
1021
+ "model.visual.blocks.9.norm2.bias": "model-00002-of-00026.safetensors",
1022
+ "model.visual.blocks.9.norm2.weight": "model-00002-of-00026.safetensors",
1023
+ "model.visual.merger.linear_fc1.bias": "model-00002-of-00026.safetensors",
1024
+ "model.visual.merger.linear_fc1.weight": "model-00001-of-00026.safetensors",
1025
+ "model.visual.merger.linear_fc2.bias": "model-00002-of-00026.safetensors",
1026
+ "model.visual.merger.linear_fc2.weight": "model-00001-of-00026.safetensors",
1027
+ "model.visual.merger.norm.bias": "model-00002-of-00026.safetensors",
1028
+ "model.visual.merger.norm.weight": "model-00002-of-00026.safetensors",
1029
+ "model.visual.patch_embed.proj.bias": "model-00002-of-00026.safetensors",
1030
+ "model.visual.patch_embed.proj.weight": "model-00002-of-00026.safetensors",
1031
+ "model.visual.pos_embed.weight": "model-00002-of-00026.safetensors",
1032
+ "mtp.fc.weight": "model-00026-of-00026.safetensors",
1033
+ "mtp.layers.0.input_layernorm.weight": "model-00026-of-00026.safetensors",
1034
+ "mtp.layers.0.mlp.experts.down_proj": "model-00026-of-00026.safetensors",
1035
+ "mtp.layers.0.mlp.experts.gate_up_proj": "model-00025-of-00026.safetensors",
1036
+ "mtp.layers.0.mlp.gate.weight": "model-00026-of-00026.safetensors",
1037
+ "mtp.layers.0.mlp.shared_expert.down_proj.weight": "model-00026-of-00026.safetensors",
1038
+ "mtp.layers.0.mlp.shared_expert.gate_proj.weight": "model-00026-of-00026.safetensors",
1039
+ "mtp.layers.0.mlp.shared_expert.up_proj.weight": "model-00026-of-00026.safetensors",
1040
+ "mtp.layers.0.mlp.shared_expert_gate.weight": "model-00026-of-00026.safetensors",
1041
+ "mtp.layers.0.post_attention_layernorm.weight": "model-00026-of-00026.safetensors",
1042
+ "mtp.layers.0.self_attn.k_norm.weight": "model-00026-of-00026.safetensors",
1043
+ "mtp.layers.0.self_attn.k_proj.weight": "model-00026-of-00026.safetensors",
1044
+ "mtp.layers.0.self_attn.o_proj.weight": "model-00026-of-00026.safetensors",
1045
+ "mtp.layers.0.self_attn.q_norm.weight": "model-00026-of-00026.safetensors",
1046
+ "mtp.layers.0.self_attn.q_proj.weight": "model-00026-of-00026.safetensors",
1047
+ "mtp.layers.0.self_attn.v_proj.weight": "model-00026-of-00026.safetensors",
1048
+ "mtp.norm.weight": "model-00026-of-00026.safetensors",
1049
+ "mtp.pre_fc_norm_embedding.weight": "model-00026-of-00026.safetensors",
1050
+ "mtp.pre_fc_norm_hidden.weight": "model-00026-of-00026.safetensors"
1051
+ }
1052
+ }
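The `weight_map` above places the `model.visual.*` vision tower almost entirely in shard 2 and the `mtp.*` multi-token-prediction head in shards 25–26 of the 26-shard checkpoint. A minimal sketch for inspecting that layout locally — the checkpoint directory name is illustrative, and the index is assumed to follow the standard safetensors `{"metadata": ..., "weight_map": ...}` format:

```python
import json
from collections import Counter

# Illustrative local path; point this at the downloaded checkpoint directory.
index_path = "Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled/model.safetensors.index.json"

with open(index_path) as f:
    index = json.load(f)

weight_map = index["weight_map"]  # tensor name -> shard filename

# Tensors per shard: the vision tower clusters in model-00002, the MTP
# head in model-00025/00026, matching the entries above.
for shard, count in sorted(Counter(weight_map.values()).items()):
    print(f"{shard}: {count} tensors")

# Top-level module breakdown ('model' vs. 'mtp').
print(Counter(name.split(".")[0] for name in weight_map))
```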
processor_config.json ADDED
@@ -0,0 +1,63 @@
+ {
+ "image_processor": {
+ "data_format": "channels_first",
+ "do_convert_rgb": true,
+ "do_normalize": true,
+ "do_rescale": true,
+ "do_resize": true,
+ "image_mean": [
+ 0.5,
+ 0.5,
+ 0.5
+ ],
+ "image_processor_type": "Qwen2VLImageProcessorFast",
+ "image_std": [
+ 0.5,
+ 0.5,
+ 0.5
+ ],
+ "merge_size": 2,
+ "patch_size": 16,
+ "resample": 3,
+ "rescale_factor": 0.00392156862745098,
+ "size": {
+ "longest_edge": 16777216,
+ "shortest_edge": 65536
+ },
+ "temporal_patch_size": 2
+ },
+ "processor_class": "Qwen3VLProcessor",
+ "video_processor": {
+ "data_format": "channels_first",
+ "default_to_square": true,
+ "do_convert_rgb": true,
+ "do_normalize": true,
+ "do_rescale": true,
+ "do_resize": true,
+ "do_sample_frames": true,
+ "fps": 2,
+ "image_mean": [
+ 0.5,
+ 0.5,
+ 0.5
+ ],
+ "image_std": [
+ 0.5,
+ 0.5,
+ 0.5
+ ],
+ "max_frames": 768,
+ "merge_size": 2,
+ "min_frames": 4,
+ "patch_size": 16,
+ "resample": 3,
+ "rescale_factor": 0.00392156862745098,
+ "return_metadata": false,
+ "size": {
+ "longest_edge": 25165824,
+ "shortest_edge": 4096
+ },
+ "temporal_patch_size": 2,
+ "video_processor_type": "Qwen3VLVideoProcessor"
+ }
+ }
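In this config, `rescale_factor: 0.00392…` is simply 1/255, and with `patch_size: 16` and `merge_size: 2` each 32×32-pixel region becomes one visual token (a 2×2 group of 16×16 patches is merged before entering the language model). The `size` bounds appear to be total-pixel budgets in the Qwen2-VL convention (65536 = 256², 16777216 = 4096²). A back-of-the-envelope sketch of the implied token count — this ignores the processor's exact resizing, which also snaps dimensions to multiples of `patch_size * merge_size` and clamps total pixels to those bounds:

```python
# Rough visual-token arithmetic implied by patch_size=16 and merge_size=2:
# one token per 32x32-pixel region after the 2x2 patch merger.
PATCH = 16
MERGE = 2

def approx_visual_tokens(height: int, width: int) -> int:
    patches = (height // PATCH) * (width // PATCH)
    return patches // (MERGE * MERGE)

print(approx_visual_tokens(1024, 1024))  # 4096 patches -> 1024 tokens
```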
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:87a7830d63fcf43bf241c3c5242e96e62dd3fdc29224ca26fed8ea333db72de4
+ size 19989343
tokenizer_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+ "add_prefix_space": false,
+ "audio_bos_token": "<|audio_start|>",
+ "audio_eos_token": "<|audio_end|>",
+ "audio_token": "<|audio_pad|>",
+ "backend": "tokenizers",
+ "bos_token": null,
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|im_end|>",
+ "errors": "replace",
+ "image_token": "<|image_pad|>",
+ "is_local": false,
+ "model_max_length": 262144,
+ "model_specific_special_tokens": {
+ "audio_bos_token": "<|audio_start|>",
+ "audio_eos_token": "<|audio_end|>",
+ "audio_token": "<|audio_pad|>",
+ "image_token": "<|image_pad|>",
+ "video_token": "<|video_pad|>",
+ "vision_bos_token": "<|vision_start|>",
+ "vision_eos_token": "<|vision_end|>"
+ },
+ "pad_token": "<|vision_pad|>",
+ "padding_side": "right",
+ "pretokenize_regex": "(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\\r\\n\\p{L}\\p{N}]?[\\p{L}\\p{M}]+|\\p{N}| ?[^\\s\\p{L}\\p{M}\\p{N}]+[\\r\\n]*|\\s*[\\r\\n]+|\\s+(?!\\S)|\\s+",
+ "processor_class": "Qwen3VLProcessor",
+ "split_special_tokens": false,
+ "tokenizer_class": "TokenizersBackend",
+ "unk_token": null,
+ "video_token": "<|video_pad|>",
+ "vision_bos_token": "<|vision_start|>",
+ "vision_eos_token": "<|vision_end|>",
+ "chat_template": "{%- set image_count = namespace(value=0) %}\n{%- set video_count = namespace(value=0) %}\n{%- macro render_content(content, do_vision_count, is_system_content=false) %}\n {%- if content is string %}\n {{- content }}\n {%- elif content is iterable and content is not mapping %}\n {%- for item in content %}\n {%- if 'image' in item or 'image_url' in item or item.type == 'image' %}\n {%- if is_system_content %}\n {{- raise_exception('System message cannot contain images.') }}\n {%- endif %}\n {%- if do_vision_count %}\n {%- set image_count.value = image_count.value + 1 %}\n {%- endif %}\n {%- if add_vision_id %}\n {{- 'Picture ' ~ image_count.value ~ ': ' }}\n {%- endif %}\n {{- '<|vision_start|><|image_pad|><|vision_end|>' }}\n {%- elif 'video' in item or item.type == 'video' %}\n {%- if is_system_content %}\n {{- raise_exception('System message cannot contain videos.') }}\n {%- endif %}\n {%- if do_vision_count %}\n {%- set video_count.value = video_count.value + 1 %}\n {%- endif %}\n {%- if add_vision_id %}\n {{- 'Video ' ~ video_count.value ~ ': ' }}\n {%- endif %}\n {{- '<|vision_start|><|video_pad|><|vision_end|>' }}\n {%- elif 'text' in item %}\n {{- item.text }}\n {%- else %}\n {{- raise_exception('Unexpected item type in content.') }}\n {%- endif %}\n {%- endfor %}\n {%- elif content is none or content is undefined %}\n {{- '' }}\n {%- else %}\n {{- raise_exception('Unexpected content type.') }}\n {%- endif %}\n{%- endmacro %}\n{%- if not messages %}\n {{- raise_exception('No messages provided.') }}\n{%- endif %}\n{%- set num_sys = 0 %}\n{%- set merged_system = '' %}\n{%- if messages[0].role == 'system' or messages[0].role == 'developer' %}\n {%- set first = render_content(messages[0].content, false, true)|trim %}\n {%- if messages|length > 1 and (messages[1].role == 'system' or messages[1].role == 'developer') %}\n {%- set second = render_content(messages[1].content, false, true)|trim %}\n {%- set merged_system = first + '\\n' + second %}\n {%- set num_sys = 2 %}\n {%- else %}\n {%- set merged_system = first %}\n {%- set num_sys = 1 %}\n {%- endif %}\n{%- endif %}\n{%- if tools and tools is iterable and tools is not mapping %}\n {{- '<|im_start|>system\\n' }}\n {{- \"# Tools\\n\\nYou have access to the following functions:\\n\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\" }}\n {{- '\\n\\nIf you choose to call a function ONLY reply in the following format with NO suffix:\\n\\n<tool_call>\\n<function=example_function_name>\\n<parameter=example_parameter_1>\\nvalue_1\\n</parameter>\\n<parameter=example_parameter_2>\\nThis is the value for the second parameter\\nthat can span\\nmultiple lines\\n</parameter>\\n</function>\\n</tool_call>\\n\\n<IMPORTANT>\\nReminder:\\n- Function calls MUST follow the specified format: an inner <function=...></function> block must be nested within <tool_call></tool_call> XML tags\\n- Required parameters MUST be specified\\n- You may provide optional reasoning for your function call in natural language BEFORE the function call, but NOT after\\n- If there is no function call available, answer the question like normal with your current knowledge and do not tell the user about function calls\\n</IMPORTANT>' }}\n {%- if merged_system %}\n {{- '\\n\\n' + merged_system }}\n {%- endif %}\n {{- '<|im_end|>\\n' }}\n{%- else %}\n {%- if merged_system %}\n {{- '<|im_start|>system\\n' + merged_system + '<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}\n{%- for message in messages[::-1] %}\n {%- set index = (messages|length - 1) - loop.index0 %}\n {%- if ns.multi_step_tool and message.role == \"user\" %}\n {%- set content = render_content(message.content, false)|trim %}\n {%- if not(content.startswith('<tool_response>') and content.endswith('</tool_response>')) %}\n {%- set ns.multi_step_tool = false %}\n {%- set ns.last_query_index = index %}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- for message in messages %}\n {%- if loop.index0 >= num_sys and message.role != \"system\" and message.role != \"developer\" %}\n {%- set content = render_content(message.content, true)|trim %}\n {%- if message.role == \"user\" %}\n {{- '<|im_start|>' + message.role + '\\n' + content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {%- set reasoning_content = '' %}\n {%- if message.reasoning_content is string %}\n {%- set reasoning_content = message.reasoning_content %}\n {%- else %}\n {%- if '</think>' in content %}\n {%- set reasoning_content = content.split('</think>')[0].rstrip('\\n').split('<think>')[-1].lstrip('\\n') %}\n {%- set content = content.split('</think>')[-1].lstrip('\\n') %}\n {%- endif %}\n {%- endif %}\n {%- set reasoning_content = reasoning_content|trim %}\n {%- if (preserve_thinking is defined and preserve_thinking is true) or (loop.index0 > ns.last_query_index) %}\n {{- '<|im_start|>' + message.role + '\\n<think>\\n' + reasoning_content + '\\n</think>\\n\\n' + content }}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- if message.tool_calls and message.tool_calls is iterable and message.tool_calls is not mapping %}\n {%- for tool_call in message.tool_calls %}\n {%- if tool_call.function is defined %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {%- if loop.first %}\n {%- if content|trim %}\n {{- '\\n\\n<tool_call>\\n<function=' + tool_call.name + '>\\n' }}\n {%- else %}\n {{- '<tool_call>\\n<function=' + tool_call.name + '>\\n' }}\n {%- endif %}\n {%- else %}\n {{- '\\n<tool_call>\\n<function=' + tool_call.name + '>\\n' }}\n {%- endif %}\n {%- if tool_call.arguments is mapping %}\n {%- for args_name in tool_call.arguments %}\n {%- set args_value = tool_call.arguments[args_name] %}\n {{- '<parameter=' + args_name + '>\\n' }}\n {%- set args_value = args_value | tojson | safe if args_value is mapping or (args_value is sequence and args_value is not string) else args_value | string %}\n {{- args_value }}\n {{- '\\n</parameter>\\n' }}\n {%- endfor %}\n {%- endif %}\n {{- '</function>\\n</tool_call>' }}\n {%- endfor %}\n {%- endif %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if loop.previtem and loop.previtem.role != \"tool\" %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- content }}\n {{- '\\n</tool_response>' }}\n {%- if not loop.last and loop.nextitem.role != \"tool\" %}\n {{- '<|im_end|>\\n' }}\n {%- elif loop.last %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n {%- if enable_thinking is defined and enable_thinking is false %}\n {{- '<think>\\n\\n</think>\\n\\n' }}\n {%- else %}\n {{- '<think>\\n' }}\n {%- endif %}\n{%- endif %}\n{#- Unsloth fixes - developer role, tool calling #}"
+ }
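The chat template above is what produces the reasoning-block behavior: with `add_generation_prompt` set it ends the prompt with `<|im_start|>assistant\n<think>\n`, so decoding begins inside the think block, while passing `enable_thinking=False` emits an empty `<think>\n\n</think>` pair and skips straight to the answer. A minimal usage sketch — the repo id is illustrative, and it relies on `apply_chat_template` forwarding extra keyword arguments to the Jinja template:

```python
from transformers import AutoTokenizer

# Illustrative repo id; substitute the path you downloaded this checkpoint to.
tok = AutoTokenizer.from_pretrained(
    "lordx64/Qwen3.6-35B-A3B-Claude-4.7-Opus-Reasoning-Distilled"
)

messages = [
    {"role": "system", "content": "You are a careful step-by-step reasoner."},
    {"role": "user", "content": "What is 17 * 24?"},
]

# Default: the rendered prompt ends with '<|im_start|>assistant\n<think>\n',
# so generation starts inside the reasoning block.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt.endswith("<think>\n"))  # True

# enable_thinking=False closes the think block immediately instead.
prompt_direct = tok.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
)
```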