finish up first set of quants and logs
- README.md +9 -22
- logs/perplexity-GLM-5.1-IQ2_KL.log +410 -0
- logs/perplexity-GLM-5.1-IQ3_KS.log +412 -0
- logs/perplexity-GLM-5.1-Q8_0.log +397 -0
- logs/perplexity-GLM-5.1-smol-IQ1_KT.log +409 -0
- logs/perplexity-GLM-5.1-smol-IQ2_KS.log +409 -0
README.md CHANGED
@@ -14,19 +14,6 @@ language:
 - zh
 ---
 
-## WIP
-
-Most of the small ones are done uploading, working on `smol-IQ1_KT` now.
-
-- [x] download original bf16 safetensors
-- [x] `convert_hf_to_gguf.py` using mainline llama.cpp
-- [x] quantize `--pure` q8_0 and confirm it looks similar enough to the existing GLM-5 model architecture
-- [x] run ik_llama.cpp llama-imatrix against the full bf16 to get a high-quality imatrix
-- [x] upload the imatrix so other people can begin quantizing with it as desired
-- [x] quantize/test/release remaining listed quants
-- [ ] experiment some with jukofyork's patch to see if any low 4ish BPW quants seem to align with QAT and give better PPL/KLD
-- [ ] potentially release some larger quants this time
-
 ## `ik_llama.cpp` imatrix Quantizations of zai-org/GLM-5.1
 *NOTE* `ik_llama.cpp` can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc. if you want to try it out before downloading my quants.
 
@@ -44,18 +31,18 @@ Finally, I *really* appreciate the support from [aifoundry.org](https://aifoundr
 ## Quant Collection
 Perplexity computed against *wiki.test.raw*. (lower is "better")
 
-
+TODO
 
 These two are just test quants for baseline perplexity comparison and not available for download here:
 * `BF16` 1404.406 GiB (16.003 BPW)
   - PPL TODO
-* `Q8_0`
-  - PPL
+* `Q8_0` 746.302 GiB (8.504 BPW)
+  - PPL over 565 chunks for n_ctx=512 = 2.7257 +/- 0.01498
 
 *NOTE*: The first split file is much smaller on purpose to only contain metadata, it's fine!
 
 ## IQ3_KS 320.216 GiB (3.649 BPW)
-
+PPL over 565 chunks for n_ctx=512 = 2.8780 +/- 0.01609
 
 NOTE: Actual used RAM/VRAM will be about 314.07 GiB despite larger model size reported due to unused blk.78/indexer/nextn tensors.
 
@@ -125,7 +112,7 @@ numactl -N ${SOCKET} -m ${SOCKET} \
 </details>
 
 ## IQ2_KL 261.988 GiB (2.985 BPW)
-
+PPL over 565 chunks for n_ctx=512 = 3.1275 +/- 0.01759
 
 NOTE: Actual used RAM/VRAM will be about 255.84 GiB despite larger model size reported due to unused blk.78/indexer/nextn tensors.
 
@@ -195,7 +182,7 @@ numactl -N ${SOCKET} -m ${SOCKET} \
 </details>
 
 ## smol-IQ2_KS 205.738 GiB (2.344 BPW)
-
+PPL over 565 chunks for n_ctx=512 = 3.8818 +/- 0.02300
 
 NOTE: Actual used RAM/VRAM will be about 200 GiB despite larger model size reported due to unused blk.78/indexer/nextn tensors.
 
@@ -264,8 +251,8 @@ numactl -N ${SOCKET} -m ${SOCKET} \
 
 </details>
 
-## smol-IQ1_KT
-
+## smol-IQ1_KT 169.190 GiB (1.928 BPW)
+PPL over 565 chunks for n_ctx=512 = 4.6654 +/- 0.02830
 
 NOTE: Actual used RAM/VRAM will be about 163.046 GiB despite larger model size reported due to unused blk.78/indexer/nextn tensors.
 
@@ -410,7 +397,7 @@ jukofyork mentioned useful links for details and experimental modified `q4_K` qu
 * https://github.com/ggml-org/llama.cpp/pull/17064#issuecomment-3528891329
 * https://github.com/ggml-org/llama.cpp/pull/19460#issuecomment-4200617220
 
-I may try that patch to `quantize_row_q4_0_ref()` to change `const float d = max / -8;` to `-7` similar to how we did Kimi-K2's `Q4_X` quantization type without imatrix on routed experts? Or
+I may try that patch to `quantize_row_q4_0_ref()` to change `const float d = max / -8;` to `-7`, similar to how we did Kimi-K2's `Q4_X` quantization type without imatrix on routed experts. Or try jukofyork's modified `q4_K`? I'll play around with some low 4ish BPW quants.
 
 ## References
 * [ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp)
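
Editor's note on reading the PPL figures above and in the logs below (a sketch, not part of the original README): as I understand `llama-perplexity`'s output, perplexity is the exponentiated mean negative log-likelihood of each token given its preceding context within a chunk, computed here over 565 chunks of n_ctx=512 tokens of *wiki.test.raw*:

$$
\mathrm{PPL} = \exp\left(-\frac{1}{N}\sum_{i=1}^{N} \log p(t_i \mid t_{<i})\right)
$$

The quoted `+/-` term is the statistical uncertainty of that estimate, so e.g. IQ2_KL's 3.1275 +/- 0.01759 versus Q8_0's 2.7257 +/- 0.01498 is a real gap well beyond the error bars, not noise.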
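
To make the `quantize_row_q4_0_ref()` idea in the diff above concrete, here is a minimal self-contained sketch of the `q4_0` block-rounding math with the experimental divisor change. It is simplified (float scale instead of ggml's fp16 `ggml_half`, no nibble packing, hypothetical function name), so treat it as illustrative of the technique rather than the actual ggml implementation:

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

#define QK4_0 32  // q4_0 block size in ggml

// Simplified sketch of the rounding math inside ggml's
// quantize_row_q4_0_ref(): one block of 32 weights is mapped to 4-bit
// codes against a single scale d derived from the largest-magnitude
// weight. The experimental patch is the marked divisor change.
static void q4_0_block_sketch(const float *x, float *d_out, int8_t *q_out) {
    float amax = 0.0f;  // largest |weight| in the block
    float max  = 0.0f;  // its signed value
    for (int j = 0; j < QK4_0; ++j) {
        if (fabsf(x[j]) > amax) { amax = fabsf(x[j]); max = x[j]; }
    }

    // Upstream: const float d = max / -8;  -> codes use the full [-8, 7]
    // Patched:  divide by -7 so the grid stays symmetric in [-7, 7],
    // which is speculated to line up better with QAT-style checkpoints.
    const float d  = max / -7;
    const float id = d ? 1.0f / d : 0.0f;

    *d_out = d;
    for (int j = 0; j < QK4_0; ++j) {
        int v = (int)(x[j] * id + 8.5f);  // round to nearest, shift to unsigned
        if (v > 15) v = 15;               // clamp to 4 bits
        q_out[j] = (int8_t)(v - 8);       // stored code; dequant = q * d
    }
}

int main(void) {
    float x[QK4_0], d;
    int8_t q[QK4_0];
    for (int j = 0; j < QK4_0; ++j) x[j] = sinf(0.37f * (float)j);  // toy weights
    q4_0_block_sketch(x, &d, q);
    printf("d = %f  q[0..3] = %d %d %d %d\n", d, q[0], q[1], q[2], q[3]);
    return 0;
}
```

With `/-8` the largest-magnitude weight lands exactly on the -8 code, biasing the grid toward the sign of the block max; with `/-7` the usable codes become symmetric around zero, which is the asymmetry the linked llama.cpp discussions are probing.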
logs/perplexity-GLM-5.1-IQ2_KL.log ADDED
@@ -0,0 +1,410 @@
numactl -N "$SOCKET" -m "$SOCKET" \
./build/bin/llama-perplexity \
-m "$model" \
-f wiki.test.raw \
--seed 1337 \
--ctx-size 512 \
-mla 3 \
-muge \
--merge-qkv \
-ub 4096 -b 4096 \
--no-mmap \
--numa numactl \
--threads 96 \
--threads-batch 128 \
--validate-quants

SOCKET is set to: 1
main: build = 4391 (67fc9c5e)
main: built with cc (Ubuntu 13.3.0-6ubuntu2~24.04.1) 13.3.0 for x86_64-linux-gnu
main: seed = 1337
CPU: using device CPU - 0 MiB free
llama_model_loader: additional 6 GGUFs metadata loaded.
llama_model_loader: loaded meta data with 62 key-value pairs and 1809 tensors from /mnt/raid/hf/GLM-5.1-GGUF/IQ2_KL/GLM-5.1-IQ2_KL-00001-of-00007.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = glm-dsa
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.sampling.top_p f32 = 0.950000
llama_model_loader: - kv 3: general.sampling.temp f32 = 1.000000
llama_model_loader: - kv 4: general.name str = GLM 5.1
llama_model_loader: - kv 5: general.version str = 5.1
llama_model_loader: - kv 6: general.basename str = GLM
llama_model_loader: - kv 7: general.size_label str = 256x22B
llama_model_loader: - kv 8: general.license str = mit
llama_model_loader: - kv 9: general.tags arr[str,1] = ["text-generation"]
llama_model_loader: - kv 10: general.languages arr[str,2] = ["en", "zh"]
llama_model_loader: - kv 11: glm-dsa.block_count u32 = 79
llama_model_loader: - kv 12: glm-dsa.context_length u32 = 202752
llama_model_loader: - kv 13: glm-dsa.embedding_length u32 = 6144
llama_model_loader: - kv 14: glm-dsa.feed_forward_length u32 = 12288
llama_model_loader: - kv 15: glm-dsa.attention.head_count u32 = 64
llama_model_loader: - kv 16: glm-dsa.attention.head_count_kv u32 = 1
llama_model_loader: - kv 17: glm-dsa.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 18: glm-dsa.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 19: glm-dsa.expert_used_count u32 = 8
llama_model_loader: - kv 20: glm-dsa.expert_group_count u32 = 1
llama_model_loader: - kv 21: glm-dsa.expert_group_used_count u32 = 1
llama_model_loader: - kv 22: glm-dsa.expert_gating_func u32 = 2
llama_model_loader: - kv 23: general.file_type u32 = 155
llama_model_loader: - kv 24: glm-dsa.leading_dense_block_count u32 = 3
llama_model_loader: - kv 25: glm-dsa.vocab_size u32 = 154880
llama_model_loader: - kv 26: glm-dsa.attention.q_lora_rank u32 = 2048
llama_model_loader: - kv 27: glm-dsa.attention.kv_lora_rank u32 = 512
llama_model_loader: - kv 28: glm-dsa.attention.key_length u32 = 576
llama_model_loader: - kv 29: glm-dsa.attention.value_length u32 = 512
llama_model_loader: - kv 30: glm-dsa.attention.key_length_mla u32 = 256
llama_model_loader: - kv 31: glm-dsa.attention.value_length_mla u32 = 256
llama_model_loader: - kv 32: glm-dsa.expert_feed_forward_length u32 = 2048
llama_model_loader: - kv 33: glm-dsa.expert_count u32 = 256
llama_model_loader: - kv 34: glm-dsa.expert_shared_count u32 = 1
llama_model_loader: - kv 35: glm-dsa.expert_weights_scale f32 = 2.500000
llama_model_loader: - kv 36: glm-dsa.expert_weights_norm bool = true
llama_model_loader: - kv 37: glm-dsa.rope.dimension_count u32 = 64
llama_model_loader: - kv 38: glm-dsa.nextn_predict_layers u32 = 1
llama_model_loader: - kv 39: glm-dsa.attention.indexer.head_count u32 = 32
llama_model_loader: - kv 40: glm-dsa.attention.indexer.key_length u32 = 128
llama_model_loader: - kv 41: glm-dsa.attention.indexer.top_k u32 = 2048
llama_model_loader: - kv 42: general.quantization_version u32 = 2
llama_model_loader: - kv 43: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 44: tokenizer.ggml.pre str = glm4
llama_model_loader: - kv 45: tokenizer.ggml.tokens arr[str,154880] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 46: tokenizer.ggml.token_type arr[i32,154880] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 47: tokenizer.ggml.merges arr[str,321649] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 48: tokenizer.ggml.eos_token_id u32 = 154820
llama_model_loader: - kv 49: tokenizer.ggml.padding_token_id u32 = 154820
llama_model_loader: - kv 50: tokenizer.ggml.bos_token_id u32 = 154822
llama_model_loader: - kv 51: tokenizer.ggml.eot_token_id u32 = 154827
llama_model_loader: - kv 52: tokenizer.ggml.unknown_token_id u32 = 154820
llama_model_loader: - kv 53: tokenizer.ggml.eom_token_id u32 = 154829
llama_model_loader: - kv 54: tokenizer.chat_template str = [gMASK]<sop>\n{%- if tools -%}\n{%- mac...
llama_model_loader: - kv 55: quantize.imatrix.file str = /mnt/data/models/ubergarm/GLM-5.1-GGU...
llama_model_loader: - kv 56: quantize.imatrix.dataset str = ubergarm-imatrix-calibration-corpus-v...
llama_model_loader: - kv 57: quantize.imatrix.entries_count i32 = 1003
llama_model_loader: - kv 58: quantize.imatrix.chunks_count i32 = 813
llama_model_loader: - kv 59: split.no u16 = 0
llama_model_loader: - kv 60: split.count u16 = 7
llama_model_loader: - kv 61: split.tensors.count i32 = 1809
llama_model_loader: - type f32: 630 tensors
llama_model_loader: - type q8_0: 396 tensors
llama_model_loader: - type iq4_k: 1 tensors
llama_model_loader: - type iq6_k: 317 tensors
llama_model_loader: - type iq5_ks: 240 tensors
llama_model_loader: - type iq3_ks: 75 tensors
llama_model_loader: - type iq2_kl: 150 tensors
load: 0 unused tokens
load: special_eot_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special_eom_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load: - 154820 ('<|endoftext|>')
load: - 154827 ('<|user|>')
load: - 154829 ('<|observation|>')
load: special tokens cache size = 36
load: token to piece cache size = 0.9811 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = glm-dsa
llm_load_print_meta: n_ctx_train = 202752
llm_load_print_meta: n_embd = 6144
llm_load_print_meta: n_layer = 79
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 64
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_swa_pattern = 1
llm_load_print_meta: n_embd_head_k = 256
llm_load_print_meta: n_embd_head_v = 256
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 16384
llm_load_print_meta: n_embd_v_gqa = 16384
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 12288
llm_load_print_meta: n_expert = 256
llm_load_print_meta: n_expert_used = 8
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 202752
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_n_group = 0
llm_load_print_meta: model type = 744B.A40B
llm_load_print_meta: model ftype = IQ2_KL - 2.6875 bpw
llm_load_print_meta: model params = 753.864 B
llm_load_print_meta: model size = 261.988 GiB (2.985 BPW)
llm_load_print_meta: repeating layers = 260.756 GiB (2.979 BPW, 751.961 B parameters)
llm_load_print_meta: general.name = GLM 5.1
llm_load_print_meta: n_layer_dense_lead = 3
llm_load_print_meta: n_lora_q = 2048
llm_load_print_meta: n_lora_kv = 512
llm_load_print_meta: n_ff_exp = 2048
llm_load_print_meta: n_expert_shared = 1
llm_load_print_meta: expert_weights_scale = 2.5
llm_load_print_meta: expert_weights_norm = 1
llm_load_print_meta: expert_gating_func = sigmoid
llm_load_print_meta: rope_yarn_log_mul = 0.0000
print_info: vocab type = BPE
print_info: n_vocab = 154880
print_info: n_merges = 321649
print_info: BOS token = 154822 '[gMASK]'
print_info: EOS token = 154820 '<|endoftext|>'
print_info: EOT token = 154827 '<|user|>'
print_info: EOM token = 154829 '<|observation|>'
print_info: UNK token = 154820 '<|endoftext|>'
print_info: PAD token = 154820 '<|endoftext|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 154838 '<|code_prefix|>'
print_info: FIM SUF token = 154840 '<|code_suffix|>'
print_info: FIM MID token = 154839 '<|code_middle|>'
print_info: EOG token = 154820 '<|endoftext|>'
print_info: EOG token = 154827 '<|user|>'
print_info: EOG token = 154829 '<|observation|>'
print_info: max token length = 1024
======================================= HAVE_FANCY_SIMD is defined
Free memory 0 MiB on device 0 is less the 1024 MiB safety margin
------------------- Layer sizes:
Layer 0: 334.94, 4.50, 339.44 672.00 MiB
Layer 1: 334.94, 4.50, 339.44 672.00 MiB
Layer 2: 334.94, 4.50, 339.44 672.00 MiB
Layer 3: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 4: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 5: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 6: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 7: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 8: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 9: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 10: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 11: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 12: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 13: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 14: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 15: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 16: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 17: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 18: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 19: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 20: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 21: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 22: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 23: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 24: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 25: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 26: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 27: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 28: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 29: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 30: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 31: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 32: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 33: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 34: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 35: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 36: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 37: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 38: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 39: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 40: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 41: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 42: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 43: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 44: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 45: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 46: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 47: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 48: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 49: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 50: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 51: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 52: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 53: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 54: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 55: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 56: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 57: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 58: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 59: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 60: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 61: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 62: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 63: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 64: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 65: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 66: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 67: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 68: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 69: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 70: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 71: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 72: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 73: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 74: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 75: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 76: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 77: 3515.74, 4.50, 3520.24 512.00 MiB
Layer 78: 0.00, 4.50, 4.50 96.00 MiB
Layer 79: 751.52, 1748.00, 2499.52 MiB (output layer)
--------------------------------------------------------------------------
Total : 264685.50, 2103.50, 266789.00 MiB
Free memory 0 MiB on device 0 is less the required compute buffer size 672 MiB
Memory required for model tensors + cache: 267541 MiB
Memory available on all devices - compute: 0 MiB
llm_load_tensors: ggml ctx size = 0.77 MiB
model has unused tensor blk.78.attn_norm.weight (size = 24576 bytes) -- ignoring
model has unused tensor blk.78.attn_q_a_norm.weight (size = 8192 bytes) -- ignoring
model has unused tensor blk.78.attn_kv_a_norm.weight (size = 2048 bytes) -- ignoring
model has unused tensor blk.78.attn_q_a.weight (size = 10420224 bytes) -- ignoring
model has unused tensor blk.78.attn_q_b.weight (size = 27787264 bytes) -- ignoring
model has unused tensor blk.78.attn_kv_a_mqa.weight (size = 3760128 bytes) -- ignoring
model has unused tensor blk.78.attn_output.weight (size = 83361792 bytes) -- ignoring
model has unused tensor blk.78.indexer.k_norm.weight (size = 512 bytes) -- ignoring
model has unused tensor blk.78.indexer.k_norm.bias (size = 512 bytes) -- ignoring
model has unused tensor blk.78.indexer.proj.weight (size = 208896 bytes) -- ignoring
model has unused tensor blk.78.indexer.attn_k.weight (size = 835584 bytes) -- ignoring
model has unused tensor blk.78.indexer.attn_q_b.weight (size = 6946816 bytes) -- ignoring
model has unused tensor blk.78.ffn_norm.weight (size = 24576 bytes) -- ignoring
model has unused tensor blk.78.ffn_gate_inp.weight (size = 6291456 bytes) -- ignoring
model has unused tensor blk.78.exp_probs_b.bias (size = 1024 bytes) -- ignoring
model has unused tensor blk.78.ffn_gate_exps.weight (size = 2116026368 bytes) -- ignoring
model has unused tensor blk.78.ffn_down_exps.weight (size = 2120220672 bytes) -- ignoring
model has unused tensor blk.78.ffn_up_exps.weight (size = 2116026368 bytes) -- ignoring
model has unused tensor blk.78.ffn_gate_shexp.weight (size = 8265728 bytes) -- ignoring
model has unused tensor blk.78.ffn_down_shexp.weight (size = 8282112 bytes) -- ignoring
model has unused tensor blk.78.ffn_up_shexp.weight (size = 8265728 bytes) -- ignoring
model has unused tensor blk.78.nextn.eh_proj.weight (size = 80216064 bytes) -- ignoring
model has unused tensor blk.78.nextn.enorm.weight (size = 24576 bytes) -- ignoring
model has unused tensor blk.78.nextn.hnorm.weight (size = 24576 bytes) -- ignoring
model has unused tensor blk.78.nextn.shared_head_norm.weight (size = 24576 bytes) -- ignoring
llm_load_tensors: offloading 0 repeating layers to GPU
llm_load_tensors: offloaded 0/80 layers to GPU
llm_load_tensors: CPU buffer size = 261984.39 MiB
....................................................................................................
============ llm_prepare_mla: need to compute 79 wkv_b tensors
================= Adjusted mainline llama.cpp MLA tensors to ik_llama.cpp
Computed blk.0.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.1.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.2.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.3.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.4.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.5.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.6.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.7.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.8.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.9.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.10.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.11.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.12.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.13.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.14.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.15.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.16.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.17.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.18.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.19.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.20.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.21.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.22.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.23.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.24.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.25.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.26.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.27.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.28.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.29.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.30.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.31.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.32.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.33.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.34.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.35.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.36.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.37.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.38.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.39.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.40.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.41.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.42.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.43.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.44.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.45.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.46.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in bllama_init_from_model: n_ctx = 4096
llama_init_from_model: n_batch = 4096
llama_init_from_model: n_ubatch = 4096
llama_init_from_model: flash_attn = 1
llama_init_from_model: mla_attn = 3
llama_init_from_model: attn_max_b = 0
llama_init_from_model: fused_moe = 1
llama_init_from_model: grouped er = 0
llama_init_from_model: fused_up_gate = 1
llama_init_from_model: fused_mmad = 1
llama_init_from_model: rope_cache = 0
llama_init_from_model: graph_reuse = 1
llama_init_from_model: k_cache_hadam = 0
llama_init_from_model: v_cache_hadam = 0
llama_init_from_model: split_mode_graph_scheduling = 0
llama_init_from_model: reduce_type = f16
llama_init_from_model: sched_async = 0
llama_init_from_model: ser = -1, 0
llama_init_from_model: freq_base = 1000000.0
llama_init_from_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 351.00 MiB
llama_init_from_model: KV self size = 351.00 MiB, c^KV (f16): 351.00 MiB, kv^T: not used
llama_init_from_model: CPU output buffer size = 4.73 MiB
llama_init_from_model: CPU compute buffer size = 2516.00 MiB
llama_init_from_model: graph nodes = 4166
llama_init_from_model: graph splits = 1
llama_init_from_model: enabling only_active_experts scheduling

system_info: n_threads = 96 (n_threads_batch = 128) / 512 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
perplexity: tokenizing the input ..
perplexity: tokenization took 356.343 ms
perplexity: calculating perplexity over 565 chunks, n_ctx=512, batch_size=4096, n_seq=8
perplexity: 20.00 seconds per pass - ETA 23.53 minutes
uffer CPU
Computed blk.47.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.48.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.49.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.50.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.51.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.52.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.53.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.54.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.55.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.56.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.57.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.58.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.59.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.60.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.61.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.62.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.63.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.64.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.65.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.66.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.67.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.68.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.69.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.70.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.71.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.72.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.73.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.74.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.75.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.76.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.77.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.78.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
[1]1.7122,[2]2.5279,[3]2.1036,[4]1.8500,[5]1.7110,[6]1.6420,[7]1.6288,[8]1.5789,[9]1.6053,[10]1.5702,[11]1.5510,[12]1.5998,[13]1.6134,[14]1.6822,[15]1.7852,[16]1.8940,[17]2.0140,[18]2.1720,[19]2.1661,[20]2.1661,[21]2.2477,[22]2.2604,[23]2.2443,[24]2.2226,[25]2.2165,[26]2.2111,[27]2.2152,[28]2.2460,[29]2.2772,[30]2.3397,[31]2.3977,[32]2.4428,[33]2.4924,[34]2.5255,[35]2.5788,[36]2.6219,[37]2.6485,[38]2.7391,[39]2.7759,[40]2.8157,[41]2.8882,[42]2.8816,[43]2.8993,[44]2.9323,[45]3.0042,[46]3.0592,[47]3.0198,[48]2.9825,[49]2.9527,[50]2.9433,[51]2.9656,[52]2.9944,[53]3.0408,[54]3.0740,[55]3.1047,[56]3.1354,[57]3.1352,[58]3.1607,[59]3.1735,[60]3.2127,[61]3.2492,[62]3.2995,[63]3.3352,[64]3.3658,[65]3.3829,[66]3.3869,[67]3.3693,[68]3.3520,[69]3.3778,[70]3.3659,[71]3.3501,[72]3.3446,[73]3.3527,[74]3.3799,[75]3.3881,[76]3.3496,[77]3.3135,[78]3.2805,[79]3.2522,[80]3.2267,[81]3.2041,[82]3.1910,[83]3.1964,[84]3.1735,[85]3.1711,[86]3.1628,[87]3.1569,[88]3.1439,[89]3.1266,[90]3.1172,[91]3.1028,[92]3.0882,[93]3.0852,[94]3.0792,[95]3.0617,[96]3.0607,[97]3.0709,[98]3.0665,[99]3.0522,[100]3.0329,[101]3.0440,[102]3.0316,[103]3.0248,[104]3.0222,[105]3.0367,[106]3.0616,[107]3.1132,[108]3.1236,[109]3.1329,[110]3.1698,[111]3.1928,[112]3.1680,[113]3.1539,[114]3.1569,[115]3.1526,[116]3.1578,[117]3.1604,[118]3.1649,[119]3.1674,[120]3.1652,[121]3.1564,[122]3.1589,[123]3.1414,[124]3.1397,[125]3.1392,[126]3.1349,[127]3.1341,[128]3.1503,[129]3.1570,[130]3.1591,[131]3.1716,[132]3.1736,[133]3.1700,[134]3.1822,[135]3.2028,[136]3.2003,[137]3.1987,[138]3.1938,[139]3.1816,[140]3.1986,[141]3.1976,[142]3.1938,[143]3.1897,[144]3.1896,[145]3.1862,[146]3.1787,[147]3.1648,[148]3.1601,[149]3.1531,[150]3.1509,[151]3.1420,[152]3.1393,[153]3.1419,[154]3.1385,[155]3.1367,[156]3.1386,[157]3.1395,[158]3.1385,[159]3.1488,[160]3.1578,[161]3.1632,[162]3.1586,[163]3.1488,[164]3.1550,[165]3.1499,[166]3.1500,[167]3.1675,[168]3.1683,[169]3.1932,[170]3.2128,[171]3.2253,[172]3.2449,[173]3.2330,[174]3.2178,[175]3.2030,[176]3.1897,[177]3.1747,[178]3.1588,[179]3.1463,[180]3.1340,[181]3.1306,[182]3.1456,[183]3.1636,[184]3.1897,[185]3.2068,[186]3.2165,[187]3.2372,[188]3.2616,[189]3.2842,[190]3.3011,[191]3.3151,[192]3.3229,[193]3.3293,[194]3.3320,[195]3.3296,[196]3.3350,[197]3.3502,[198]3.3645,[199]3.3637,[200]3.3715,[201]3.3722,[202]3.3770,[203]3.3766,[204]3.3848,[205]3.3933,[206]3.3995,[207]3.4049,[208]3.4063,[209]3.4070,[210]3.4043,[211]3.4070,[212]3.4080,[213]3.4113,[214]3.4161,[215]3.4193,[216]3.4234,[217]3.4269,[218]3.4350,[219]3.4307,[220]3.4278,[221]3.4245,[222]3.4248,[223]3.4225,[224]3.4282,[225]3.4299,[226]3.4351,[227]3.4309,[228]3.4293,[229]3.4207,[230]3.4129,[231]3.4109,[232]3.4112,[233]3.4124,[234]3.4064,[235]3.3965,[236]3.3909,[237]3.3827,[238]3.3851,[239]3.3997,[240]3.4149,[241]3.4284,[242]3.4384,[243]3.4511,[244]3.4638,[245]3.4783,[246]3.4903,[247]3.5050,[248]3.5166,[249]3.5179,[250]3.5174,[251]3.5052,[252]3.4936,[253]3.4845,[254]3.4812,[255]3.4829,[256]3.4830,[257]3.4768,[258]3.4740,[259]3.4647,[260]3.4590,[261]3.4514,[262]3.4461,[263]3.4392,[264]3.4352,[265]3.4303,[266]3.4272,[267]3.4194,[268]3.4123,[269]3.4081,[270]3.4061,[271]3.4032,[272]3.3970,[273]3.3948,[274]3.3855,[275]3.3781,[276]3.3687,[277]3.3606,[278]3.3492,[279]3.3495,[280]3.3527,[281]3.3563,[282]3.3623,[283]3.3683,[284]3.3709,[285]3.3737,[286]3.3808,[287]3.3925,[288]3.3934,[289]3.3945,[290]3.3982,[291]3.4012,[292]3.3969,[293]3.3894,[294]3.3861,[295]3.3857,[296]3.3809,[297]3.3774,[298]3.3751,[299]3.3727,[300]3.3739,[301]3.3742,[302]3.3708,[303]3.3696,[304]3.3670,[305]3.3632,[306]3.3584,[307]3.3613,[308]3.3665,[309]3.3777,[310]3.3670,[311]3.3614,[312]3.3555,[313]3.3547,[314]3.3519,[315]3.3545,[316]3.3543,[317]3.3541,[318]3.3539,[319]3.3514,[320]3.3494,[321]3.3523,[322]3.3542,[323]3.3475,[324]3.3441,[325]3.3441,[326]3.3421,[327]3.3454,[328]3.3435,[329]3.3439,[330]3.3421,[331]3.3384,[332]3.3388,[333]3.3411,[334]3.3449,[335]3.3452,[336]3.3457,[337]3.3453,[338]3.3445,[339]3.3444,[340]3.3471,[341]3.3496,[342]3.3512,[343]3.3567,[344]3.3628,[345]3.3722,[346]3.3709,[347]3.3637,[348]3.3579,[349]3.3542,[350]3.3468,[351]3.3382,[352]3.3347,[353]3.3314,[354]3.3274,[355]3.3235,[356]3.3211,[357]3.3178,[358]3.3139,[359]3.3145,[360]3.3116,[361]3.3056,[362]3.3010,[363]3.2964,[364]3.2955,[365]3.2926,[366]3.2915,[367]3.2879,[368]3.2828,[369]3.2783,[370]3.2774,[371]3.2748,[372]3.2765,[373]3.2769,[374]3.2784,[375]3.2758,[376]3.2715,[377]3.2677,[378]3.2680,[379]3.2720,[380]3.2683,[381]3.2643,[382]3.2602,[383]3.2634,[384]3.2699,[385]3.2748,[386]3.2833,[387]3.2879,[388]3.2941,[389]3.3020,[390]3.3039,[391]3.2948,[392]3.2877,[393]3.2793,[394]3.2784,[395]3.2726,[396]3.2669,[397]3.2603,[398]3.2529,[399]3.2467,[400]3.2393,[401]3.2317,[402]3.2256,[403]3.2178,[404]3.2100,[405]3.2038,[406]3.1957,[407]3.1892,[408]3.1816,[409]3.1756,[410]3.1681,[411]3.1618,[412]3.1568,[413]3.1528,[414]3.1520,[415]3.1492,[416]3.1468,[417]3.1407,[418]3.1343,[419]3.1397,[420]3.1338,[421]3.1314,[422]3.1323,[423]3.1288,[424]3.1239,[425]3.1200,[426]3.1167,[427]3.1146,[428]3.1109,[429]3.1062,[430]3.1023,[431]3.1026,[432]3.0988,[433]3.0935,[434]3.0893,[435]3.0853,[436]3.0795,[437]3.0733,[438]3.0684,[439]3.0674,[440]3.0651,[441]3.0639,[442]3.0589,[443]3.0640,[444]3.0722,[445]3.0694,[446]3.0685,[447]3.0702,[448]3.0727,[449]3.0787,[450]3.0797,[451]3.0820,[452]3.0865,[453]3.0945,[454]3.1004,[455]3.1032,[456]3.1092,[457]3.1071,[458]3.1106,[459]3.1122,[460]3.1190,[461]3.1251,[462]3.1282,[463]3.1286,[464]3.1269,[465]3.1263,[466]3.1322,[467]3.1315,[468]3.1287,[469]3.1343,[470]3.1357,[471]3.1382,[472]3.1413,[473]3.1428,[474]3.1441,[475]3.1456,[476]3.1482,[477]3.1513,[478]3.1536,[479]3.1561,[480]3.1586,[481]3.1618,[482]3.1642,[483]3.1672,[484]3.1637,[485]3.1677,[486]3.1695,[487]3.1763,[488]3.1817,[489]3.1872,[490]3.1871,[491]3.1927,[492]3.1975,[493]3.2008,[494]3.2062,[495]3.2124,[496]3.2125,[497]3.2137,[498]3.2159,[499]3.2182,[500]3.2211,[501]3.2219,[502]3.2236,[503]3.2290,[504]3.2344,[505]3.2352,[506]3.2349,[507]3.2369,[508]3.2400,[509]3.2463,[510]3.2495,[511]3.2544,[512]3.2479,[513]3.2443,[514]3.2393,[515]3.2356,[516]3.2329,[517]3.2309,[518]3.2278,[519]3.2225,[520]3.2203,[521]3.2180,[522]3.2134,[523]3.2114,[524]3.2139,[525]3.2112,[526]3.2079,[527]3.2079,[528]3.2063,[529]3.2021,[530]3.1984,[531]3.1954,[532]3.1952,[533]3.1923,[534]3.1908,[535]3.1875,[536]3.1851,[537]3.1792,[538]3.1756,[539]3.1723,[540]3.1724,[541]3.1727,[542]3.1706,[543]3.1705,[544]3.1711,[545]3.1702,[546]3.1708,[547]3.1684,[548]3.1661,[549]3.1623,[550]3.1574,[551]3.1518,[552]3.1476,[553]3.1427,[554]3.1389,[555]3.1339,[556]3.1300,[557]3.1254,[558]3.1252,[559]3.1217,[560]3.1207,[561]3.1223,[562]3.1231,[563]3.1270,[564]3.1294,[565]3.1275,
llama_print_timings: load time = 51969.03 ms
llama_print_timings: sample time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: prompt eval time = 1299473.13 ms / 289280 tokens ( 4.49 ms per token, 222.61 tokens per second)
llama_print_timings: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: total time = 1310019.16 ms / 289281 tokens

Final estimate: PPL over 565 chunks for n_ctx=512 = 3.1275 +/- 0.01759
logs/perplexity-GLM-5.1-IQ3_KS.log ADDED
@@ -0,0 +1,412 @@
| 1 |
+
model=/mnt/raid/hf/GLM-5.1-GGUF/IQ3_KS/GLM-5.1-IQ3_KS-00001-of-00008.gguf
|
| 2 |
+
|
| 3 |
+
numactl -N "$SOCKET" -m "$SOCKET" \
|
| 4 |
+
./build/bin/llama-perplexity \
|
| 5 |
+
-m "$model" \
|
| 6 |
+
-f wiki.test.raw \
|
| 7 |
+
--seed 1337 \
|
| 8 |
+
--ctx-size 512 \
|
| 9 |
+
-mla 3 \
|
| 10 |
+
-muge \
|
| 11 |
+
--merge-qkv \
|
| 12 |
+
-ub 4096 -b 4096 \
|
| 13 |
+
--no-mmap \
|
| 14 |
+
--numa numactl \
|
| 15 |
+
--threads 96 \
|
| 16 |
+
--threads-batch 128 \
|
| 17 |
+
--validate-quants
|
| 18 |
+
|
| 19 |
+
SOCKET is set to: 0
|
| 20 |
+
main: build = 4391 (67fc9c5e)
|
| 21 |
+
main: built with cc (Ubuntu 13.3.0-6ubuntu2~24.04.1) 13.3.0 for x86_64-linux-gnu
|
| 22 |
+
main: seed = 1337
|
| 23 |
+
CPU: using device CPU - 0 MiB free
|
| 24 |
+
llama_model_loader: additional 7 GGUFs metadata loaded.
|
| 25 |
+
llama_model_loader: loaded meta data with 62 key-value pairs and 1809 tensors from /mnt/raid/hf/GLM-5.1-GGUF/IQ3_KS/GLM-5.1-IQ3_KS-00001-of-00008.gguf (version GGUF V3 (latest))
|
| 26 |
+
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
|
| 27 |
+
llama_model_loader: - kv 0: general.architecture str = glm-dsa
|
| 28 |
+
llama_model_loader: - kv 1: general.type str = model
|
| 29 |
+
llama_model_loader: - kv 2: general.sampling.top_p f32 = 0.950000
|
| 30 |
+
llama_model_loader: - kv 3: general.sampling.temp f32 = 1.000000
|
| 31 |
+
llama_model_loader: - kv 4: general.name str = GLM 5.1
|
| 32 |
+
llama_model_loader: - kv 5: general.version str = 5.1
|
| 33 |
+
llama_model_loader: - kv 6: general.basename str = GLM
|
| 34 |
+
llama_model_loader: - kv 7: general.size_label str = 256x22B
|
| 35 |
+
llama_model_loader: - kv 8: general.license str = mit
|
| 36 |
+
llama_model_loader: - kv 9: general.tags arr[str,1] = ["text-generation"]
|
| 37 |
+
llama_model_loader: - kv 10: general.languages arr[str,2] = ["en", "zh"]
|
| 38 |
+
llama_model_loader: - kv 11: glm-dsa.block_count u32 = 79
|
| 39 |
+
llama_model_loader: - kv 12: glm-dsa.context_length u32 = 202752
|
| 40 |
+
llama_model_loader: - kv 13: glm-dsa.embedding_length u32 = 6144
|
| 41 |
+
llama_model_loader: - kv 14: glm-dsa.feed_forward_length u32 = 12288
|
| 42 |
+
llama_model_loader: - kv 15: glm-dsa.attention.head_count u32 = 64
|
| 43 |
+
llama_model_loader: - kv 16: glm-dsa.attention.head_count_kv u32 = 1
|
| 44 |
+
llama_model_loader: - kv 17: glm-dsa.rope.freq_base f32 = 1000000.000000
|
| 45 |
+
llama_model_loader: - kv 18: glm-dsa.attention.layer_norm_rms_epsilon f32 = 0.000010
|
| 46 |
+
llama_model_loader: - kv 19: glm-dsa.expert_used_count u32 = 8
|
| 47 |
+
llama_model_loader: - kv 20: glm-dsa.expert_group_count u32 = 1
|
| 48 |
+
llama_model_loader: - kv 21: glm-dsa.expert_group_used_count u32 = 1
|
| 49 |
+
llama_model_loader: - kv 22: glm-dsa.expert_gating_func u32 = 2
|
| 50 |
+
llama_model_loader: - kv 23: general.file_type u32 = 154
|
| 51 |
+
llama_model_loader: - kv 24: glm-dsa.leading_dense_block_count u32 = 3
|
| 52 |
+
llama_model_loader: - kv 25: glm-dsa.vocab_size u32 = 154880
|
| 53 |
+
llama_model_loader: - kv 26: glm-dsa.attention.q_lora_rank u32 = 2048
|
| 54 |
+
llama_model_loader: - kv 27: glm-dsa.attention.kv_lora_rank u32 = 512
|
| 55 |
+
llama_model_loader: - kv 28: glm-dsa.attention.key_length u32 = 576
|
| 56 |
+
llama_model_loader: - kv 29: glm-dsa.attention.value_length u32 = 512
|
| 57 |
+
llama_model_loader: - kv 30: glm-dsa.attention.key_length_mla u32 = 256
|
| 58 |
+
llama_model_loader: - kv 31: glm-dsa.attention.value_length_mla u32 = 256
|
| 59 |
+
llama_model_loader: - kv 32: glm-dsa.expert_feed_forward_length u32 = 2048
|
| 60 |
+
llama_model_loader: - kv 33: glm-dsa.expert_count u32 = 256
|
| 61 |
+
llama_model_loader: - kv 34: glm-dsa.expert_shared_count u32 = 1
|
| 62 |
+
llama_model_loader: - kv 35: glm-dsa.expert_weights_scale f32 = 2.500000
|
| 63 |
+
llama_model_loader: - kv 36: glm-dsa.expert_weights_norm bool = true
|
| 64 |
+
llama_model_loader: - kv 37: glm-dsa.rope.dimension_count u32 = 64
|
| 65 |
+
llama_model_loader: - kv 38: glm-dsa.nextn_predict_layers u32 = 1
|
| 66 |
+
llama_model_loader: - kv 39: glm-dsa.attention.indexer.head_count u32 = 32
|
| 67 |
+
llama_model_loader: - kv 40: glm-dsa.attention.indexer.key_length u32 = 128
|
| 68 |
+
llama_model_loader: - kv 41: glm-dsa.attention.indexer.top_k u32 = 2048
|
| 69 |
+
llama_model_loader: - kv 42: general.quantization_version u32 = 2
|
| 70 |
+
llama_model_loader: - kv 43: tokenizer.ggml.model str = gpt2
|
| 71 |
+
llama_model_loader: - kv 44: tokenizer.ggml.pre str = glm4
|
| 72 |
+
llama_model_loader: - kv 45: tokenizer.ggml.tokens arr[str,154880] = ["!", "\"", "#", "$", "%", "&", "'", ...
|
| 73 |
+
llama_model_loader: - kv 46: tokenizer.ggml.token_type arr[i32,154880] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
|
| 74 |
+
llama_model_loader: - kv 47: tokenizer.ggml.merges arr[str,321649] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
|
| 75 |
+
llama_model_loader: - kv 48: tokenizer.ggml.eos_token_id u32 = 154820
|
| 76 |
+
llama_model_loader: - kv 49: tokenizer.ggml.padding_token_id u32 = 154820
|
| 77 |
+
llama_model_loader: - kv 50: tokenizer.ggml.bos_token_id u32 = 154822
|
| 78 |
+
llama_model_loader: - kv 51: tokenizer.ggml.eot_token_id u32 = 154827
|
| 79 |
+
llama_model_loader: - kv 52: tokenizer.ggml.unknown_token_id u32 = 154820
|
| 80 |
+
llama_model_loader: - kv 53: tokenizer.ggml.eom_token_id u32 = 154829
|
| 81 |
+
llama_model_loader: - kv 54: tokenizer.chat_template str = [gMASK]<sop>\n{%- if tools -%}\n{%- mac...
|
| 82 |
+
llama_model_loader: - kv 55: quantize.imatrix.file str = /mnt/data/models/ubergarm/GLM-5.1-GGU...
|
| 83 |
+
llama_model_loader: - kv 56: quantize.imatrix.dataset str = ubergarm-imatrix-calibration-corpus-v...
|
| 84 |
+
llama_model_loader: - kv 57: quantize.imatrix.entries_count i32 = 1003
|
| 85 |
+
llama_model_loader: - kv 58: quantize.imatrix.chunks_count i32 = 813
|
| 86 |
+
llama_model_loader: - kv 59: split.no u16 = 0
|
| 87 |
+
llama_model_loader: - kv 60: split.count u16 = 8
|
| 88 |
+
llama_model_loader: - kv 61: split.tensors.count i32 = 1809
llama_model_loader: - type f32: 630 tensors
llama_model_loader: - type q8_0: 396 tensors
llama_model_loader: - type iq4_k: 1 tensors
llama_model_loader: - type iq6_k: 317 tensors
llama_model_loader: - type iq4_ks: 75 tensors
llama_model_loader: - type iq5_ks: 240 tensors
llama_model_loader: - type iq3_ks: 150 tensors
load: 0 unused tokens
load: special_eot_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special_eom_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load: - 154820 ('<|endoftext|>')
load: - 154827 ('<|user|>')
load: - 154829 ('<|observation|>')
load: special tokens cache size = 36
load: token to piece cache size = 0.9811 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = glm-dsa
llm_load_print_meta: n_ctx_train = 202752
llm_load_print_meta: n_embd = 6144
llm_load_print_meta: n_layer = 79
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 64
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_swa_pattern = 1
llm_load_print_meta: n_embd_head_k = 256
llm_load_print_meta: n_embd_head_v = 256
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 16384
llm_load_print_meta: n_embd_v_gqa = 16384
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 12288
llm_load_print_meta: n_expert = 256
llm_load_print_meta: n_expert_used = 8
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 202752
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_n_group = 0
llm_load_print_meta: model type = 744B.A40B
llm_load_print_meta: model ftype = IQ3_KS - 3.1875 bpw
llm_load_print_meta: model params = 753.864 B
llm_load_print_meta: model size = 320.216 GiB (3.649 BPW)
llm_load_print_meta: repeating layers = 318.983 GiB (3.644 BPW, 751.961 B parameters)
llm_load_print_meta: general.name = GLM 5.1
llm_load_print_meta: n_layer_dense_lead = 3
llm_load_print_meta: n_lora_q = 2048
llm_load_print_meta: n_lora_kv = 512
llm_load_print_meta: n_ff_exp = 2048
llm_load_print_meta: n_expert_shared = 1
llm_load_print_meta: expert_weights_scale = 2.5
llm_load_print_meta: expert_weights_norm = 1
llm_load_print_meta: expert_gating_func = sigmoid
llm_load_print_meta: rope_yarn_log_mul = 0.0000
print_info: vocab type = BPE
print_info: n_vocab = 154880
print_info: n_merges = 321649
print_info: BOS token = 154822 '[gMASK]'
print_info: EOS token = 154820 '<|endoftext|>'
print_info: EOT token = 154827 '<|user|>'
print_info: EOM token = 154829 '<|observation|>'
print_info: UNK token = 154820 '<|endoftext|>'
print_info: PAD token = 154820 '<|endoftext|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 154838 '<|code_prefix|>'
print_info: FIM SUF token = 154840 '<|code_suffix|>'
print_info: FIM MID token = 154839 '<|code_middle|>'
print_info: EOG token = 154820 '<|endoftext|>'
print_info: EOG token = 154827 '<|user|>'
print_info: EOG token = 154829 '<|observation|>'
print_info: max token length = 1024
======================================= HAVE_FANCY_SIMD is defined
Free memory 0 MiB on device 0 is less the 1024 MiB safety margin
------------------- Layer sizes:
Layer 0: 334.94, 4.50, 339.44 672.00 MiB
Layer 1: 334.94, 4.50, 339.44 672.00 MiB
Layer 2: 334.94, 4.50, 339.44 672.00 MiB
Layer 3: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 4: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 5: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 6: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 7: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 8: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 9: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 10: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 11: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 12: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 13: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 14: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 15: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 16: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 17: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 18: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 19: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 20: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 21: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 22: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 23: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 24: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 25: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 26: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 27: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 28: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 29: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 30: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 31: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 32: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 33: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 34: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 35: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 36: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 37: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 38: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 39: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 40: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 41: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 42: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 43: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 44: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 45: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 46: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 47: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 48: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 49: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 50: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 51: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 52: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 53: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 54: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 55: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 56: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 57: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 58: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 59: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 60: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 61: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 62: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 63: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 64: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 65: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 66: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 67: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 68: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 69: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 70: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 71: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 72: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 73: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 74: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 75: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 76: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 77: 4310.74, 4.50, 4315.24 512.00 MiB
Layer 78: 0.00, 4.50, 4.50 96.00 MiB
Layer 79: 751.52, 1748.00, 2499.52 MiB (output layer)
--------------------------------------------------------------------------
Total : 324310.50, 2103.50, 326414.00 MiB
Free memory 0 MiB on device 0 is less the required compute buffer size 672 MiB
Memory required for model tensors + cache: 327166 MiB
Memory available on all devices - compute: 0 MiB
llm_load_tensors: ggml ctx size = 0.77 MiB
model has unused tensor blk.78.attn_norm.weight (size = 24576 bytes) -- ignoring
model has unused tensor blk.78.attn_q_a_norm.weight (size = 8192 bytes) -- ignoring
model has unused tensor blk.78.attn_kv_a_norm.weight (size = 2048 bytes) -- ignoring
model has unused tensor blk.78.attn_q_a.weight (size = 10420224 bytes) -- ignoring
model has unused tensor blk.78.attn_q_b.weight (size = 27787264 bytes) -- ignoring
model has unused tensor blk.78.attn_kv_a_mqa.weight (size = 3760128 bytes) -- ignoring
model has unused tensor blk.78.attn_output.weight (size = 83361792 bytes) -- ignoring
model has unused tensor blk.78.indexer.k_norm.weight (size = 512 bytes) -- ignoring
model has unused tensor blk.78.indexer.k_norm.bias (size = 512 bytes) -- ignoring
model has unused tensor blk.78.indexer.proj.weight (size = 208896 bytes) -- ignoring
model has unused tensor blk.78.indexer.attn_k.weight (size = 835584 bytes) -- ignoring
model has unused tensor blk.78.indexer.attn_q_b.weight (size = 6946816 bytes) -- ignoring
model has unused tensor blk.78.ffn_norm.weight (size = 24576 bytes) -- ignoring
model has unused tensor blk.78.ffn_gate_inp.weight (size = 6291456 bytes) -- ignoring
model has unused tensor blk.78.exp_probs_b.bias (size = 1024 bytes) -- ignoring
model has unused tensor blk.78.ffn_gate_exps.weight (size = 2116026368 bytes) -- ignoring
model has unused tensor blk.78.ffn_down_exps.weight (size = 2120220672 bytes) -- ignoring
model has unused tensor blk.78.ffn_up_exps.weight (size = 2116026368 bytes) -- ignoring
model has unused tensor blk.78.ffn_gate_shexp.weight (size = 8265728 bytes) -- ignoring
model has unused tensor blk.78.ffn_down_shexp.weight (size = 8282112 bytes) -- ignoring
model has unused tensor blk.78.ffn_up_shexp.weight (size = 8265728 bytes) -- ignoring
model has unused tensor blk.78.nextn.eh_proj.weight (size = 80216064 bytes) -- ignoring
model has unused tensor blk.78.nextn.enorm.weight (size = 24576 bytes) -- ignoring
model has unused tensor blk.78.nextn.hnorm.weight (size = 24576 bytes) -- ignoring
model has unused tensor blk.78.nextn.shared_head_norm.weight (size = 24576 bytes) -- ignoring
llm_load_tensors: offloading 0 repeating layers to GPU
llm_load_tensors: offloaded 0/80 layers to GPU
llm_load_tensors: CPU buffer size = 321609.39 MiB
....................................................................................................
============ llm_prepare_mla: need to compute 79 wkv_b tensors
================= Adjusted mainline llama.cpp MLA tensors to ik_llama.cpp
Computed blk.0.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.1.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.2.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.3.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.4.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.5.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.6.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.7.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.8.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.9.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.10.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.11.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.12.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.13.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.14.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.15.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.16.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.17.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.18.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.19.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.20.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.21.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.22.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.23.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.24.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.25.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.26.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.27.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.28.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.29.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.30.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.31.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.32.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.33.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.34.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.35.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.36.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.37.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.38.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.39.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.40.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.41.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.42.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.43.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.44.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.45.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.46.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
llama_init_from_model: n_ctx = 4096
llama_init_from_model: n_batch = 4096
llama_init_from_model: n_ubatch = 4096
llama_init_from_model: flash_attn = 1
llama_init_from_model: mla_attn = 3
llama_init_from_model: attn_max_b = 0
llama_init_from_model: fused_moe = 1
llama_init_from_model: grouped er = 0
llama_init_from_model: fused_up_gate = 1
llama_init_from_model: fused_mmad = 1
llama_init_from_model: rope_cache = 0
llama_init_from_model: graph_reuse = 1
llama_init_from_model: k_cache_hadam = 0
llama_init_from_model: v_cache_hadam = 0
llama_init_from_model: split_mode_graph_scheduling = 0
llama_init_from_model: reduce_type = f16
llama_init_from_model: sched_async = 0
llama_init_from_model: ser = -1, 0
llama_init_from_model: freq_base = 1000000.0
llama_init_from_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 351.00 MiB
llama_init_from_model: KV self size = 351.00 MiB, c^KV (f16): 351.00 MiB, kv^T: not used
llama_init_from_model: CPU output buffer size = 4.73 MiB
llama_init_from_model: CPU compute buffer size = 2516.00 MiB
llama_init_from_model: graph nodes = 4166
llama_init_from_model: graph splits = 1
llama_init_from_model: enabling only_active_experts scheduling

system_info: n_threads = 96 (n_threads_batch = 128) / 512 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
perplexity: tokenizing the input ..
perplexity: tokenization took 359.387 ms
perplexity: calculating perplexity over 565 chunks, n_ctx=512, batch_size=4096, n_seq=8
perplexity: 19.32 seconds per pass - ETA 22.73 minutes
Computed blk.47.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.48.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.49.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.50.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.51.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.52.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.53.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.54.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.55.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.56.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.57.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.58.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.59.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.60.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.61.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.62.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.63.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.64.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.65.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.66.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.67.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.68.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.69.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.70.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.71.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.72.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.73.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.74.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.75.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.76.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.77.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.78.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
[1]1.4331,[2]2.2004,[3]1.8938,[4]1.6767,[5]1.5680,[6]1.5058,[7]1.4925,[8]1.4510,[9]1.4590,[10]1.4351,[11]1.4173,[12]1.4620,[13]1.4598,[14]1.5199,[15]1.6268,[16]1.7334,[17]1.8474,[18]2.0046,[19]1.9972,[20]1.9829,[21]2.0651,[22]2.0783,[23]2.0643,[24]2.0493,[25]2.0403,[26]2.0342,[27]2.0467,[28]2.0759,[29]2.1011,[30]2.1610,[31]2.2134,[32]2.2592,[33]2.3117,[34]2.3456,[35]2.4001,[36]2.4386,[37]2.4675,[38]2.5563,[39]2.5970,[40]2.6424,[41]2.7125,[42]2.7030,[43]2.7198,[44]2.7520,[45]2.8254,[46]2.8789,[47]2.8404,[48]2.8017,[49]2.7767,[50]2.7667,[51]2.7866,[52]2.8179,[53]2.8601,[54]2.8893,[55]2.9185,[56]2.9490,[57]2.9511,[58]2.9773,[59]2.9916,[60]3.0310,[61]3.0696,[62]3.1173,[63]3.1538,[64]3.1797,[65]3.1970,[66]3.1978,[67]3.1750,[68]3.1586,[69]3.1812,[70]3.1703,[71]3.1546,[72]3.1523,[73]3.1555,[74]3.1816,[75]3.1852,[76]3.1490,[77]3.1165,[78]3.0838,[79]3.0536,[80]3.0275,[81]3.0022,[82]2.9891,[83]2.9893,[84]2.9675,[85]2.9598,[86]2.9506,[87]2.9404,[88]2.9250,[89]2.9053,[90]2.8924,[91]2.8715,[92]2.8526,[93]2.8454,[94]2.8319,[95]2.8156,[96]2.8130,[97]2.8238,[98]2.8193,[99]2.8048,[100]2.7844,[101]2.7949,[102]2.7778,[103]2.7655,[104]2.7585,[105]2.7732,[106]2.7984,[107]2.8512,[108]2.8620,[109]2.8736,[110]2.9099,[111]2.9347,[112]2.9119,[113]2.8969,[114]2.9006,[115]2.8984,[116]2.9025,[117]2.9049,[118]2.9099,[119]2.9149,[120]2.9122,[121]2.9034,[122]2.9066,[123]2.8911,[124]2.8906,[125]2.8921,[126]2.8910,[127]2.8908,[128]2.9059,[129]2.9128,[130]2.9148,[131]2.9282,[132]2.9288,[133]2.9258,[134]2.9395,[135]2.9570,[136]2.9507,[137]2.9454,[138]2.9429,[139]2.9312,[140]2.9450,[141]2.9455,[142]2.9389,[143]2.9369,[144]2.9374,[145]2.9358,[146]2.9293,[147]2.9168,[148]2.9125,[149]2.9079,[150]2.9048,[151]2.8960,[152]2.8940,[153]2.8975,[154]2.8960,[155]2.8957,[156]2.8987,[157]2.8998,[158]2.9015,[159]2.9114,[160]2.9207,[161]2.9250,[162]2.9174,[163]2.9086,[164]2.9135,[165]2.9060,[166]2.9032,[167]2.9187,[168]2.9183,[169]2.9435,[170]2.9613,[171]2.9735,[172]2.9920,[173]2.9821,[174]2.9684,[175]2.9550,[176]2.9426,[177]2.9279,[178]2.9133,[179]2.9003,[180]2.8875,[181]2.8826,[182]2.8980,[183]2.9151,[184]2.9400,[185]2.9574,[186]2.9668,[187]2.9867,[188]3.0110,[189]3.0335,[190]3.0500,[191]3.0649,[192]3.0738,[193]3.0807,[194]3.0849,[195]3.0824,[196]3.0883,[197]3.1033,[198]3.1175,[199]3.1177,[200]3.1242,[201]3.1255,[202]3.1292,[203]3.1277,[204]3.1366,[205]3.1446,[206]3.1506,[207]3.1565,[208]3.1568,[209]3.1588,[210]3.1560,[211]3.1595,[212]3.1612,[213]3.1652,[214]3.1704,[215]3.1734,[216]3.1782,[217]3.1826,[218]3.1912,[219]3.1866,[220]3.1852,[221]3.1830,[222]3.1850,[223]3.1842,[224]3.1909,[225]3.1927,[226]3.1987,[227]3.1955,[228]3.1936,[229]3.1864,[230]3.1786,[231]3.1761,[232]3.1775,[233]3.1780,[234]3.1705,[235]3.1604,[236]3.1536,[237]3.1461,[238]3.1490,[239]3.1637,[240]3.1794,[241]3.1927,[242]3.2025,[243]3.2153,[244]3.2281,[245]3.2420,[246]3.2541,[247]3.2690,[248]3.2807,[249]3.2821,[250]3.2830,[251]3.2705,[252]3.2606,[253]3.2522,[254]3.2498,[255]3.2516,[256]3.2510,[257]3.2457,[258]3.2436,[259]3.2332,[260]3.2272,[261]3.2195,[262]3.2132,[263]3.2060,[264]3.2016,[265]3.1976,[266]3.1948,[267]3.1876,[268]3.1808,[269]3.1768,[270]3.1744,[271]3.1720,[272]3.1670,[273]3.1649,[274]3.1563,[275]3.1477,[276]3.1371,[277]3.1272,[278]3.1176,[279]3.1190,[280]3.1222,[281]3.1260,[282]3.1321,[283]3.1377,[284]3.1401,[285]3.1426,[286]3.1499,[287]3.1606,[288]3.1615,[289]3.1631,[290]3.1666,[291]3.1693,[292]3.1644,[293]3.1555,[294]3.1512,[295]3.1511,[296]3.1455,[297]3.1415,[298]3.1389,[299]3.1356,[300]3.1352,[301]3.1345,[302]3.1307,[303]3.1291,[304]3.1256,[305]3.1206,[30
6]3.1160,[307]3.1180,[308]3.1233,[309]3.1344,[310]3.1239,[311]3.1183,[312]3.1120,[313]3.1095,[314]3.1052,[315]3.1056,[316]3.1043,[317]3.1042,[318]3.1033,[319]3.0997,[320]3.0981,[321]3.1008,[322]3.1022,[323]3.0959,[324]3.0932,[325]3.0928,[326]3.0902,[327]3.0932,[328]3.0907,[329]3.0906,[330]3.0895,[331]3.0859,[332]3.0873,[333]3.0903,[334]3.0934,[335]3.0936,[336]3.0937,[337]3.0945,[338]3.0944,[339]3.0946,[340]3.0972,[341]3.0999,[342]3.1018,[343]3.1072,[344]3.1124,[345]3.1219,[346]3.1218,[347]3.1137,[348]3.1073,[349]3.1026,[350]3.0957,[351]3.0879,[352]3.0844,[353]3.0800,[354]3.0747,[355]3.0697,[356]3.0658,[357]3.0610,[358]3.0565,[359]3.0556,[360]3.0513,[361]3.0448,[362]3.0391,[363]3.0345,[364]3.0332,[365]3.0290,[366]3.0271,[367]3.0224,[368]3.0173,[369]3.0120,[370]3.0109,[371]3.0074,[372]3.0078,[373]3.0074,[374]3.0086,[375]3.0065,[376]3.0022,[377]2.9979,[378]2.9970,[379]2.9997,[380]2.9946,[381]2.9910,[382]2.9874,[383]2.9910,[384]2.9975,[385]3.0025,[386]3.0112,[387]3.0157,[388]3.0223,[389]3.0302,[390]3.0323,[391]3.0246,[392]3.0182,[393]3.0110,[394]3.0103,[395]3.0050,[396]3.0005,[397]2.9940,[398]2.9871,[399]2.9816,[400]2.9750,[401]2.9680,[402]2.9619,[403]2.9551,[404]2.9475,[405]2.9418,[406]2.9343,[407]2.9279,[408]2.9209,[409]2.9151,[410]2.9086,[411]2.9028,[412]2.8985,[413]2.8944,[414]2.8926,[415]2.8897,[416]2.8869,[417]2.8813,[418]2.8751,[419]2.8804,[420]2.8755,[421]2.8733,[422]2.8751,[423]2.8720,[424]2.8670,[425]2.8631,[426]2.8603,[427]2.8582,[428]2.8547,[429]2.8500,[430]2.8464,[431]2.8467,[432]2.8428,[433]2.8381,[434]2.8342,[435]2.8304,[436]2.8249,[437]2.8191,[438]2.8149,[439]2.8138,[440]2.8106,[441]2.8089,[442]2.8048,[443]2.8103,[444]2.8180,[445]2.8154,[446]2.8154,[447]2.8175,[448]2.8200,[449]2.8263,[450]2.8279,[451]2.8303,[452]2.8345,[453]2.8422,[454]2.8480,[455]2.8510,[456]2.8565,[457]2.8548,[458]2.8584,[459]2.8607,[460]2.8673,[461]2.8729,[462]2.8762,[463]2.8766,[464]2.8752,[465]2.8746,[466]2.8803,[467]2.8795,[468]2.8769,[469]2.8829,[470]2.8847,[471]2.8873,[472]2.8909,[473]2.8925,[474]2.8939,[475]2.8958,[476]2.8985,[477]2.9017,[478]2.9042,[479]2.9068,[480]2.9092,[481]2.9123,[482]2.9148,[483]2.9181,[484]2.9150,[485]2.9195,[486]2.9216,[487]2.9281,[488]2.9329,[489]2.9385,[490]2.9379,[491]2.9434,[492]2.9484,[493]2.9523,[494]2.9573,[495]2.9632,[496]2.9632,[497]2.9646,[498]2.9668,[499]2.9690,[500]2.9722,[501]2.9734,[502]2.9751,[503]2.9804,[504]2.9863,[505]2.9872,[506]2.9873,[507]2.9890,[508]2.9925,[509]2.9984,[510]3.0014,[511]3.0063,[512]3.0001,[513]2.9955,[514]2.9904,[515]2.9871,[516]2.9846,[517]2.9824,[518]2.9787,[519]2.9740,[520]2.9727,[521]2.9699,[522]2.9652,[523]2.9622,[524]2.9648,[525]2.9621,[526]2.9591,[527]2.9596,[528]2.9578,[529]2.9542,[530]2.9510,[531]2.9484,[532]2.9473,[533]2.9446,[534]2.9433,[535]2.9406,[536]2.9386,[537]2.9335,[538]2.9298,[539]2.9261,[540]2.9263,[541]2.9265,[542]2.9241,[543]2.9228,[544]2.9226,[545]2.9209,[546]2.9211,[547]2.9185,[548]2.9155,[549]2.9125,[550]2.9079,[551]2.9026,[552]2.8981,[553]2.8936,[554]2.8897,[555]2.8851,[556]2.8813,[557]2.8769,[558]2.8769,[559]2.8734,[560]2.8725,[561]2.8732,[562]2.8744,[563]2.8780,[564]2.8802,[565]2.8780,
llama_print_timings: load time = 78600.20 ms
llama_print_timings: sample time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: prompt eval time = 1306199.85 ms / 289280 tokens ( 4.52 ms per token, 221.47 tokens per second)
llama_print_timings: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: total time = 1316462.98 ms / 289281 tokens

Final estimate: PPL over 565 chunks for n_ctx=512 = 2.8780 +/- 0.01609
logs/perplexity-GLM-5.1-Q8_0.log
ADDED
@@ -0,0 +1,397 @@
model=/mnt/data/models/ubergarm/GLM-5.1-GGUF/GLM-5.1-Q8_0.gguf

numactl -N "$SOCKET" -m "$SOCKET" \
./build/bin/llama-perplexity \
-m "$model" \
-f wiki.test.raw \
--seed 1337 \
--ctx-size 512 \
-mla 3 \
-muge \
--merge-qkv \
-ub 4096 -b 4096 \
--no-mmap \
--numa numactl \
--threads 96 \
--threads-batch 128 \
--validate-quants

SOCKET is set to: 1
main: build = 4391 (67fc9c5e)
main: built with cc (Ubuntu 13.3.0-6ubuntu2~24.04.1) 13.3.0 for x86_64-linux-gnu
main: seed = 1337
CPU: using device CPU - 0 MiB free
llama_model_loader: loaded meta data with 55 key-value pairs and 1809 tensors from /mnt/data/models/ubergarm/GLM-5.1-GGUF/GLM-5.1-Q8_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = glm-dsa
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.sampling.top_p f32 = 0.950000
llama_model_loader: - kv 3: general.sampling.temp f32 = 1.000000
llama_model_loader: - kv 4: general.name str = GLM 5.1
llama_model_loader: - kv 5: general.version str = 5.1
llama_model_loader: - kv 6: general.basename str = GLM
llama_model_loader: - kv 7: general.size_label str = 256x22B
llama_model_loader: - kv 8: general.license str = mit
llama_model_loader: - kv 9: general.tags arr[str,1] = ["text-generation"]
llama_model_loader: - kv 10: general.languages arr[str,2] = ["en", "zh"]
llama_model_loader: - kv 11: glm-dsa.block_count u32 = 79
llama_model_loader: - kv 12: glm-dsa.context_length u32 = 202752
llama_model_loader: - kv 13: glm-dsa.embedding_length u32 = 6144
llama_model_loader: - kv 14: glm-dsa.feed_forward_length u32 = 12288
llama_model_loader: - kv 15: glm-dsa.attention.head_count u32 = 64
llama_model_loader: - kv 16: glm-dsa.attention.head_count_kv u32 = 1
llama_model_loader: - kv 17: glm-dsa.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 18: glm-dsa.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 19: glm-dsa.expert_used_count u32 = 8
llama_model_loader: - kv 20: glm-dsa.expert_group_count u32 = 1
llama_model_loader: - kv 21: glm-dsa.expert_group_used_count u32 = 1
llama_model_loader: - kv 22: glm-dsa.expert_gating_func u32 = 2
llama_model_loader: - kv 23: general.file_type u32 = 7
llama_model_loader: - kv 24: glm-dsa.leading_dense_block_count u32 = 3
llama_model_loader: - kv 25: glm-dsa.vocab_size u32 = 154880
llama_model_loader: - kv 26: glm-dsa.attention.q_lora_rank u32 = 2048
llama_model_loader: - kv 27: glm-dsa.attention.kv_lora_rank u32 = 512
llama_model_loader: - kv 28: glm-dsa.attention.key_length u32 = 576
llama_model_loader: - kv 29: glm-dsa.attention.value_length u32 = 512
llama_model_loader: - kv 30: glm-dsa.attention.key_length_mla u32 = 256
llama_model_loader: - kv 31: glm-dsa.attention.value_length_mla u32 = 256
llama_model_loader: - kv 32: glm-dsa.expert_feed_forward_length u32 = 2048
llama_model_loader: - kv 33: glm-dsa.expert_count u32 = 256
llama_model_loader: - kv 34: glm-dsa.expert_shared_count u32 = 1
llama_model_loader: - kv 35: glm-dsa.expert_weights_scale f32 = 2.500000
llama_model_loader: - kv 36: glm-dsa.expert_weights_norm bool = true
llama_model_loader: - kv 37: glm-dsa.rope.dimension_count u32 = 64
llama_model_loader: - kv 38: glm-dsa.nextn_predict_layers u32 = 1
llama_model_loader: - kv 39: glm-dsa.attention.indexer.head_count u32 = 32
llama_model_loader: - kv 40: glm-dsa.attention.indexer.key_length u32 = 128
llama_model_loader: - kv 41: glm-dsa.attention.indexer.top_k u32 = 2048
llama_model_loader: - kv 42: general.quantization_version u32 = 2
llama_model_loader: - kv 43: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 44: tokenizer.ggml.pre str = glm4
llama_model_loader: - kv 45: tokenizer.ggml.tokens arr[str,154880] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 46: tokenizer.ggml.token_type arr[i32,154880] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 47: tokenizer.ggml.merges arr[str,321649] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 48: tokenizer.ggml.eos_token_id u32 = 154820
llama_model_loader: - kv 49: tokenizer.ggml.padding_token_id u32 = 154820
llama_model_loader: - kv 50: tokenizer.ggml.bos_token_id u32 = 154822
llama_model_loader: - kv 51: tokenizer.ggml.eot_token_id u32 = 154827
llama_model_loader: - kv 52: tokenizer.ggml.unknown_token_id u32 = 154820
llama_model_loader: - kv 53: tokenizer.ggml.eom_token_id u32 = 154829
llama_model_loader: - kv 54: tokenizer.chat_template str = [gMASK]<sop>\n{%- if tools -%}\n{%- mac...
llama_model_loader: - type f32: 630 tensors
llama_model_loader: - type q8_0: 1179 tensors
load: 0 unused tokens
load: special_eot_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special_eom_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load: - 154820 ('<|endoftext|>')
load: - 154827 ('<|user|>')
load: - 154829 ('<|observation|>')
load: special tokens cache size = 36
load: token to piece cache size = 0.9811 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = glm-dsa
llm_load_print_meta: n_ctx_train = 202752
llm_load_print_meta: n_embd = 6144
llm_load_print_meta: n_layer = 79
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 64
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_swa_pattern = 1
llm_load_print_meta: n_embd_head_k = 256
llm_load_print_meta: n_embd_head_v = 256
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 16384
llm_load_print_meta: n_embd_v_gqa = 16384
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 12288
llm_load_print_meta: n_expert = 256
llm_load_print_meta: n_expert_used = 8
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 202752
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_n_group = 0
llm_load_print_meta: model type = 744B.A40B
llm_load_print_meta: model ftype = Q8_0
llm_load_print_meta: model params = 753.864 B
llm_load_print_meta: model size = 746.302 GiB (8.504 BPW)
llm_load_print_meta: repeating layers = 744.419 GiB (8.504 BPW, 751.961 B parameters)
llm_load_print_meta: general.name = GLM 5.1
llm_load_print_meta: n_layer_dense_lead = 3
llm_load_print_meta: n_lora_q = 2048
llm_load_print_meta: n_lora_kv = 512
llm_load_print_meta: n_ff_exp = 2048
llm_load_print_meta: n_expert_shared = 1
llm_load_print_meta: expert_weights_scale = 2.5
llm_load_print_meta: expert_weights_norm = 1
llm_load_print_meta: expert_gating_func = sigmoid
llm_load_print_meta: rope_yarn_log_mul = 0.0000
print_info: vocab type = BPE
print_info: n_vocab = 154880
print_info: n_merges = 321649
print_info: BOS token = 154822 '[gMASK]'
print_info: EOS token = 154820 '<|endoftext|>'
print_info: EOT token = 154827 '<|user|>'
print_info: EOM token = 154829 '<|observation|>'
print_info: UNK token = 154820 '<|endoftext|>'
print_info: PAD token = 154820 '<|endoftext|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 154838 '<|code_prefix|>'
print_info: FIM SUF token = 154840 '<|code_suffix|>'
print_info: FIM MID token = 154839 '<|code_middle|>'
print_info: EOG token = 154820 '<|endoftext|>'
print_info: EOG token = 154827 '<|user|>'
print_info: EOG token = 154829 '<|observation|>'
print_info: max token length = 1024
======================================= HAVE_FANCY_SIMD is defined
Free memory 0 MiB on device 0 is less the 1024 MiB safety margin
------------------- Layer sizes:
Layer 0: 457.26, 4.50, 461.76 672.00 MiB
Layer 1: 457.26, 4.50, 461.76 672.00 MiB
Layer 2: 457.26, 4.50, 461.76 672.00 MiB
Layer 3: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 4: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 5: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 6: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 7: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 8: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 9: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 10: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 11: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 12: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 13: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 14: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 15: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 16: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 17: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 18: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 19: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 20: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 21: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 22: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 23: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 24: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 25: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 26: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 27: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 28: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 29: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 30: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 31: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 32: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 33: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 34: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 35: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 36: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 37: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 38: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 39: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 40: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 41: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 42: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 43: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 44: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 45: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 46: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 47: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 48: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 49: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 50: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 51: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 52: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 53: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 54: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 55: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 56: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 57: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 58: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 59: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 60: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 61: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 62: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 63: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 64: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 65: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 66: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 67: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 68: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 69: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 70: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 71: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 72: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 73: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 74: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 75: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 76: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 77: 10064.02, 4.50, 10068.52 512.00 MiB
Layer 78: 0.00, 4.50, 4.50 96.00 MiB
Layer 79: 964.22, 1748.00, 2712.22 MiB (output layer)
--------------------------------------------------------------------------
Total : 756172.97, 2103.50, 758276.47 MiB
Free memory 0 MiB on device 0 is less the required compute buffer size 672 MiB
Memory required for model tensors + cache: 759241 MiB
Memory available on all devices - compute: 0 MiB
llm_load_tensors: ggml ctx size = 0.77 MiB
model has unused tensor blk.78.attn_norm.weight (size = 24576 bytes) -- ignoring
model has unused tensor blk.78.attn_q_a_norm.weight (size = 8192 bytes) -- ignoring
model has unused tensor blk.78.attn_kv_a_norm.weight (size = 2048 bytes) -- ignoring
model has unused tensor blk.78.attn_q_b.weight (size = 35651584 bytes) -- ignoring
model has unused tensor blk.78.attn_output.weight (size = 106954752 bytes) -- ignoring
model has unused tensor blk.78.indexer.k_norm.weight (size = 512 bytes) -- ignoring
model has unused tensor blk.78.indexer.k_norm.bias (size = 512 bytes) -- ignoring
model has unused tensor blk.78.indexer.proj.weight (size = 208896 bytes) -- ignoring
model has unused tensor blk.78.indexer.attn_k.weight (size = 835584 bytes) -- ignoring
model has unused tensor blk.78.indexer.attn_q_b.weight (size = 8912896 bytes) -- ignoring
model has unused tensor blk.78.ffn_norm.weight (size = 24576 bytes) -- ignoring
model has unused tensor blk.78.ffn_gate_inp.weight (size = 6291456 bytes) -- ignoring
model has unused tensor blk.78.exp_probs_b.bias (size = 1024 bytes) -- ignoring
model has unused tensor blk.78.ffn_gate_exps.weight (size = 3422552064 bytes) -- ignoring
model has unused tensor blk.78.ffn_down_exps.weight (size = 3422552064 bytes) -- ignoring
model has unused tensor blk.78.ffn_up_exps.weight (size = 3422552064 bytes) -- ignoring
model has unused tensor blk.78.ffn_gate_shexp.weight (size = 13369344 bytes) -- ignoring
model has unused tensor blk.78.ffn_down_shexp.weight (size = 13369344 bytes) -- ignoring
model has unused tensor blk.78.ffn_up_shexp.weight (size = 13369344 bytes) -- ignoring
model has unused tensor blk.78.nextn.eh_proj.weight (size = 80216064 bytes) -- ignoring
model has unused tensor blk.78.nextn.enorm.weight (size = 24576 bytes) -- ignoring
model has unused tensor blk.78.nextn.hnorm.weight (size = 24576 bytes) -- ignoring
model has unused tensor blk.78.nextn.shared_head_norm.weight (size = 24576 bytes) -- ignoring
llm_load_tensors: offloading 0 repeating layers to GPU
llm_load_tensors: offloaded 0/80 layers to GPU
llm_load_tensors: CPU buffer size = 754154.64 MiB
....................................................................................................
============ llm_prepare_mla: need to compute 79 wkv_b tensors
================= Adjusted mainline llama.cpp MLA tensors to ik_llama.cpp
Computed blk.0.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.1.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.2.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.3.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.4.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.5.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.6.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.7.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.8.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.9.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.10.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.11.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.12.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.13.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.14.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.15.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.16.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.17.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.18.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.19.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.20.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.21.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.22.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.23.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.24.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.25.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.26.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.27.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.28.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.29.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.30.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.31.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.32.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.33.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.34.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.35.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.36.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.37.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.38.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.39.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.40.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.41.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.42.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.43.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.44.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.45.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.46.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
llama_init_from_model: n_ctx = 4096
llama_init_from_model: n_batch = 4096
llama_init_from_model: n_ubatch = 4096
llama_init_from_model: flash_attn = 1
llama_init_from_model: mla_attn = 3
llama_init_from_model: attn_max_b = 0
llama_init_from_model: fused_moe = 1
llama_init_from_model: grouped er = 0
llama_init_from_model: fused_up_gate = 1
llama_init_from_model: fused_mmad = 1
llama_init_from_model: rope_cache = 0
llama_init_from_model: graph_reuse = 1
llama_init_from_model: k_cache_hadam = 0
llama_init_from_model: v_cache_hadam = 0
llama_init_from_model: split_mode_graph_scheduling = 0
llama_init_from_model: reduce_type = f16
llama_init_from_model: sched_async = 0
llama_init_from_model: ser = -1, 0
llama_init_from_model: freq_base = 1000000.0
llama_init_from_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 351.00 MiB
llama_init_from_model: KV self size = 351.00 MiB, c^KV (f16): 351.00 MiB, kv^T: not used
llama_init_from_model: CPU output buffer size = 4.73 MiB
llama_init_from_model: CPU compute buffer size = 2516.00 MiB
llama_init_from_model: graph nodes = 4244
llama_init_from_model: graph splits = 1
llama_init_from_model: enabling only_active_experts scheduling

system_info: n_threads = 96 (n_threads_batch = 128) / 512 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
perplexity: tokenizing the input ..
perplexity: tokenization took 359.496 ms
perplexity: calculating perplexity over 565 chunks, n_ctx=512, batch_size=4096, n_seq=8
perplexity: 21.32 seconds per pass - ETA 25.08 minutes
Computed blk.47.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.48.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.49.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.50.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.51.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.52.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.53.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.54.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.55.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.56.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.57.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.58.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.59.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.60.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.61.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.62.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.63.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.64.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.65.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.66.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.67.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.68.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.69.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.70.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.71.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.72.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.73.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.74.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.75.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.76.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.77.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.78.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
[1]1.3196,[2]2.0523,[3]1.7723,[4]1.5867,[5]1.4832,[6]1.4261,[7]1.4065,[8]1.3695,[9]1.3695,[10]1.3438,[11]1.3295,[12]1.3788,[13]1.3757,[14]1.4375,[15]1.5409,[16]1.6456,[17]1.7633,[18]1.9061,[19]1.9043,[20]1.9001,[21]1.9769,[22]1.9924,[23]1.9799,[24]1.9645,[25]1.9548,[26]1.9469,[27]1.9573,[28]1.9875,[29]2.0134,[30]2.0704,[31]2.1252,[32]2.1667,[33]2.2150,[34]2.2451,[35]2.2954,[36]2.3339,[37]2.3629,[38]2.4490,[39]2.4871,[40]2.5314,[41]2.6003,[42]2.5934,[43]2.6123,[44]2.6470,[45]2.7187,[46]2.7710,[47]2.7312,[48]2.6919,[49]2.6637,[50]2.6512,[51]2.6710,[52]2.7017,[53]2.7395,[54]2.7685,[55]2.7966,[56]2.8264,[57]2.8296,[58]2.8534,[59]2.8678,[60]2.9057,[61]2.9406,[62]2.9877,[63]3.0234,[64]3.0526,[65]3.0695,[66]3.0672,[67]3.0413,[68]3.0248,[69]3.0474,[70]3.0338,[71]3.0180,[72]3.0158,[73]3.0200,[74]3.0446,[75]3.0473,[76]3.0118,[77]2.9781,[78]2.9456,[79]2.9174,[80]2.8904,[81]2.8673,[82]2.8543,[83]2.8564,[84]2.8313,[85]2.8184,[86]2.8121,[87]2.8008,[88]2.7839,[89]2.7627,[90]2.7500,[91]2.7309,[92]2.7096,[93]2.7017,[94]2.6872,[95]2.6699,[96]2.6632,[97]2.6721,[98]2.6646,[99]2.6490,[100]2.6292,[101]2.6403,[102]2.6232,[103]2.6127,[104]2.6062,[105]2.6175,[106]2.6418,[107]2.6920,[108]2.7030,[109]2.7152,[110]2.7504,[111]2.7746,[112]2.7534,[113]2.7398,[114]2.7417,[115]2.7398,[116]2.7426,[117]2.7442,[118]2.7490,[119]2.7548,[120]2.7529,[121]2.7454,[122]2.7477,[123]2.7338,[124]2.7338,[125]2.7354,[126]2.7355,[127]2.7350,[128]2.7500,[129]2.7570,[130]2.7578,[131]2.7709,[132]2.7729,[133]2.7707,[134]2.7844,[135]2.8032,[136]2.7960,[137]2.7926,[138]2.7902,[139]2.7781,[140]2.7917,[141]2.7931,[142]2.7873,[143]2.7862,[144]2.7874,[145]2.7866,[146]2.7810,[147]2.7669,[148]2.7617,[149]2.7571,[150]2.7540,[151]2.7464,[152]2.7455,[153]2.7492,[154]2.7483,[155]2.7475,[156]2.7510,[157]2.7536,[158]2.7556,[159]2.7655,[160]2.7753,[161]2.7814,[162]2.7730,[163]2.7625,[164]2.7667,[165]2.7579,[166]2.7525,[167]2.7652,[168]2.7654,[169]2.7886,[170]2.8055,[171]2.8166,[172]2.8340,[173]2.8247,[174]2.8120,[175]2.7987,[176]2.7871,[177]2.7736,[178]2.7600,[179]2.7492,[180]2.7373,[181]2.7325,[182]2.7471,[183]2.7650,[184]2.7897,[185]2.8070,[186]2.8170,[187]2.8368,[188]2.8608,[189]2.8822,[190]2.8988,[191]2.9136,[192]2.9222,[193]2.9293,[194]2.9333,[195]2.9314,[196]2.9346,[197]2.9481,[198]2.9622,[199]2.9620,[200]2.9682,[201]2.9704,[202]2.9737,[203]2.9722,[204]2.9812,[205]2.9894,[206]2.9960,[207]3.0024,[208]3.0030,[209]3.0050,[210]3.0017,[211]3.0056,[212]3.0076,[213]3.0112,[214]3.0165,[215]3.0200,[216]3.0252,[217]3.0292,[218]3.0379,[219]3.0327,[220]3.0315,[221]3.0293,[222]3.0312,[223]3.0309,[224]3.0370,[225]3.0389,[226]3.0454,[227]3.0428,[228]3.0417,[229]3.0335,[230]3.0246,[231]3.0208,[232]3.0223,[233]3.0215,[234]3.0144,[235]3.0043,[236]2.9976,[237]2.9897,[238]2.9921,[239]3.0066,[240]3.0213,[241]3.0337,[242]3.0437,[243]3.0562,[244]3.0688,[245]3.0829,[246]3.0943,[247]3.1084,[248]3.1197,[249]3.1214,[250]3.1224,[251]3.1106,[252]3.1012,[253]3.0934,[254]3.0903,[255]3.0922,[256]3.0920,[257]3.0869,[258]3.0848,[259]3.0748,[260]3.0693,[261]3.0623,[262]3.0563,[263]3.0495,[264]3.0455,[265]3.0413,[266]3.0383,[267]3.0309,[268]3.0246,[269]3.0206,[270]3.0188,[271]3.0161,[272]3.0109,[273]3.0089,[274]3.0009,[275]2.9934,[276]2.9831,[277]2.9752,[278]2.9659,[279]2.9669,[280]2.9697,[281]2.9732,[282]2.9784,[283]2.9836,[284]2.9856,[285]2.9878,[286]2.9948,[287]3.0057,[288]3.0068,[289]3.0086,[290]3.0124,[291]3.0155,[292]3.0108,[293]3.0022,[294]2.9960,[295]2.9955,[296]2.9897,[297]2.9860,[298]2.9825,[299]2.9787,[300]2.9776,[301]2.9761,[302]2.9729,[303]2.9712,[304]2.9678,[305]2.9626,[30
6]2.9581,[307]2.9604,[308]2.9658,[309]2.9767,[310]2.9674,[311]2.9618,[312]2.9552,[313]2.9517,[314]2.9478,[315]2.9476,[316]2.9460,[317]2.9453,[318]2.9443,[319]2.9415,[320]2.9392,[321]2.9422,[322]2.9437,[323]2.9379,[324]2.9343,[325]2.9336,[326]2.9315,[327]2.9339,[328]2.9319,[329]2.9320,[330]2.9312,[331]2.9272,[332]2.9287,[333]2.9314,[334]2.9348,[335]2.9354,[336]2.9360,[337]2.9372,[338]2.9371,[339]2.9376,[340]2.9402,[341]2.9431,[342]2.9453,[343]2.9506,[344]2.9554,[345]2.9646,[346]2.9647,[347]2.9571,[348]2.9508,[349]2.9455,[350]2.9394,[351]2.9323,[352]2.9293,[353]2.9257,[354]2.9202,[355]2.9146,[356]2.9104,[357]2.9048,[358]2.8996,[359]2.8981,[360]2.8936,[361]2.8872,[362]2.8809,[363]2.8756,[364]2.8737,[365]2.8691,[366]2.8671,[367]2.8626,[368]2.8569,[369]2.8514,[370]2.8496,[371]2.8454,[372]2.8452,[373]2.8438,[374]2.8442,[375]2.8417,[376]2.8372,[377]2.8332,[378]2.8311,[379]2.8324,[380]2.8271,[381]2.8238,[382]2.8205,[383]2.8241,[384]2.8305,[385]2.8356,[386]2.8436,[387]2.8481,[388]2.8548,[389]2.8623,[390]2.8645,[391]2.8576,[392]2.8516,[393]2.8449,[394]2.8446,[395]2.8389,[396]2.8344,[397]2.8283,[398]2.8215,[399]2.8161,[400]2.8103,[401]2.8039,[402]2.7981,[403]2.7917,[404]2.7848,[405]2.7790,[406]2.7722,[407]2.7660,[408]2.7594,[409]2.7545,[410]2.7485,[411]2.7431,[412]2.7392,[413]2.7354,[414]2.7340,[415]2.7311,[416]2.7287,[417]2.7237,[418]2.7182,[419]2.7234,[420]2.7188,[421]2.7163,[422]2.7181,[423]2.7152,[424]2.7106,[425]2.7067,[426]2.7042,[427]2.7023,[428]2.6991,[429]2.6948,[430]2.6909,[431]2.6917,[432]2.6879,[433]2.6837,[434]2.6801,[435]2.6765,[436]2.6713,[437]2.6659,[438]2.6612,[439]2.6606,[440]2.6572,[441]2.6555,[442]2.6512,[443]2.6569,[444]2.6641,[445]2.6619,[446]2.6615,[447]2.6636,[448]2.6658,[449]2.6722,[450]2.6740,[451]2.6764,[452]2.6805,[453]2.6881,[454]2.6938,[455]2.6967,[456]2.7022,[457]2.7007,[458]2.7045,[459]2.7067,[460]2.7130,[461]2.7190,[462]2.7224,[463]2.7228,[464]2.7214,[465]2.7208,[466]2.7264,[467]2.7257,[468]2.7230,[469]2.7287,[470]2.7306,[471]2.7333,[472]2.7367,[473]2.7384,[474]2.7401,[475]2.7423,[476]2.7453,[477]2.7486,[478]2.7509,[479]2.7537,[480]2.7561,[481]2.7595,[482]2.7620,[483]2.7651,[484]2.7622,[485]2.7665,[486]2.7685,[487]2.7748,[488]2.7797,[489]2.7851,[490]2.7847,[491]2.7901,[492]2.7947,[493]2.7988,[494]2.8037,[495]2.8095,[496]2.8094,[497]2.8109,[498]2.8130,[499]2.8154,[500]2.8188,[501]2.8197,[502]2.8215,[503]2.8268,[504]2.8325,[505]2.8333,[506]2.8335,[507]2.8353,[508]2.8389,[509]2.8447,[510]2.8479,[511]2.8526,[512]2.8470,[513]2.8425,[514]2.8376,[515]2.8343,[516]2.8318,[517]2.8296,[518]2.8260,[519]2.8216,[520]2.8202,[521]2.8173,[522]2.8131,[523]2.8098,[524]2.8126,[525]2.8101,[526]2.8067,[527]2.8066,[528]2.8043,[529]2.8007,[530]2.7976,[531]2.7949,[532]2.7943,[533]2.7919,[534]2.7907,[535]2.7883,[536]2.7866,[537]2.7816,[538]2.7776,[539]2.7740,[540]2.7738,[541]2.7737,[542]2.7714,[543]2.7699,[544]2.7694,[545]2.7671,[546]2.7671,[547]2.7642,[548]2.7616,[549]2.7587,[550]2.7543,[551]2.7494,[552]2.7454,[553]2.7413,[554]2.7373,[555]2.7328,[556]2.7292,[557]2.7248,[558]2.7247,[559]2.7219,[560]2.7208,[561]2.7213,[562]2.7221,[563]2.7256,[564]2.7277,[565]2.7257,
llama_print_timings: load time = 192558.13 ms
llama_print_timings: sample time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: prompt eval time = 1416065.80 ms / 289280 tokens ( 4.90 ms per token, 204.28 tokens per second)
llama_print_timings: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: total time = 1426160.62 ms / 289281 tokens

Final estimate: PPL over 565 chunks for n_ctx=512 = 2.7257 +/- 0.01498
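The `Final estimate` line at the end of each of these logs is the perplexity over all evaluated chunks, i.e. the exponential of the mean per-token negative log-likelihood, and the `+/-` figure is its standard error (lower is better). A minimal sketch of that relationship, using made-up toy values rather than anything from these logs:

```python
import math

# Toy per-token negative log-likelihoods (nats); the runs above
# average over 565 chunks x 512 tokens = 289,280 tokens.
nlls = [1.02, 0.88, 1.10, 0.95]

mean_nll = sum(nlls) / len(nlls)
ppl = math.exp(mean_nll)  # perplexity = exp(mean NLL)
print(f"PPL = {ppl:.4f}")
```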
logs/perplexity-GLM-5.1-smol-IQ1_KT.log ADDED
@@ -0,0 +1,409 @@
numactl -N "$SOCKET" -m "$SOCKET" \
./build/bin/llama-perplexity \
-m "$model" \
-f wiki.test.raw \
--seed 1337 \
--ctx-size 512 \
-mla 3 \
-muge \
--merge-qkv \
-ub 4096 -b 4096 \
--no-mmap \
--numa numactl \
--threads 96 \
--threads-batch 128 \
--validate-quants

SOCKET is set to: 0
main: build = 4391 (67fc9c5e)
main: built with cc (Ubuntu 13.3.0-6ubuntu2~24.04.1) 13.3.0 for x86_64-linux-gnu
main: seed = 1337
CPU: using device CPU - 0 MiB free
llama_model_loader: additional 4 GGUFs metadata loaded.
llama_model_loader: loaded meta data with 62 key-value pairs and 1809 tensors from /mnt/raid/hf/GLM-5.1-GGUF/smol-IQ1_KT/GLM-5.1-smol-IQ1_KT-00001-of-00005.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = glm-dsa
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.sampling.top_p f32 = 0.950000
llama_model_loader: - kv 3: general.sampling.temp f32 = 1.000000
llama_model_loader: - kv 4: general.name str = GLM 5.1
llama_model_loader: - kv 5: general.version str = 5.1
llama_model_loader: - kv 6: general.basename str = GLM
llama_model_loader: - kv 7: general.size_label str = 256x22B
llama_model_loader: - kv 8: general.license str = mit
llama_model_loader: - kv 9: general.tags arr[str,1] = ["text-generation"]
llama_model_loader: - kv 10: general.languages arr[str,2] = ["en", "zh"]
llama_model_loader: - kv 11: glm-dsa.block_count u32 = 79
llama_model_loader: - kv 12: glm-dsa.context_length u32 = 202752
llama_model_loader: - kv 13: glm-dsa.embedding_length u32 = 6144
llama_model_loader: - kv 14: glm-dsa.feed_forward_length u32 = 12288
llama_model_loader: - kv 15: glm-dsa.attention.head_count u32 = 64
llama_model_loader: - kv 16: glm-dsa.attention.head_count_kv u32 = 1
llama_model_loader: - kv 17: glm-dsa.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 18: glm-dsa.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 19: glm-dsa.expert_used_count u32 = 8
llama_model_loader: - kv 20: glm-dsa.expert_group_count u32 = 1
llama_model_loader: - kv 21: glm-dsa.expert_group_used_count u32 = 1
llama_model_loader: - kv 22: glm-dsa.expert_gating_func u32 = 2
llama_model_loader: - kv 23: general.file_type u32 = 156
llama_model_loader: - kv 24: glm-dsa.leading_dense_block_count u32 = 3
llama_model_loader: - kv 25: glm-dsa.vocab_size u32 = 154880
llama_model_loader: - kv 26: glm-dsa.attention.q_lora_rank u32 = 2048
llama_model_loader: - kv 27: glm-dsa.attention.kv_lora_rank u32 = 512
llama_model_loader: - kv 28: glm-dsa.attention.key_length u32 = 576
llama_model_loader: - kv 29: glm-dsa.attention.value_length u32 = 512
llama_model_loader: - kv 30: glm-dsa.attention.key_length_mla u32 = 256
llama_model_loader: - kv 31: glm-dsa.attention.value_length_mla u32 = 256
llama_model_loader: - kv 32: glm-dsa.expert_feed_forward_length u32 = 2048
llama_model_loader: - kv 33: glm-dsa.expert_count u32 = 256
llama_model_loader: - kv 34: glm-dsa.expert_shared_count u32 = 1
llama_model_loader: - kv 35: glm-dsa.expert_weights_scale f32 = 2.500000
llama_model_loader: - kv 36: glm-dsa.expert_weights_norm bool = true
llama_model_loader: - kv 37: glm-dsa.rope.dimension_count u32 = 64
llama_model_loader: - kv 38: glm-dsa.nextn_predict_layers u32 = 1
llama_model_loader: - kv 39: glm-dsa.attention.indexer.head_count u32 = 32
llama_model_loader: - kv 40: glm-dsa.attention.indexer.key_length u32 = 128
llama_model_loader: - kv 41: glm-dsa.attention.indexer.top_k u32 = 2048
llama_model_loader: - kv 42: general.quantization_version u32 = 2
llama_model_loader: - kv 43: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 44: tokenizer.ggml.pre str = glm4
llama_model_loader: - kv 45: tokenizer.ggml.tokens arr[str,154880] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 46: tokenizer.ggml.token_type arr[i32,154880] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 47: tokenizer.ggml.merges arr[str,321649] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 48: tokenizer.ggml.eos_token_id u32 = 154820
llama_model_loader: - kv 49: tokenizer.ggml.padding_token_id u32 = 154820
llama_model_loader: - kv 50: tokenizer.ggml.bos_token_id u32 = 154822
llama_model_loader: - kv 51: tokenizer.ggml.eot_token_id u32 = 154827
llama_model_loader: - kv 52: tokenizer.ggml.unknown_token_id u32 = 154820
llama_model_loader: - kv 53: tokenizer.ggml.eom_token_id u32 = 154829
llama_model_loader: - kv 54: tokenizer.chat_template str = [gMASK]<sop>\n{%- if tools -%}\n{%- mac...
llama_model_loader: - kv 55: quantize.imatrix.file str = /mnt/data/models/ubergarm/GLM-5.1-GGU...
llama_model_loader: - kv 56: quantize.imatrix.dataset str = ubergarm-imatrix-calibration-corpus-v...
llama_model_loader: - kv 57: quantize.imatrix.entries_count i32 = 1003
llama_model_loader: - kv 58: quantize.imatrix.chunks_count i32 = 813
llama_model_loader: - kv 59: split.no u16 = 0
llama_model_loader: - kv 60: split.count u16 = 5
llama_model_loader: - kv 61: split.tensors.count i32 = 1809
llama_model_loader: - type f32: 630 tensors
llama_model_loader: - type q8_0: 396 tensors
llama_model_loader: - type iq4_k: 1 tensors
llama_model_loader: - type iq6_k: 317 tensors
llama_model_loader: - type iq5_ks: 240 tensors
llama_model_loader: - type iq1_kt: 225 tensors
load: 0 unused tokens
load: special_eot_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special_eom_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load: - 154820 ('<|endoftext|>')
load: - 154827 ('<|user|>')
load: - 154829 ('<|observation|>')
load: special tokens cache size = 36
load: token to piece cache size = 0.9811 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = glm-dsa
llm_load_print_meta: n_ctx_train = 202752
llm_load_print_meta: n_embd = 6144
llm_load_print_meta: n_layer = 79
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 64
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_swa_pattern = 1
llm_load_print_meta: n_embd_head_k = 256
llm_load_print_meta: n_embd_head_v = 256
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 16384
llm_load_print_meta: n_embd_v_gqa = 16384
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 12288
llm_load_print_meta: n_expert = 256
llm_load_print_meta: n_expert_used = 8
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 202752
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_n_group = 0
llm_load_print_meta: model type = 744B.A40B
llm_load_print_meta: model ftype = IQ1_KT - 1.75 bpw
llm_load_print_meta: model params = 753.864 B
llm_load_print_meta: model size = 169.190 GiB (1.928 BPW)
llm_load_print_meta: repeating layers = 167.958 GiB (1.919 BPW, 751.961 B parameters)
llm_load_print_meta: general.name = GLM 5.1
llm_load_print_meta: n_layer_dense_lead = 3
llm_load_print_meta: n_lora_q = 2048
llm_load_print_meta: n_lora_kv = 512
llm_load_print_meta: n_ff_exp = 2048
llm_load_print_meta: n_expert_shared = 1
llm_load_print_meta: expert_weights_scale = 2.5
llm_load_print_meta: expert_weights_norm = 1
llm_load_print_meta: expert_gating_func = sigmoid
llm_load_print_meta: rope_yarn_log_mul = 0.0000
print_info: vocab type = BPE
print_info: n_vocab = 154880
print_info: n_merges = 321649
print_info: BOS token = 154822 '[gMASK]'
print_info: EOS token = 154820 '<|endoftext|>'
print_info: EOT token = 154827 '<|user|>'
print_info: EOM token = 154829 '<|observation|>'
print_info: UNK token = 154820 '<|endoftext|>'
print_info: PAD token = 154820 '<|endoftext|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 154838 '<|code_prefix|>'
print_info: FIM SUF token = 154840 '<|code_suffix|>'
print_info: FIM MID token = 154839 '<|code_middle|>'
print_info: EOG token = 154820 '<|endoftext|>'
print_info: EOG token = 154827 '<|user|>'
print_info: EOG token = 154829 '<|observation|>'
print_info: max token length = 1024
======================================= HAVE_FANCY_SIMD is defined
Free memory 0 MiB on device 0 is less the 1024 MiB safety margin
------------------- Layer sizes:
Layer 0: 334.94, 4.50, 339.44 672.00 MiB
Layer 1: 334.94, 4.50, 339.44 672.00 MiB
Layer 2: 334.94, 4.50, 339.44 672.00 MiB
Layer 3: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 4: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 5: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 6: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 7: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 8: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 9: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 10: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 11: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 12: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 13: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 14: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 15: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 16: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 17: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 18: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 19: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 20: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 21: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 22: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 23: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 24: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 25: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 26: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 27: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 28: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 29: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 30: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 31: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 32: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 33: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 34: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 35: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 36: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 37: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 38: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 39: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 40: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 41: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 42: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 43: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 44: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 45: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 46: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 47: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 48: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 49: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 50: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 51: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 52: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 53: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 54: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 55: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 56: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 57: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 58: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 59: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 60: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 61: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 62: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 63: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 64: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 65: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 66: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 67: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 68: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 69: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 70: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 71: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 72: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 73: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 74: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 75: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 76: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 77: 2248.74, 4.50, 2253.24 512.00 MiB
Layer 78: 0.00, 4.50, 4.50 96.00 MiB
Layer 79: 751.52, 1748.00, 2499.52 MiB (output layer)
--------------------------------------------------------------------------
Total : 169660.50, 2103.50, 171764.00 MiB
Free memory 0 MiB on device 0 is less the required compute buffer size 672 MiB
Memory required for model tensors + cache: 172516 MiB
Memory available on all devices - compute: 0 MiB
llm_load_tensors: ggml ctx size = 0.77 MiB
model has unused tensor blk.78.attn_norm.weight (size = 24576 bytes) -- ignoring
model has unused tensor blk.78.attn_q_a_norm.weight (size = 8192 bytes) -- ignoring
model has unused tensor blk.78.attn_kv_a_norm.weight (size = 2048 bytes) -- ignoring
model has unused tensor blk.78.attn_q_a.weight (size = 10420224 bytes) -- ignoring
model has unused tensor blk.78.attn_q_b.weight (size = 27787264 bytes) -- ignoring
model has unused tensor blk.78.attn_kv_a_mqa.weight (size = 3760128 bytes) -- ignoring
model has unused tensor blk.78.attn_output.weight (size = 83361792 bytes) -- ignoring
model has unused tensor blk.78.indexer.k_norm.weight (size = 512 bytes) -- ignoring
model has unused tensor blk.78.indexer.k_norm.bias (size = 512 bytes) -- ignoring
model has unused tensor blk.78.indexer.proj.weight (size = 208896 bytes) -- ignoring
model has unused tensor blk.78.indexer.attn_k.weight (size = 835584 bytes) -- ignoring
model has unused tensor blk.78.indexer.attn_q_b.weight (size = 6946816 bytes) -- ignoring
model has unused tensor blk.78.ffn_norm.weight (size = 24576 bytes) -- ignoring
model has unused tensor blk.78.ffn_gate_inp.weight (size = 6291456 bytes) -- ignoring
model has unused tensor blk.78.exp_probs_b.bias (size = 1024 bytes) -- ignoring
model has unused tensor blk.78.ffn_gate_exps.weight (size = 2116026368 bytes) -- ignoring
model has unused tensor blk.78.ffn_down_exps.weight (size = 2120220672 bytes) -- ignoring
model has unused tensor blk.78.ffn_up_exps.weight (size = 2116026368 bytes) -- ignoring
model has unused tensor blk.78.ffn_gate_shexp.weight (size = 8265728 bytes) -- ignoring
model has unused tensor blk.78.ffn_down_shexp.weight (size = 8282112 bytes) -- ignoring
model has unused tensor blk.78.ffn_up_shexp.weight (size = 8265728 bytes) -- ignoring
model has unused tensor blk.78.nextn.eh_proj.weight (size = 80216064 bytes) -- ignoring
model has unused tensor blk.78.nextn.enorm.weight (size = 24576 bytes) -- ignoring
model has unused tensor blk.78.nextn.hnorm.weight (size = 24576 bytes) -- ignoring
model has unused tensor blk.78.nextn.shared_head_norm.weight (size = 24576 bytes) -- ignoring
llm_load_tensors: offloading 0 repeating layers to GPU
llm_load_tensors: offloaded 0/80 layers to GPU
llm_load_tensors: CPU buffer size = 166959.39 MiB
....................................................................................................
============ llm_prepare_mla: need to compute 79 wkv_b tensors
================= Adjusted mainline llama.cpp MLA tensors to ik_llama.cpp
Computed blk.0.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.1.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.2.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.3.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.4.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.5.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.6.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.7.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.8.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.9.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.10.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.11.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.12.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.13.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.14.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.15.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.16.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.17.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.18.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.19.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.20.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.21.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.22.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.23.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.24.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.25.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.26.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.27.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.28.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.29.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.30.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.31.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.32.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.33.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.34.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.35.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.36.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.37.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.38.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.39.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.40.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.41.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.42.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.43.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.44.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.45.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.46.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
llama_init_from_model: n_ctx = 4096
llama_init_from_model: n_batch = 4096
llama_init_from_model: n_ubatch = 4096
llama_init_from_model: flash_attn = 1
llama_init_from_model: mla_attn = 3
llama_init_from_model: attn_max_b = 0
llama_init_from_model: fused_moe = 1
llama_init_from_model: grouped er = 0
llama_init_from_model: fused_up_gate = 1
llama_init_from_model: fused_mmad = 1
llama_init_from_model: rope_cache = 0
llama_init_from_model: graph_reuse = 1
llama_init_from_model: k_cache_hadam = 0
llama_init_from_model: v_cache_hadam = 0
llama_init_from_model: split_mode_graph_scheduling = 0
llama_init_from_model: reduce_type = f16
llama_init_from_model: sched_async = 0
llama_init_from_model: ser = -1, 0
llama_init_from_model: freq_base = 1000000.0
llama_init_from_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 351.00 MiB
llama_init_from_model: KV self size = 351.00 MiB, c^KV (f16): 351.00 MiB, kv^T: not used
llama_init_from_model: CPU output buffer size = 4.73 MiB
llama_init_from_model: CPU compute buffer size = 2516.00 MiB
llama_init_from_model: graph nodes = 4166
llama_init_from_model: graph splits = 1
llama_init_from_model: enabling only_active_experts scheduling

system_info: n_threads = 96 (n_threads_batch = 128) / 512 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
perplexity: tokenizing the input ..
perplexity: tokenization took 358.279 ms
perplexity: calculating perplexity over 565 chunks, n_ctx=512, batch_size=4096, n_seq=8
perplexity: 21.15 seconds per pass - ETA 24.88 minutes
Computed blk.47.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.48.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.49.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.50.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.51.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.52.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.53.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.54.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.55.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.56.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.57.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.58.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.59.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.60.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.61.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.62.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.63.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.64.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.65.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.66.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.67.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.68.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.69.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.70.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.71.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.72.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.73.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.74.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.75.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.76.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.77.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.78.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
[1]2.9323,[2]3.7957,[3]3.0994,[4]2.7934,[5]2.7828,[6]2.8859,[7]2.9217,[8]2.8890,[9]3.0828,[10]3.0597,[11]3.0487,[12]3.1718,[13]3.2383,[14]3.2564,[15]3.3649,[16]3.4639,[17]3.5795,[18]3.8030,[19]3.7453,[20]3.7414,[21]3.8273,[22]3.7886,[23]3.7163,[24]3.6445,[25]3.5954,[26]3.5710,[27]3.5551,[28]3.5744,[29]3.5851,[30]3.6495,[31]3.7132,[32]3.7522,[33]3.8156,[34]3.8527,[35]3.9298,[36]3.9801,[37]3.9861,[38]4.0826,[39]4.1010,[40]4.1362,[41]4.2107,[42]4.2179,[43]4.2192,[44]4.2471,[45]4.3367,[46]4.4100,[47]4.3673,[48]4.3516,[49]4.3347,[50]4.3509,[51]4.3900,[52]4.4241,[53]4.4808,[54]4.5239,[55]4.5569,[56]4.5887,[57]4.5837,[58]4.6008,[59]4.5997,[60]4.6478,[61]4.6825,[62]4.7399,[63]4.8009,[64]4.8280,[65]4.8406,[66]4.8502,[67]4.8412,[68]4.8368,[69]4.8679,[70]4.8704,[71]4.8711,[72]4.8585,[73]4.8620,[74]4.8900,[75]4.9115,[76]4.8631,[77]4.8229,[78]4.7936,[79]4.7969,[80]4.7950,[81]4.7818,[82]4.7730,[83]4.7750,[84]4.7591,[85]4.7632,[86]4.7536,[87]4.7603,[88]4.7594,[89]4.7359,[90]4.7428,[91]4.7317,[92]4.7269,[93]4.7344,[94]4.7452,[95]4.7290,[96]4.7483,[97]4.7725,[98]4.7828,[99]4.7643,[100]4.7541,[101]4.7611,[102]4.7467,[103]4.7531,[104]4.7653,[105]4.7889,[106]4.8182,[107]4.8938,[108]4.8988,[109]4.9015,[110]4.9424,[111]4.9660,[112]4.9331,[113]4.9226,[114]4.9258,[115]4.9024,[116]4.9151,[117]4.9061,[118]4.8999,[119]4.8895,[120]4.8731,[121]4.8613,[122]4.8540,[123]4.8189,[124]4.8046,[125]4.8016,[126]4.7915,[127]4.7783,[128]4.7923,[129]4.7964,[130]4.7960,[131]4.8103,[132]4.8096,[133]4.8048,[134]4.8142,[135]4.8323,[136]4.8317,[137]4.8355,[138]4.8275,[139]4.8167,[140]4.8312,[141]4.8199,[142]4.8066,[143]4.7909,[144]4.7790,[145]4.7635,[146]4.7454,[147]4.7291,[148]4.7215,[149]4.7052,[150]4.7055,[151]4.6873,[152]4.6725,[153]4.6638,[154]4.6497,[155]4.6377,[156]4.6348,[157]4.6280,[158]4.6199,[159]4.6265,[160]4.6322,[161]4.6356,[162]4.6415,[163]4.6375,[164]4.6497,[165]4.6604,[166]4.6769,[167]4.6995,[168]4.7070,[169]4.7431,[170]4.7676,[171]4.7871,[172]4.8144,[173]4.8010,[174]4.7827,[175]4.7739,[176]4.7624,[177]4.7433,[178]4.7254,[179]4.7140,[180]4.7066,[181]4.7005,[182]4.7148,[183]4.7364,[184]4.7647,[185]4.7853,[186]4.7903,[187]4.8103,[188]4.8398,[189]4.8653,[190]4.8806,[191]4.8947,[192]4.8961,[193]4.8962,[194]4.8996,[195]4.8965,[196]4.9114,[197]4.9290,[198]4.9427,[199]4.9377,[200]4.9460,[201]4.9391,[202]4.9411,[203]4.9344,[204]4.9384,[205]4.9425,[206]4.9453,[207]4.9456,[208]4.9486,[209]4.9422,[210]4.9383,[211]4.9351,[212]4.9318,[213]4.9312,[214]4.9306,[215]4.9335,[216]4.9364,[217]4.9387,[218]4.9472,[219]4.9371,[220]4.9286,[221]4.9197,[222]4.9162,[223]4.9104,[224]4.9124,[225]4.9106,[226]4.9157,[227]4.9085,[228]4.9013,[229]4.8935,[230]4.8864,[231]4.8850,[232]4.8815,[233]4.8816,[234]4.8756,[235]4.8656,[236]4.8641,[237]4.8571,[238]4.8612,[239]4.8799,[240]4.8959,[241]4.9107,[242]4.9199,[243]4.9345,[244]4.9493,[245]4.9645,[246]4.9782,[247]4.9971,[248]5.0113,[249]5.0094,[250]5.0058,[251]4.9884,[252]4.9751,[253]4.9609,[254]4.9543,[255]4.9537,[256]4.9538,[257]4.9496,[258]4.9482,[259]4.9386,[260]4.9309,[261]4.9219,[262]4.9169,[263]4.9064,[264]4.8970,[265]4.8942,[266]4.8911,[267]4.8806,[268]4.8737,[269]4.8662,[270]4.8640,[271]4.8588,[272]4.8494,[273]4.8470,[274]4.8318,[275]4.8263,[276]4.8188,[277]4.8108,[278]4.8018,[279]4.8008,[280]4.8054,[281]4.8083,[282]4.8136,[283]4.8196,[284]4.8229,[285]4.8272,[286]4.8353,[287]4.8513,[288]4.8480,[289]4.8450,[290]4.8461,[291]4.8469,[292]4.8423,[293]4.8343,[294]4.8411,[295]4.8448,[296]4.8422,[297]4.8392,[298]4.8386,[299]4.8340,[300]4.8346,[301]4.8386,[302]4.8360,[303]4.8346,[304]4.8321,[305]4.8260,[30
6]4.8211,[307]4.8256,[308]4.8276,[309]4.8396,[310]4.8286,[311]4.8276,[312]4.8206,[313]4.8232,[314]4.8225,[315]4.8259,[316]4.8237,[317]4.8250,[318]4.8245,[319]4.8213,[320]4.8202,[321]4.8220,[322]4.8263,[323]4.8196,[324]4.8171,[325]4.8197,[326]4.8168,[327]4.8236,[328]4.8220,[329]4.8224,[330]4.8192,[331]4.8146,[332]4.8135,[333]4.8169,[334]4.8189,[335]4.8167,[336]4.8143,[337]4.8110,[338]4.8065,[339]4.8044,[340]4.8061,[341]4.8066,[342]4.8048,[343]4.8104,[344]4.8159,[345]4.8265,[346]4.8211,[347]4.8151,[348]4.8097,[349]4.8106,[350]4.7985,[351]4.7881,[352]4.7832,[353]4.7819,[354]4.7844,[355]4.7861,[356]4.7905,[357]4.7939,[358]4.7976,[359]4.8044,[360]4.8108,[361]4.8083,[362]4.8092,[363]4.8095,[364]4.8122,[365]4.8122,[366]4.8133,[367]4.8133,[368]4.8160,[369]4.8147,[370]4.8179,[371]4.8210,[372]4.8265,[373]4.8318,[374]4.8366,[375]4.8326,[376]4.8295,[377]4.8266,[378]4.8329,[379]4.8399,[380]4.8389,[381]4.8324,[382]4.8269,[383]4.8305,[384]4.8362,[385]4.8419,[386]4.8492,[387]4.8536,[388]4.8593,[389]4.8673,[390]4.8681,[391]4.8541,[392]4.8458,[393]4.8355,[394]4.8359,[395]4.8319,[396]4.8246,[397]4.8175,[398]4.8129,[399]4.8081,[400]4.7990,[401]4.7910,[402]4.7858,[403]4.7771,[404]4.7690,[405]4.7631,[406]4.7542,[407]4.7463,[408]4.7404,[409]4.7327,[410]4.7257,[411]4.7203,[412]4.7152,[413]4.7138,[414]4.7156,[415]4.7129,[416]4.7129,[417]4.7058,[418]4.6992,[419]4.7048,[420]4.6965,[421]4.6966,[422]4.6967,[423]4.6942,[424]4.6902,[425]4.6858,[426]4.6826,[427]4.6793,[428]4.6764,[429]4.6740,[430]4.6698,[431]4.6696,[432]4.6632,[433]4.6570,[434]4.6505,[435]4.6450,[436]4.6396,[437]4.6359,[438]4.6314,[439]4.6290,[440]4.6263,[441]4.6252,[442]4.6235,[443]4.6284,[444]4.6402,[445]4.6360,[446]4.6347,[447]4.6354,[448]4.6376,[449]4.6439,[450]4.6455,[451]4.6469,[452]4.6518,[453]4.6609,[454]4.6660,[455]4.6665,[456]4.6715,[457]4.6684,[458]4.6703,[459]4.6695,[460]4.6762,[461]4.6818,[462]4.6824,[463]4.6815,[464]4.6793,[465]4.6796,[466]4.6859,[467]4.6851,[468]4.6846,[469]4.6906,[470]4.6909,[471]4.6918,[472]4.6948,[473]4.6939,[474]4.6916,[475]4.6927,[476]4.6924,[477]4.6944,[478]4.6943,[479]4.6950,[480]4.6960,[481]4.6993,[482]4.7010,[483]4.7036,[484]4.6980,[485]4.7013,[486]4.7019,[487]4.7092,[488]4.7151,[489]4.7198,[490]4.7191,[491]4.7240,[492]4.7277,[493]4.7297,[494]4.7358,[495]4.7420,[496]4.7404,[497]4.7406,[498]4.7418,[499]4.7439,[500]4.7468,[501]4.7453,[502]4.7446,[503]4.7489,[504]4.7540,[505]4.7539,[506]4.7509,[507]4.7525,[508]4.7551,[509]4.7615,[510]4.7625,[511]4.7670,[512]4.7585,[513]4.7553,[514]4.7520,[515]4.7483,[516]4.7477,[517]4.7468,[518]4.7455,[519]4.7382,[520]4.7404,[521]4.7399,[522]4.7362,[523]4.7371,[524]4.7389,[525]4.7343,[526]4.7326,[527]4.7342,[528]4.7326,[529]4.7276,[530]4.7214,[531]4.7150,[532]4.7158,[533]4.7117,[534]4.7086,[535]4.7020,[536]4.6972,[537]4.6919,[538]4.6893,[539]4.6872,[540]4.6884,[541]4.6916,[542]4.6921,[543]4.6936,[544]4.6965,[545]4.6981,[546]4.6994,[547]4.7003,[548]4.6992,[549]4.6931,[550]4.6873,[551]4.6803,[552]4.6765,[553]4.6707,[554]4.6684,[555]4.6640,[556]4.6602,[557]4.6554,[558]4.6566,[559]4.6535,[560]4.6539,[561]4.6564,[562]4.6594,[563]4.6645,[564]4.6683,[565]4.6654,
llama_print_timings: load time = 32161.55 ms
llama_print_timings: sample time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: prompt eval time = 1380500.23 ms / 289280 tokens ( 4.77 ms per token, 209.55 tokens per second)
llama_print_timings: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: total time = 1390731.88 ms / 289281 tokens

Final estimate: PPL over 565 chunks for n_ctx=512 = 4.6654 +/- 0.02830
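Comparing the two final estimates above: smol-IQ1_KT lands at PPL 4.6654 versus 2.7257 for the Q8_0 baseline, roughly a 71% relative increase in exchange for the much smaller footprint. A quick sanity check of that ratio and of the reported 1.928 BPW, assuming GiB = 2^30 bytes (all other numbers copied from the logs above):

```python
# Values copied from the logs above.
ppl_q8_0 = 2.7257      # Q8_0 baseline, final estimate
ppl_iq1_kt = 4.6654    # smol-IQ1_KT, final estimate
size_gib = 169.190     # smol-IQ1_KT model size
params = 753.864e9     # total model params

rel_increase = (ppl_iq1_kt - ppl_q8_0) / ppl_q8_0
bpw = size_gib * 2**30 * 8 / params  # bits per weight
print(f"relative PPL increase = {rel_increase:.1%}")  # ~71.2%
print(f"BPW = {bpw:.3f}")  # ~1.928, matching llm_load_print_meta
```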
logs/perplexity-GLM-5.1-smol-IQ2_KS.log ADDED
@@ -0,0 +1,409 @@
numactl -N "$SOCKET" -m "$SOCKET" \
./build/bin/llama-perplexity \
-m "$model" \
-f wiki.test.raw \
--seed 1337 \
--ctx-size 512 \
-mla 3 \
-muge \
--merge-qkv \
-ub 4096 -b 4096 \
--no-mmap \
--numa numactl \
--threads 96 \
--threads-batch 128 \
--validate-quants

SOCKET is set to: 0
main: build = 4391 (67fc9c5e)
main: built with cc (Ubuntu 13.3.0-6ubuntu2~24.04.1) 13.3.0 for x86_64-linux-gnu
main: seed = 1337
CPU: using device CPU - 0 MiB free
llama_model_loader: additional 5 GGUFs metadata loaded.
llama_model_loader: loaded meta data with 62 key-value pairs and 1809 tensors from /mnt/raid/hf/GLM-5.1-GGUF/smol-IQ2_KS/GLM-5.1-smol-IQ2_KS-00001-of-00006.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = glm-dsa
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.sampling.top_p f32 = 0.950000
llama_model_loader: - kv 3: general.sampling.temp f32 = 1.000000
llama_model_loader: - kv 4: general.name str = GLM 5.1
llama_model_loader: - kv 5: general.version str = 5.1
llama_model_loader: - kv 6: general.basename str = GLM
llama_model_loader: - kv 7: general.size_label str = 256x22B
llama_model_loader: - kv 8: general.license str = mit
llama_model_loader: - kv 9: general.tags arr[str,1] = ["text-generation"]
llama_model_loader: - kv 10: general.languages arr[str,2] = ["en", "zh"]
llama_model_loader: - kv 11: glm-dsa.block_count u32 = 79
llama_model_loader: - kv 12: glm-dsa.context_length u32 = 202752
llama_model_loader: - kv 13: glm-dsa.embedding_length u32 = 6144
llama_model_loader: - kv 14: glm-dsa.feed_forward_length u32 = 12288
llama_model_loader: - kv 15: glm-dsa.attention.head_count u32 = 64
llama_model_loader: - kv 16: glm-dsa.attention.head_count_kv u32 = 1
llama_model_loader: - kv 17: glm-dsa.rope.freq_base f32 = 1000000.000000
llama_model_loader: - kv 18: glm-dsa.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 19: glm-dsa.expert_used_count u32 = 8
llama_model_loader: - kv 20: glm-dsa.expert_group_count u32 = 1
llama_model_loader: - kv 21: glm-dsa.expert_group_used_count u32 = 1
llama_model_loader: - kv 22: glm-dsa.expert_gating_func u32 = 2
llama_model_loader: - kv 23: general.file_type u32 = 147
llama_model_loader: - kv 24: glm-dsa.leading_dense_block_count u32 = 3
llama_model_loader: - kv 25: glm-dsa.vocab_size u32 = 154880
llama_model_loader: - kv 26: glm-dsa.attention.q_lora_rank u32 = 2048
llama_model_loader: - kv 27: glm-dsa.attention.kv_lora_rank u32 = 512
llama_model_loader: - kv 28: glm-dsa.attention.key_length u32 = 576
llama_model_loader: - kv 29: glm-dsa.attention.value_length u32 = 512
llama_model_loader: - kv 30: glm-dsa.attention.key_length_mla u32 = 256
llama_model_loader: - kv 31: glm-dsa.attention.value_length_mla u32 = 256
llama_model_loader: - kv 32: glm-dsa.expert_feed_forward_length u32 = 2048
llama_model_loader: - kv 33: glm-dsa.expert_count u32 = 256
llama_model_loader: - kv 34: glm-dsa.expert_shared_count u32 = 1
llama_model_loader: - kv 35: glm-dsa.expert_weights_scale f32 = 2.500000
llama_model_loader: - kv 36: glm-dsa.expert_weights_norm bool = true
llama_model_loader: - kv 37: glm-dsa.rope.dimension_count u32 = 64
llama_model_loader: - kv 38: glm-dsa.nextn_predict_layers u32 = 1
llama_model_loader: - kv 39: glm-dsa.attention.indexer.head_count u32 = 32
llama_model_loader: - kv 40: glm-dsa.attention.indexer.key_length u32 = 128
llama_model_loader: - kv 41: glm-dsa.attention.indexer.top_k u32 = 2048
llama_model_loader: - kv 42: general.quantization_version u32 = 2
llama_model_loader: - kv 43: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 44: tokenizer.ggml.pre str = glm4
llama_model_loader: - kv 45: tokenizer.ggml.tokens arr[str,154880] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 46: tokenizer.ggml.token_type arr[i32,154880] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 47: tokenizer.ggml.merges arr[str,321649] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 48: tokenizer.ggml.eos_token_id u32 = 154820
llama_model_loader: - kv 49: tokenizer.ggml.padding_token_id u32 = 154820
llama_model_loader: - kv 50: tokenizer.ggml.bos_token_id u32 = 154822
llama_model_loader: - kv 51: tokenizer.ggml.eot_token_id u32 = 154827
llama_model_loader: - kv 52: tokenizer.ggml.unknown_token_id u32 = 154820
llama_model_loader: - kv 53: tokenizer.ggml.eom_token_id u32 = 154829
llama_model_loader: - kv 54: tokenizer.chat_template str = [gMASK]<sop>\n{%- if tools -%}\n{%- mac...
llama_model_loader: - kv 55: quantize.imatrix.file str = /mnt/data/models/ubergarm/GLM-5.1-GGU...
llama_model_loader: - kv 56: quantize.imatrix.dataset str = ubergarm-imatrix-calibration-corpus-v...
llama_model_loader: - kv 57: quantize.imatrix.entries_count i32 = 1003
llama_model_loader: - kv 58: quantize.imatrix.chunks_count i32 = 813
llama_model_loader: - kv 59: split.no u16 = 0
llama_model_loader: - kv 60: split.count u16 = 6
llama_model_loader: - kv 61: split.tensors.count i32 = 1809
llama_model_loader: - type f32: 630 tensors
llama_model_loader: - type q8_0: 396 tensors
llama_model_loader: - type iq4_k: 1 tensors
llama_model_loader: - type iq6_k: 317 tensors
llama_model_loader: - type iq2_ks: 225 tensors
llama_model_loader: - type iq5_ks: 240 tensors
load: 0 unused tokens
load: special_eot_id is not in special_eog_ids - the tokenizer config may be incorrect
load: special_eom_id is not in special_eog_ids - the tokenizer config may be incorrect
load: printing all EOG tokens:
load: - 154820 ('<|endoftext|>')
load: - 154827 ('<|user|>')
load: - 154829 ('<|observation|>')
load: special tokens cache size = 36
load: token to piece cache size = 0.9811 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = glm-dsa
llm_load_print_meta: n_ctx_train = 202752
llm_load_print_meta: n_embd = 6144
llm_load_print_meta: n_layer = 79
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 64
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_swa_pattern = 1
llm_load_print_meta: n_embd_head_k = 256
llm_load_print_meta: n_embd_head_v = 256
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 16384
llm_load_print_meta: n_embd_v_gqa = 16384
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 12288
llm_load_print_meta: n_expert = 256
llm_load_print_meta: n_expert_used = 8
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 202752
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_n_group = 0
llm_load_print_meta: model type = 744B.A40B
llm_load_print_meta: model ftype = IQ2_KS - 2.1875 bpw
llm_load_print_meta: model params = 753.864 B
llm_load_print_meta: model size = 205.738 GiB (2.344 BPW)
llm_load_print_meta: repeating layers = 204.506 GiB (2.336 BPW, 751.961 B parameters)
llm_load_print_meta: general.name = GLM 5.1
llm_load_print_meta: n_layer_dense_lead = 3
llm_load_print_meta: n_lora_q = 2048
llm_load_print_meta: n_lora_kv = 512
llm_load_print_meta: n_ff_exp = 2048
llm_load_print_meta: n_expert_shared = 1
llm_load_print_meta: expert_weights_scale = 2.5
llm_load_print_meta: expert_weights_norm = 1
llm_load_print_meta: expert_gating_func = sigmoid
llm_load_print_meta: rope_yarn_log_mul = 0.0000
print_info: vocab type = BPE
print_info: n_vocab = 154880
print_info: n_merges = 321649
print_info: BOS token = 154822 '[gMASK]'
print_info: EOS token = 154820 '<|endoftext|>'
print_info: EOT token = 154827 '<|user|>'
print_info: EOM token = 154829 '<|observation|>'
print_info: UNK token = 154820 '<|endoftext|>'
print_info: PAD token = 154820 '<|endoftext|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 154838 '<|code_prefix|>'
print_info: FIM SUF token = 154840 '<|code_suffix|>'
print_info: FIM MID token = 154839 '<|code_middle|>'
print_info: EOG token = 154820 '<|endoftext|>'
print_info: EOG token = 154827 '<|user|>'
print_info: EOG token = 154829 '<|observation|>'
print_info: max token length = 1024
======================================= HAVE_FANCY_SIMD is defined
Free memory 0 MiB on device 0 is less the 1024 MiB safety margin
------------------- Layer sizes:
Layer 0: 334.94, 4.50, 339.44 672.00 MiB
Layer 1: 334.94, 4.50, 339.44 672.00 MiB
Layer 2: 334.94, 4.50, 339.44 672.00 MiB
Layer 3: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 4: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 5: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 6: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 7: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 8: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 9: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 10: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 11: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 12: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 13: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 14: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 15: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 16: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 17: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 18: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 19: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 20: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 21: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 22: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 23: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 24: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 25: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 26: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 27: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 28: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 29: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 30: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 31: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 32: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 33: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 34: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 35: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 36: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 37: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 38: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 39: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 40: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 41: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 42: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 43: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 44: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 45: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 46: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 47: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 48: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 49: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 50: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 51: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 52: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 53: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 54: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 55: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 56: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 57: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 58: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 59: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 60: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 61: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 62: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 63: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 64: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 65: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 66: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 67: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 68: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 69: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 70: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 71: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 72: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 73: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 74: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 75: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 76: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 77: 2747.74, 4.50, 2752.24 512.00 MiB
Layer 78: 0.00, 4.50, 4.50 96.00 MiB
Layer 79: 751.52, 1748.00, 2499.52 MiB (output layer)
--------------------------------------------------------------------------
|
| 254 |
+
Total : 207085.50, 2103.50, 209189.00 MiB
Free memory 0 MiB on device 0 is less the required compute buffer size 672 MiB
Memory required for model tensors + cache: 209941 MiB
Memory available on all devices - compute: 0 MiB
llm_load_tensors: ggml ctx size = 0.77 MiB
model has unused tensor blk.78.attn_norm.weight (size = 24576 bytes) -- ignoring
model has unused tensor blk.78.attn_q_a_norm.weight (size = 8192 bytes) -- ignoring
model has unused tensor blk.78.attn_kv_a_norm.weight (size = 2048 bytes) -- ignoring
model has unused tensor blk.78.attn_q_a.weight (size = 10420224 bytes) -- ignoring
model has unused tensor blk.78.attn_q_b.weight (size = 27787264 bytes) -- ignoring
model has unused tensor blk.78.attn_kv_a_mqa.weight (size = 3760128 bytes) -- ignoring
model has unused tensor blk.78.attn_output.weight (size = 83361792 bytes) -- ignoring
model has unused tensor blk.78.indexer.k_norm.weight (size = 512 bytes) -- ignoring
model has unused tensor blk.78.indexer.k_norm.bias (size = 512 bytes) -- ignoring
model has unused tensor blk.78.indexer.proj.weight (size = 208896 bytes) -- ignoring
model has unused tensor blk.78.indexer.attn_k.weight (size = 835584 bytes) -- ignoring
model has unused tensor blk.78.indexer.attn_q_b.weight (size = 6946816 bytes) -- ignoring
model has unused tensor blk.78.ffn_norm.weight (size = 24576 bytes) -- ignoring
model has unused tensor blk.78.ffn_gate_inp.weight (size = 6291456 bytes) -- ignoring
model has unused tensor blk.78.exp_probs_b.bias (size = 1024 bytes) -- ignoring
model has unused tensor blk.78.ffn_gate_exps.weight (size = 2116026368 bytes) -- ignoring
model has unused tensor blk.78.ffn_down_exps.weight (size = 2120220672 bytes) -- ignoring
model has unused tensor blk.78.ffn_up_exps.weight (size = 2116026368 bytes) -- ignoring
model has unused tensor blk.78.ffn_gate_shexp.weight (size = 8265728 bytes) -- ignoring
model has unused tensor blk.78.ffn_down_shexp.weight (size = 8282112 bytes) -- ignoring
model has unused tensor blk.78.ffn_up_shexp.weight (size = 8265728 bytes) -- ignoring
model has unused tensor blk.78.nextn.eh_proj.weight (size = 80216064 bytes) -- ignoring
model has unused tensor blk.78.nextn.enorm.weight (size = 24576 bytes) -- ignoring
model has unused tensor blk.78.nextn.hnorm.weight (size = 24576 bytes) -- ignoring
model has unused tensor blk.78.nextn.shared_head_norm.weight (size = 24576 bytes) -- ignoring
llm_load_tensors: offloading 0 repeating layers to GPU
llm_load_tensors: offloaded 0/80 layers to GPU
llm_load_tensors: CPU buffer size = 204384.39 MiB
....................................................................................................
============ llm_prepare_mla: need to compute 79 wkv_b tensors
================= Adjusted mainline llama.cpp MLA tensors to ik_llama.cpp
Computed blk.0.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.1.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.2.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.3.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.4.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.5.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.6.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.7.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.8.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.9.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.10.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.11.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.12.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.13.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.14.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.15.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.16.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.17.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.18.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.19.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.20.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.21.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.22.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.23.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.24.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.25.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.26.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.27.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.28.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.29.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.30.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.31.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.32.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.33.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.34.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.35.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.36.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.37.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.38.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.39.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.40.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.41.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.42.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.43.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.44.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.45.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.46.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
llama_init_from_model: n_ctx = 4096
llama_init_from_model: n_batch = 4096
llama_init_from_model: n_ubatch = 4096
llama_init_from_model: flash_attn = 1
llama_init_from_model: mla_attn = 3
llama_init_from_model: attn_max_b = 0
llama_init_from_model: fused_moe = 1
llama_init_from_model: grouped er = 0
llama_init_from_model: fused_up_gate = 1
llama_init_from_model: fused_mmad = 1
llama_init_from_model: rope_cache = 0
llama_init_from_model: graph_reuse = 1
llama_init_from_model: k_cache_hadam = 0
llama_init_from_model: v_cache_hadam = 0
llama_init_from_model: split_mode_graph_scheduling = 0
llama_init_from_model: reduce_type = f16
llama_init_from_model: sched_async = 0
llama_init_from_model: ser = -1, 0
llama_init_from_model: freq_base = 1000000.0
llama_init_from_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 351.00 MiB
llama_init_from_model: KV self size = 351.00 MiB, c^KV (f16): 351.00 MiB, kv^T: not used
llama_init_from_model: CPU output buffer size = 4.73 MiB
llama_init_from_model: CPU compute buffer size = 2516.00 MiB
llama_init_from_model: graph nodes = 4166
llama_init_from_model: graph splits = 1
llama_init_from_model: enabling only_active_experts scheduling

system_info: n_threads = 96 (n_threads_batch = 128) / 512 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
perplexity: tokenizing the input ..
perplexity: tokenization took 363.866 ms
perplexity: calculating perplexity over 565 chunks, n_ctx=512, batch_size=4096, n_seq=8
perplexity: 19.87 seconds per pass - ETA 23.38 minutes
Computed blk.47.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.48.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.49.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.50.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.51.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.52.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.53.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.54.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.55.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.56.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.57.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.58.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.59.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.60.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.61.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.62.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.63.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.64.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.65.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.66.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.67.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.68.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.69.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.70.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.71.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.72.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.73.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.74.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.75.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.76.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.77.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
Computed blk.78.attn_kv_b.weight as 512 x 28672 of type q8_0 and stored in buffer CPU
[1]2.3723,[2]3.0406,[3]2.5225,[4]2.2505,[5]2.1943,[6]2.1544,[7]2.1216,[8]2.0551,[9]2.1655,[10]2.1262,[11]2.1201,[12]2.2045,[13]2.2476,[14]2.3146,[15]2.4415,[16]2.5516,[17]2.6777,[18]2.8741,[19]2.8540,[20]2.8823,[21]2.9480,[22]2.9536,[23]2.9062,[24]2.8613,[25]2.8467,[26]2.8262,[27]2.8202,[28]2.8482,[29]2.8534,[30]2.9177,[31]2.9846,[32]3.0397,[33]3.0983,[34]3.1345,[35]3.2005,[36]3.2522,[37]3.2767,[38]3.3752,[39]3.4082,[40]3.4482,[41]3.5269,[42]3.5290,[43]3.5410,[44]3.5757,[45]3.6680,[46]3.7318,[47]3.6913,[48]3.6635,[49]3.6400,[50]3.6449,[51]3.6735,[52]3.7000,[53]3.7526,[54]3.7863,[55]3.8185,[56]3.8519,[57]3.8500,[58]3.8715,[59]3.8787,[60]3.9221,[61]3.9568,[62]4.0090,[63]4.0534,[64]4.0785,[65]4.0948,[66]4.1028,[67]4.0903,[68]4.0790,[69]4.1078,[70]4.0986,[71]4.0928,[72]4.0896,[73]4.0924,[74]4.1249,[75]4.1318,[76]4.0860,[77]4.0483,[78]4.0170,[79]4.0034,[80]3.9906,[81]3.9714,[82]3.9661,[83]3.9708,[84]3.9560,[85]3.9535,[86]3.9447,[87]3.9470,[88]3.9401,[89]3.9221,[90]3.9184,[91]3.9056,[92]3.8947,[93]3.8989,[94]3.9078,[95]3.8879,[96]3.9011,[97]3.9209,[98]3.9236,[99]3.9167,[100]3.8952,[101]3.9085,[102]3.8967,[103]3.8958,[104]3.8956,[105]3.9143,[106]3.9419,[107]4.0058,[108]4.0149,[109]4.0219,[110]4.0636,[111]4.0889,[112]4.0580,[113]4.0478,[114]4.0523,[115]4.0441,[116]4.0537,[117]4.0511,[118]4.0575,[119]4.0551,[120]4.0484,[121]4.0404,[122]4.0358,[123]4.0083,[124]3.9985,[125]3.9948,[126]3.9871,[127]3.9811,[128]3.9959,[129]4.0025,[130]4.0036,[131]4.0163,[132]4.0180,[133]4.0142,[134]4.0238,[135]4.0401,[136]4.0355,[137]4.0390,[138]4.0306,[139]4.0184,[140]4.0354,[141]4.0303,[142]4.0213,[143]4.0134,[144]4.0063,[145]3.9981,[146]3.9861,[147]3.9734,[148]3.9699,[149]3.9587,[150]3.9543,[151]3.9403,[152]3.9326,[153]3.9302,[154]3.9210,[155]3.9139,[156]3.9128,[157]3.9102,[158]3.9057,[159]3.9137,[160]3.9237,[161]3.9287,[162]3.9299,[163]3.9258,[164]3.9367,[165]3.9380,[166]3.9481,[167]3.9690,[168]3.9727,[169]4.0043,[170]4.0261,[171]4.0397,[172]4.0649,[173]4.0478,[174]4.0323,[175]4.0135,[176]3.9999,[177]3.9841,[178]3.9652,[179]3.9529,[180]3.9407,[181]3.9354,[182]3.9514,[183]3.9702,[184]3.9978,[185]4.0155,[186]4.0225,[187]4.0439,[188]4.0712,[189]4.0966,[190]4.1126,[191]4.1279,[192]4.1315,[193]4.1364,[194]4.1394,[195]4.1351,[196]4.1439,[197]4.1618,[198]4.1776,[199]4.1741,[200]4.1839,[201]4.1815,[202]4.1843,[203]4.1807,[204]4.1867,[205]4.1925,[206]4.1968,[207]4.1991,[208]4.2011,[209]4.1974,[210]4.1972,[211]4.1972,[212]4.1944,[213]4.1959,[214]4.1983,[215]4.1994,[216]4.2014,[217]4.2021,[218]4.2084,[219]4.2000,[220]4.1961,[221]4.1914,[222]4.1897,[223]4.1850,[224]4.1877,[225]4.1864,[226]4.1900,[227]4.1833,[228]4.1797,[229]4.1724,[230]4.1667,[231]4.1647,[232]4.1645,[233]4.1649,[234]4.1589,[235]4.1481,[236]4.1441,[237]4.1368,[238]4.1393,[239]4.1577,[240]4.1738,[241]4.1873,[242]4.1975,[243]4.2115,[244]4.2262,[245]4.2417,[246]4.2530,[247]4.2678,[248]4.2799,[249]4.2800,[250]4.2763,[251]4.2597,[252]4.2448,[253]4.2318,[254]4.2273,[255]4.2275,[256]4.2274,[257]4.2227,[258]4.2193,[259]4.2082,[260]4.2023,[261]4.1934,[262]4.1867,[263]4.1776,[264]4.1711,[265]4.1667,[266]4.1628,[267]4.1526,[268]4.1451,[269]4.1397,[270]4.1376,[271]4.1317,[272]4.1237,[273]4.1191,[274]4.1065,[275]4.0971,[276]4.0863,[277]4.0791,[278]4.0677,[279]4.0680,[280]4.0732,[281]4.0769,[282]4.0819,[283]4.0878,[284]4.0898,[285]4.0926,[286]4.0994,[287]4.1111,[288]4.1107,[289]4.1092,[290]4.1119,[291]4.1125,[292]4.1048,[293]4.0965,[294]4.0973,[295]4.1000,[296]4.0960,[297]4.0928,[298]4.0938,[299]4.0899,[300]4.0909,[301]4.0938,[302]4.0918,[303]4.0898,[304]4.0885,[305]4.0843,[306]4.0788,[307]4.0826,[308]4.0868,[309]4.0976,[310]4.0856,[311]4.0802,[312]4.0748,[313]4.0749,[314]4.0739,[315]4.0794,[316]4.0788,[317]4.0798,[318]4.0798,[319]4.0757,[320]4.0742,[321]4.0769,[322]4.0799,[323]4.0732,[324]4.0704,[325]4.0726,[326]4.0694,[327]4.0746,[328]4.0714,[329]4.0724,[330]4.0691,[331]4.0670,[332]4.0671,[333]4.0702,[334]4.0729,[335]4.0716,[336]4.0707,[337]4.0703,[338]4.0679,[339]4.0667,[340]4.0677,[341]4.0693,[342]4.0708,[343]4.0758,[344]4.0814,[345]4.0903,[346]4.0881,[347]4.0819,[348]4.0754,[349]4.0727,[350]4.0631,[351]4.0529,[352]4.0479,[353]4.0437,[354]4.0431,[355]4.0418,[356]4.0444,[357]4.0457,[358]4.0439,[359]4.0477,[360]4.0489,[361]4.0434,[362]4.0413,[363]4.0391,[364]4.0404,[365]4.0392,[366]4.0396,[367]4.0365,[368]4.0354,[369]4.0325,[370]4.0335,[371]4.0329,[372]4.0364,[373]4.0396,[374]4.0435,[375]4.0416,[376]4.0378,[377]4.0348,[378]4.0383,[379]4.0437,[380]4.0394,[381]4.0345,[382]4.0306,[383]4.0347,[384]4.0412,[385]4.0464,[386]4.0544,[387]4.0589,[388]4.0653,[389]4.0733,[390]4.0742,[391]4.0628,[392]4.0548,[393]4.0458,[394]4.0452,[395]4.0398,[396]4.0331,[397]4.0246,[398]4.0177,[399]4.0117,[400]4.0036,[401]3.9960,[402]3.9904,[403]3.9805,[404]3.9711,[405]3.9634,[406]3.9542,[407]3.9461,[408]3.9389,[409]3.9319,[410]3.9248,[411]3.9174,[412]3.9116,[413]3.9085,[414]3.9072,[415]3.9068,[416]3.9047,[417]3.8978,[418]3.8916,[419]3.8964,[420]3.8896,[421]3.8881,[422]3.8897,[423]3.8852,[424]3.8807,[425]3.8773,[426]3.8736,[427]3.8696,[428]3.8670,[429]3.8628,[430]3.8584,[431]3.8588,[432]3.8535,[433]3.8470,[434]3.8411,[435]3.8362,[436]3.8312,[437]3.8270,[438]3.8216,[439]3.8192,[440]3.8163,[441]3.8160,[442]3.8124,[443]3.8184,[444]3.8279,[445]3.8243,[446]3.8238,[447]3.8255,[448]3.8275,[449]3.8340,[450]3.8347,[451]3.8363,[452]3.8408,[453]3.8494,[454]3.8547,[455]3.8572,[456]3.8625,[457]3.8605,[458]3.8638,[459]3.8644,[460]3.8705,[461]3.8763,[462]3.8786,[463]3.8783,[464]3.8766,[465]3.8757,[466]3.8826,[467]3.8819,[468]3.8794,[469]3.8851,[470]3.8862,[471]3.8881,[472]3.8917,[473]3.8918,[474]3.8914,[475]3.8926,[476]3.8939,[477]3.8966,[478]3.8982,[479]3.9000,[480]3.9012,[481]3.9049,[482]3.9070,[483]3.9110,[484]3.9061,[485]3.9098,[486]3.9112,[487]3.9177,[488]3.9227,[489]3.9283,[490]3.9279,[491]3.9338,[492]3.9379,[493]3.9411,[494]3.9468,[495]3.9537,[496]3.9527,[497]3.9540,[498]3.9564,[499]3.9580,[500]3.9609,[501]3.9609,[502]3.9613,[503]3.9660,[504]3.9721,[505]3.9722,[506]3.9714,[507]3.9724,[508]3.9757,[509]3.9827,[510]3.9850,[511]3.9901,[512]3.9821,[513]3.9786,[514]3.9746,[515]3.9712,[516]3.9698,[517]3.9691,[518]3.9663,[519]3.9607,[520]3.9606,[521]3.9593,[522]3.9541,[523]3.9527,[524]3.9556,[525]3.9524,[526]3.9495,[527]3.9500,[528]3.9488,[529]3.9444,[530]3.9395,[531]3.9352,[532]3.9357,[533]3.9324,[534]3.9300,[535]3.9254,[536]3.9226,[537]3.9170,[538]3.9147,[539]3.9127,[540]3.9136,[541]3.9169,[542]3.9156,[543]3.9163,[544]3.9185,[545]3.9191,[546]3.9193,[547]3.9198,[548]3.9181,[549]3.9127,[550]3.9065,[551]3.9000,[552]3.8954,[553]3.8896,[554]3.8877,[555]3.8828,[556]3.8795,[557]3.8740,[558]3.8741,[559]3.8705,[560]3.8704,[561]3.8725,[562]3.8744,[563]3.8793,[564]3.8831,[565]3.8818,
llama_print_timings: load time = 52984.83 ms
llama_print_timings: sample time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: prompt eval time = 1423222.67 ms / 289280 tokens ( 4.92 ms per token, 203.26 tokens per second)
llama_print_timings: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: total time = 1433867.52 ms / 289281 tokens

Final estimate: PPL over 565 chunks for n_ctx=512 = 3.8818 +/- 0.02300