---
base_model: internlm/internlm2_5-7b-chat
inference: false
library_name: gguf
license: other
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- quantization
- imat
- imatrix
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---

# internlm2_5-7b-chat-IMat-GGUF
_Llama.cpp imatrix quantization of internlm/internlm2_5-7b-chat_

Original Model: [internlm/internlm2_5-7b-chat](https://huggingface.co/internlm/internlm2_5-7b-chat)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3289](https://github.com/ggerganov/llama.cpp/releases/tag/b3289)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)

- [Files](#files)
    - [IMatrix](#imatrix)
    - [Common Quants](#common-quants)
    - [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
    - [Simple chat template](#simple-chat-template)
    - [Chat template with system prompt](#chat-template-with-system-prompt)
    - [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
    - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
    - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)

---

## Files

### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/internlm2_5-7b-chat-IMat-GGUF/blob/main/imatrix.dat)
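
For reference, importance matrices like this one are produced with llama.cpp's `llama-imatrix` tool (named `imatrix` in older builds), which runs the model over a calibration dataset and records per-tensor activation statistics. A minimal sketch, assuming the BF16 GGUF from this repo and the calibration file linked above (paths are placeholders):
```
# compute an importance matrix from calibration text
./llama-imatrix -m internlm2_5-7b-chat.BF16.gguf -f calibration_datav3.txt -o imatrix.dat
```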

### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [internlm2_5-7b-chat.Q8_0.gguf](https://huggingface.co/legraphista/internlm2_5-7b-chat-IMat-GGUF/blob/main/internlm2_5-7b-chat.Q8_0.gguf) | Q8_0 | 8.22GB | ✅ Available | ⚪ Static | 📦 No |
| [internlm2_5-7b-chat.Q6_K.gguf](https://huggingface.co/legraphista/internlm2_5-7b-chat-IMat-GGUF/blob/main/internlm2_5-7b-chat.Q6_K.gguf) | Q6_K | 6.35GB | ✅ Available | ⚪ Static | 📦 No |
| internlm2_5-7b-chat.Q4_K | Q4_K | - | ⏳ Processing | 🟢 IMatrix | - |
| internlm2_5-7b-chat.Q3_K | Q3_K | - | ⏳ Processing | 🟢 IMatrix | - |
| internlm2_5-7b-chat.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | - |

### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [internlm2_5-7b-chat.BF16.gguf](https://huggingface.co/legraphista/internlm2_5-7b-chat-IMat-GGUF/blob/main/internlm2_5-7b-chat.BF16.gguf) | BF16 | 15.48GB | ✅ Available | ⚪ Static | 📦 No |
| internlm2_5-7b-chat.FP16 | F16 | - | ⏳ Processing | ⚪ Static | - |
| [internlm2_5-7b-chat.Q8_0.gguf](https://huggingface.co/legraphista/internlm2_5-7b-chat-IMat-GGUF/blob/main/internlm2_5-7b-chat.Q8_0.gguf) | Q8_0 | 8.22GB | ✅ Available | ⚪ Static | 📦 No |
| [internlm2_5-7b-chat.Q6_K.gguf](https://huggingface.co/legraphista/internlm2_5-7b-chat-IMat-GGUF/blob/main/internlm2_5-7b-chat.Q6_K.gguf) | Q6_K | 6.35GB | ✅ Available | ⚪ Static | 📦 No |
| [internlm2_5-7b-chat.Q5_K.gguf](https://huggingface.co/legraphista/internlm2_5-7b-chat-IMat-GGUF/blob/main/internlm2_5-7b-chat.Q5_K.gguf) | Q5_K | 5.51GB | ✅ Available | ⚪ Static | 📦 No |
| internlm2_5-7b-chat.Q5_K_S | Q5_K_S | - | ⏳ Processing | ⚪ Static | - |
| internlm2_5-7b-chat.Q4_K | Q4_K | - | ⏳ Processing | 🟢 IMatrix | - |
| internlm2_5-7b-chat.Q4_K_S | Q4_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| internlm2_5-7b-chat.IQ4_NL | IQ4_NL | - | ⏳ Processing | 🟢 IMatrix | - |
| internlm2_5-7b-chat.IQ4_XS | IQ4_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| internlm2_5-7b-chat.Q3_K | Q3_K | - | ⏳ Processing | 🟢 IMatrix | - |
| internlm2_5-7b-chat.Q3_K_L | Q3_K_L | - | ⏳ Processing | 🟢 IMatrix | - |
| internlm2_5-7b-chat.Q3_K_S | Q3_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| internlm2_5-7b-chat.IQ3_M | IQ3_M | - | ⏳ Processing | 🟢 IMatrix | - |
| internlm2_5-7b-chat.IQ3_S | IQ3_S | - | ⏳ Processing | 🟢 IMatrix | - |
| internlm2_5-7b-chat.IQ3_XS | IQ3_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| internlm2_5-7b-chat.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| internlm2_5-7b-chat.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | - |
| internlm2_5-7b-chat.Q2_K_S | Q2_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| internlm2_5-7b-chat.IQ2_M | IQ2_M | - | ⏳ Processing | 🟢 IMatrix | - |
| internlm2_5-7b-chat.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 IMatrix | - |
| internlm2_5-7b-chat.IQ2_XS | IQ2_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| internlm2_5-7b-chat.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| internlm2_5-7b-chat.IQ1_M | IQ1_M | - | ⏳ Processing | 🟢 IMatrix | - |
| internlm2_5-7b-chat.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 IMatrix | - |

## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/internlm2_5-7b-chat-IMat-GGUF --include "internlm2_5-7b-chat.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/internlm2_5-7b-chat-IMat-GGUF --include "internlm2_5-7b-chat.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```

---

## Inference

### Simple chat template
```
<s><|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
<|im_start|>user
{next_user_prompt}<|im_end|>

```

### Chat template with system prompt
```
<s><|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>
<|im_start|>user
{next_user_prompt}<|im_end|>

```
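
For example, here is the template filled in for a single turn, ready for the model to complete (the system and user text are illustrative). The model generates its answer after the final `<|im_start|>assistant` line:
```
<s><|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
What is 2+2?<|im_end|>
<|im_start|>assistant

```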

### Llama.cpp
```
llama.cpp/main -m internlm2_5-7b-chat.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
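
A fuller invocation that fills in the chat template inline could look like the sketch below (prompt contents are illustrative; `-e` tells llama.cpp to expand the `\n` escapes in the prompt string):
```
llama.cpp/main -m internlm2_5-7b-chat.Q8_0.gguf --color -i -e \
  -p "<s><|im_start|>user\nWhat is 2+2?<|im_end|>\n<|im_start|>assistant\n"
```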

---

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).
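
For context, the imatrix is consumed at quantization time, not at inference time. A sketch of how an imatrix-aware quant is typically produced with llama.cpp's `llama-quantize` tool (named `quantize` in older builds; file names are placeholders):
```
# imatrix quant: pass the importance matrix to the quantizer
./llama-quantize --imatrix imatrix.dat internlm2_5-7b-chat.BF16.gguf internlm2_5-7b-chat.Q4_K.gguf Q4_K

# static quant: the same command without --imatrix
./llama-quantize internlm2_5-7b-chat.BF16.gguf internlm2_5-7b-chat.Q8_0.gguf Q8_0
```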

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
    - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
    - Download the appropriate zip for your system from the latest release
    - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `internlm2_5-7b-chat.Q8_0`)
3. Run `gguf-split --merge internlm2_5-7b-chat.Q8_0/internlm2_5-7b-chat.Q8_0-00001-of-XXXXX.gguf internlm2_5-7b-chat.Q8_0.gguf`, as in the sketch below
    - Make sure to point `gguf-split` to the first chunk of the split.
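
A minimal sketch of step 3 (the `-00001-of-00002` chunk name is hypothetical; use the actual file names from your download):
```
# merge all chunks into a single GGUF by pointing at the first chunk
gguf-split --merge \
  internlm2_5-7b-chat.Q8_0/internlm2_5-7b-chat.Q8_0-00001-of-00002.gguf \
  internlm2_5-7b-chat.Q8_0.gguf
```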

---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!