Mixed Precision GGUF layer quantization of Qwen2.5-Omni-7B by Qwen

Original model: https://huggingface.co/Qwen/Qwen2.5-Omni-7B

The hybrid quant employs different quantization levels on a per-layer basis to achieve both high performance and small file size at the same time. This particular quant achieves a ~5.7e9 B GGUF with the same perplexity as a ~6.3e9 B Q6_K GGUF. The quants employed are all K-quants, avoiding the slow processing of IQ quants on CPUs and older GPUs. For this file the layer quants are as follows:

   LAYER_TYPES='[
   [0 ,"Q6_K"  ],[1 ,"Q5_K_M"],[2 ,"Q4_K_M"],[3 ,"Q4_K_M"],[4 ,"Q4_K_M"],[5 ,"Q4_K_M"],[6 ,"Q5_K_S"],
   [7 ,"Q5_K_S"],[8 ,"Q5_K_S"],[9 ,"Q5_K_S"],[10,"Q5_K_M"],[11,"Q5_K_S"],[12,"Q5_K_M"],[13,"Q5_K_S"],
   [14,"Q5_K_M"],[15,"Q5_K_M"],[16,"Q5_K_M"],[17,"Q5_K_M"],[18,"Q5_K_M"],[19,"Q5_K_M"],[20,"Q5_K_M"],
   [21,"Q5_K_M"],[22,"Q6_K"  ],[23,"Q6_K"  ],[24,"Q6_K"  ],[25,"Q8_0"  ],[26,"Q8_0"  ],[27,"Q8_0"  ]
   ]'
   FLAGS="--token-embedding-type Q6_K --output-tensor-type Q6_K"
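The LAYER_TYPES string above is plain JSON, so a layer map can be sanity-checked before quantizing. A minimal Python sketch (not part of the quantization toolchain, just a check of the recipe above) verifies that every layer 0..27 is covered exactly once and summarizes how many layers use each quant level:

```python
import json
from collections import Counter

# Layer map for the Q6_K_H quant, copied verbatim from the recipe above.
LAYER_TYPES = '''[
[0,"Q6_K"],[1,"Q5_K_M"],[2,"Q4_K_M"],[3,"Q4_K_M"],[4,"Q4_K_M"],[5,"Q4_K_M"],[6,"Q5_K_S"],
[7,"Q5_K_S"],[8,"Q5_K_S"],[9,"Q5_K_S"],[10,"Q5_K_M"],[11,"Q5_K_S"],[12,"Q5_K_M"],[13,"Q5_K_S"],
[14,"Q5_K_M"],[15,"Q5_K_M"],[16,"Q5_K_M"],[17,"Q5_K_M"],[18,"Q5_K_M"],[19,"Q5_K_M"],[20,"Q5_K_M"],
[21,"Q5_K_M"],[22,"Q6_K"],[23,"Q6_K"],[24,"Q6_K"],[25,"Q8_0"],[26,"Q8_0"],[27,"Q8_0"]
]'''

layers = json.loads(LAYER_TYPES)

# Every layer index 0..27 must appear exactly once, in order.
assert [i for i, _ in layers] == list(range(28))

# Count how many layers are assigned to each quant level.
counts = Counter(q for _, q in layers)
print(counts)
```

The summary makes the shape of the recipe visible at a glance: lighter quants in the early-middle layers, heavier Q6_K and Q8_0 toward the final layers.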

A second, smaller Q4_K_H quant is also available. Its layer map uses a custom Q4_K_L type, defined as:

   Q4_K_L : Q4_K_M + attn_o = Q6_K

   LAYER_TYPES='[
   [0 ,"Q4_K_L"],[1 ,"Q4_K_M"],[2 ,"Q4_K_S"],[3 ,"Q4_K_M"],[4 ,"Q4_K_S"],[5 ,"Q4_K_M"],[6 ,"Q4_K_S"],
   [7 ,"Q4_K_S"],[8 ,"Q4_K_M"],[9 ,"Q4_K_S"],[10,"Q4_K_M"],[11,"Q4_K_S"],[12,"Q4_K_M"],[13,"Q4_K_S"],
   [14,"Q4_K_M"],[15,"Q4_K_S"],[16,"Q4_K_M"],[17,"Q4_K_S"],[18,"Q4_K_M"],[19,"Q4_K_M"],[20,"Q4_K_M"],
   [21,"Q4_K_L"],[22,"Q4_K_M"],[23,"Q4_K_L"],[24,"Q4_K_M"],[25,"Q4_K_L"],[26,"Q4_K_L"],[27,"Q5_K_M"]
   ]'
   FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K --layer-types-high"

The Q4_K_H layer quants were reused from the Qwen 2.5 Coder 7B Q4_K_H profile.

Comparison:

| Quant  | Size (B) | PPL | Comment                                        |
|--------|----------|-----|------------------------------------------------|
| IQ4_XS | 4.25e9   | 9.0 | -                                              |
| Q4_K_H | 4.8e9    | 9.2 | Hybrid quant with Q4_K embedding, Q6_K output  |
| Q6_K   | 6.25e9   | 8.9 | Q6_K with default embedding and output         |
| Q6_K_H | 5.73e9   | 9.0 | Hybrid quant with Q6_K embedding, Q6_K output  |
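The size/quality trade-off in the comparison above can be made concrete with a little arithmetic (numbers taken directly from the table; sizes in bytes):

```python
# (size in bytes, perplexity) for each quant, from the comparison above.
quants = {
    "IQ4_XS": (4.25e9, 9.0),
    "Q4_K_H": (4.8e9, 9.2),
    "Q6_K":   (6.25e9, 8.9),
    "Q6_K_H": (5.73e9, 9.0),
}

size_q6k, ppl_q6k = quants["Q6_K"]
size_h, ppl_h = quants["Q6_K_H"]

# Fractional file-size reduction of the hybrid quant relative to plain Q6_K.
savings = (size_q6k - size_h) / size_q6k
print(f"Q6_K_H is {savings:.1%} smaller than Q6_K "
      f"at a PPL cost of {ppl_h - ppl_q6k:+.1f}")
```

In other words, the Q6_K_H hybrid gives roughly an 8% file-size reduction over Q6_K for a 0.1 increase in perplexity.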

Usage:

Qwen2.5-Omni-7B is a vision- and audio-capable model. Used together with its multimedia projector layers, it can process image, audio, and text inputs and generate text outputs. The mmproj file is made available in this repository. To test vision and/or audio mode, follow the mtmd README in the tools directory of the llama.cpp source tree: https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md
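A typical vision-mode invocation with the llama-mtmd-cli tool from llama.cpp might look like the following sketch; the model and mmproj file names match this repository's downloads, while the image path and prompt are placeholders:

```shell
# Sketch only: photo.jpg and the prompt are placeholders, not part of this repo.
# llama-mtmd-cli ships with llama.cpp (see tools/mtmd).
./llama-mtmd-cli \
  -m Qwen2.5-Omni-7B.Q6_K_H.gguf \
  --mmproj Qwen2.5-Omni-7B.mmproj.gguf \
  --image photo.jpg \
  -p "Describe this image."
```

For audio inputs, consult the mtmd README linked above for the corresponding option.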

The minimum llama.cpp build to run the Qwen2.5-Omni series is 6915; build 7003 or later is recommended.

Benchmarks:

A full set of audio and vision benchmarks with corrected inference for Qwen2.5 Omni are given here: https://huggingface.co/spaces/steampunque/benchlm

Download the file from below:

| Link                        | Type   | Size (B) | Notes                          |
|-----------------------------|--------|----------|--------------------------------|
| Qwen2.5-Omni-7B.Q4_K_H.gguf | Q4_K_H | 4.8e9    | 0.93e9 B smaller than Q6_K_H   |
| Qwen2.5-Omni-7B.Q6_K_H.gguf | Q6_K_H | 5.73e9   | 0.5e9 B smaller than Q6_K      |
| Qwen2.5-Omni-7B.mmproj.gguf | F16    | 2.64e9   | multimedia projector           |

A discussion thread about the hybrid layer quant approach can be found here on the llama.cpp git repository:

https://github.com/ggml-org/llama.cpp/discussions/13040
