Quants of https://huggingface.co/coder3101/gemma-4-26B-A4B-it-heretic, using Unsloth's imatrix.
Not sure how many different quants I can upload as I am severely constrained by my PC's storage space.
Usage
Google recommends the following sampler settings:
temperature = 1.0
top_k = 64
top_p = 0.95
min_p = 0.0
For creative writing, I use:
temperature = 1.0
top_k = 0
top_p = 1.0
min_p = 0.05
top-n-sigma = 1.0
adaptive-target = 0.7
adaptive-decay = 0.9
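As a rough illustration of what the min_p = 0.05 setting does, here is a toy sketch of min-p filtering in Python. This is not llama.cpp's actual sampler code, and the top-n-sigma and adaptive samplers are omitted:

```python
# Toy sketch of min-p filtering: keep only tokens whose probability is at
# least min_p times the most likely token's probability, then renormalize.
def min_p_filter(probs, min_p=0.05):
    threshold = min_p * max(probs.values())
    kept = {tok: p for tok, p in probs.items() if p >= threshold}
    total = sum(kept.values())  # renormalize the survivors
    return {tok: p / total for tok, p in kept.items()}

probs = {"the": 0.50, "a": 0.30, "cat": 0.15, "zzz": 0.001}
filtered = min_p_filter(probs, 0.05)  # threshold = 0.05 * 0.50 = 0.025
# "zzz" (0.001) falls below the threshold and is dropped
```

With top_k = 0 and top_p = 1.0 those two samplers are disabled, so min-p does the main truncation: it scales with the model's confidence instead of using a fixed cutoff.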
For the image encoder:
image-min-tokens: 70
image-max-tokens: 1120
While its reasoning is not excessive, sometimes I do want to limit it:
predict: 16384
reasoning-budget: 8192
reasoning-budget-message: "... I think I've explored this enough, time to respond."
Within llama.cpp and koboldcpp, ensure that --swa-full is enabled, as this model uses Sliding Window Attention (SWA).
Thinking
In order to enable thinking, add <|think|> at the top of your system prompt:
<|think|>
You are a helpful assistant.
Conversely, to disable thinking simply omit <|think|> from your system
prompt:
You are a helpful assistant.
To parse the thinking, use the following in SillyTavern or your platform of choice:
- prefix: <|channel>thought
- postfix: <channel|>
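If your platform has no built-in reasoning parser, splitting on the prefix/postfix above is straightforward. A minimal sketch (not SillyTavern's implementation; `split_reasoning` is an illustrative helper):

```python
import re

# The reasoning delimiters from the section above.
PREFIX, POSTFIX = "<|channel>thought", "<channel|>"

def split_reasoning(text):
    """Return (thought, answer); thought is "" if no reasoning block is found."""
    m = re.search(re.escape(PREFIX) + r"(.*?)" + re.escape(POSTFIX), text, re.DOTALL)
    if not m:
        return "", text
    thought = m.group(1).strip()
    answer = (text[:m.start()] + text[m.end():]).strip()
    return thought, answer
```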
Reproduction
You can read REPRODUCE.md in this repo's "Files and versions" tab to see how I made the quants. The mmproj files are also located there.
FAQ
What quant should I use?
The largest model that fits inside your VRAM.
As a rule of thumb, context costs roughly 1 GB of VRAM per 8K tokens: context / 8192 = GB, so 16384 / 8192 = 2 GB.
As an example to estimate VRAM cost:
+ Q3_K_M model (12.9 GB)
+ 16K context (2 GB)
+ Q8_0 mmproj (800 MB)
= ~15.7 GB usage.
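The arithmetic above can be wrapped in a tiny helper. This is only a back-of-the-envelope sketch using this model's ~1 GB per 8K tokens rule of thumb, not a general VRAM calculator:

```python
# Rough VRAM estimate: model weights + KV cache + optional mmproj.
# Assumes ~1 GB of KV cache per 8192 tokens of context for this model.
def estimate_vram_gb(model_gb, context_tokens, mmproj_gb=0.0):
    context_gb = context_tokens / 8192  # e.g. 16384 tokens -> 2 GB
    return model_gb + context_gb + mmproj_gb

total = estimate_vram_gb(model_gb=12.9, context_tokens=16384, mmproj_gb=0.8)
# ~15.7 GB, matching the Q3_K_M example above
```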
If you mainly do OCR tasks, prefer a lower text-model quant and a higher mmproj quant (bf16/f16/f32), as encoders are far more sensitive to quantization.
F16 vs BF16
BF16 keeps FP32's 8-bit exponent, so it covers the same dynamic range as FP32 at the cost of mantissa precision.
Do you have support for BF16 acceleration? If yes, use BF16.
Indicators of BF16 support include:
- AVX512-BF16 (CPU)
- SPV_KHR_bfloat16 (Vulkan)
- NVIDIA RTX 30 series or newer
- AMD Radeon RX 7000 series or newer
- Intel Xe A series or newer
If you like to tinker (like me) by running LLMs on an Intel N5000 CPU with Intel UHD Graphics 605 over Vulkan 1.3, F16 is going to run better.
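The range-versus-precision trade-off can be demonstrated with plain bit manipulation. A toy sketch, assuming `to_bf16` as an illustrative helper (not any library function) that emulates bfloat16 by rounding a float32 to its top 16 bits:

```python
import struct

def to_bf16(x):
    """Emulate bfloat16: round a float32 to nearest-even on its top 16 bits."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    bits = (bits + 0x7FFF + ((bits >> 16) & 1)) & 0xFFFF0000
    (y,) = struct.unpack("<f", struct.pack("<I", bits))
    return y

# BF16 keeps FP32's 8-bit exponent, so huge values survive
# (FP16 overflows past ~65504)...
big = to_bf16(3.0e38)

# ...but only ~7 mantissa bits remain, so fine detail is lost:
# 1 + 2**-10 rounds back to exactly 1.0.
coarse = to_bf16(1.0009765625)
```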
Why don't you use the imatrix for the q8_0 quant?
As explained by the wonderful team mradermacher:
Q8_0 imatrix quants do not exist - some quanters claim otherwise, but Q8_0 ggufs do not contain any tensor type that uses the imatrix data, although technically it might be possible to do so.
--- https://huggingface.co/mradermacher/model_requests
Model tree for nohurry/gemma-4-26B-A4B-it-heretic-GGUF
Base model: google/gemma-4-26B-A4B-it