gemma-4-E2B-it-Uncensored-MAX-GGUF

gemma-4-E2B-it-Uncensored-MAX is an uncensored variant built on top of google/gemma-4-E2B-it. It applies refusal-direction analysis and abliteration-based training to significantly reduce the model's internal refusal behaviors while preserving the reasoning and instruction-following strengths of the original architecture. The result is a capable effective-2B-parameter (E2B) language model optimized for detailed responses and improved instruction adherence.
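Abliteration works by estimating a "refusal direction" in the model's hidden-state space and projecting that component out of the activations. A minimal NumPy sketch of the projection step (the function name and toy dimensions are illustrative, not taken from the actual training code):

```python
import numpy as np

def ablate_direction(hidden: np.ndarray, refusal_dir: np.ndarray) -> np.ndarray:
    """Remove the component of each hidden state along the refusal direction."""
    r = refusal_dir / np.linalg.norm(refusal_dir)  # unit-normalise
    # Subtract the projection of every row of `hidden` onto r.
    return hidden - np.outer(hidden @ r, r)

# Toy example: a batch of 4 hidden states of width 8.
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))
r = rng.normal(size=8)

h_ablated = ablate_direction(h, r)
# After ablation, every hidden state is orthogonal to the refusal direction.
print(np.allclose(h_ablated @ (r / np.linalg.norm(r)), 0.0))  # True
```

In practice this projection is applied to (or baked into) the model's weights across layers rather than computed at inference time.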

Model Files

| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| gemma-4-E2B-it-Uncensored-MAX.BF16.gguf | BF16 | 9.31 GB | Download |
| gemma-4-E2B-it-Uncensored-MAX.F16.gguf | F16 | 9.31 GB | Download |
| gemma-4-E2B-it-Uncensored-MAX.F32.gguf | F32 | 18.6 GB | Download |
| gemma-4-E2B-it-Uncensored-MAX.Q8_0.gguf | Q8_0 | 4.95 GB | Download |
| gemma-4-E2B-it-Uncensored-MAX.mmproj-bf16.gguf | mmproj-bf16 | 987 MB | Download |
| gemma-4-E2B-it-Uncensored-MAX.mmproj-f16.gguf | mmproj-f16 | 987 MB | Download |
| gemma-4-E2B-it-Uncensored-MAX.mmproj-f32.gguf | mmproj-f32 | 1.9 GB | Download |
| gemma-4-E2B-it-Uncensored-MAX.mmproj-q8_0.gguf | mmproj-q8_0 | 557 MB | Download |

Quants Usage

(sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants)
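As a rough rule of thumb, a quant's effective bits-per-weight follows from its file size and the ~5B parameter count listed on this card; a quick sketch (decimal GB, as in the table):

```python
def bits_per_weight(file_size_gb: float, n_params_b: float = 5.0) -> float:
    """Approximate stored bits per parameter: file size in bits / parameter count."""
    bits = file_size_gb * 1e9 * 8  # GB -> bits
    return bits / (n_params_b * 1e9)

print(round(bits_per_weight(4.95), 2))  # Q8_0: ~7.92 bits/weight
print(round(bits_per_weight(9.31), 2))  # F16:  ~14.9 bits/weight
```

The figure comes out slightly below the nominal width (8 or 16 bits) or slightly above it depending on metadata overhead and which tensors are kept at higher precision.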

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):


GGUF

Model size: 5B params
Architecture: gemma4
Available precisions: 8-bit, 16-bit, 32-bit


Model tree for prithivMLmods/gemma-4-E2B-it-Uncensored-MAX-GGUF

Quantized (1): this model

Collection including prithivMLmods/gemma-4-E2B-it-Uncensored-MAX-GGUF