# Gemma 4 E2B - Uncensored
This is a surgically uncensored version of google/gemma-4-E2B-it, achieved through Arbitrary-Rank Ablation (ARA).
## 🔬 Ablation Methodology
This model was uncensored using the Heretic framework. Instead of fine-tuning on unsafe data, the model's internal "refusal vector" was mathematically located and ablated directly at the weight-matrix level (a minimal sketch of the operation follows the trial details below).
- Technique: Arbitrary-Rank Ablation (ARA)
- Trial Selected: Trial 71
- KL Divergence: 0.1650
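For readers new to the technique, the core operation behind this kind of ablation can be written as a closed-form projection on the weights. The sketch below is a generic illustration of the usual abliteration formulation, not Heretic's actual code; the `ablate_subspace` helper and the way the refusal basis is estimated are assumptions for illustration.

```python
import torch

def ablate_subspace(weight: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    """Project a refusal subspace out of a weight matrix's output space.

    weight: (d_out, d_in) projection matrix (attention out-proj, MLP proj, ...).
    basis:  (d_out, k) orthonormal basis for the refusal subspace, typically
            estimated from activation differences on harmful vs. harmless
            prompts. k = 1 is classic single-direction abliteration;
            k > 1 is what makes the rank "arbitrary" in ARA.
    """
    # W' = (I - B B^T) W: the layer's outputs lose their components along
    # span(B), so it can no longer write the refusal directions into the
    # residual stream.
    return weight - basis @ (basis.T @ weight)
```

Because the edit is a one-shot linear projection rather than gradient training, it needs no unsafe fine-tuning data, which is what "ablated at the matrix level" refers to.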
**Why Trial 71?** During comparative analysis, more aggressive ablation (e.g., Trial 129, KL: 0.3542) removed all censorship but caused severe semantic drift and "brain damage" (e.g., hallucinating when asked technical questions). Trial 71 was selected because its KL divergence of 0.1650 sits in the "Goldilocks" zone: it bypasses the safety guardrails while preserving the model's core logic, spatial reasoning, and technical vocabulary.
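The KL divergence used to rank trials can be estimated by comparing the next-token distributions of the base and ablated models on a shared set of harmless prompts. This is a generic sketch of that measurement; Heretic's exact evaluation protocol (prompt set, positions, reduction) is not specified on this card, and `mean_kl` is a hypothetical helper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_kl(base_model, ablated_model, input_ids: torch.Tensor) -> float:
    """Mean per-position KL(base || ablated) over a batch of token ids.

    A small value (0.1650 for Trial 71) means the ablated model's output
    distributions stay close to the original's, i.e. little collateral
    damage to general capability.
    """
    # Flatten (batch, seq, vocab) -> (batch*seq, vocab) so the reduction
    # averages over every token position.
    p = F.log_softmax(base_model(input_ids).logits, dim=-1).flatten(0, -2)
    q = F.log_softmax(ablated_model(input_ids).logits, dim=-1).flatten(0, -2)
    # log_target=True: both arguments are log-probabilities.
    return F.kl_div(q, p, log_target=True, reduction="batchmean").item()
```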
## 💻 Hardware & Build Details
This model demonstrates that advanced Representation Engineering can be done entirely locally on consumer hardware.
- Hardware: NVIDIA RTX 3080 (10GB VRAM)
- Processing Time: 6 hours, 13 minutes
- Framework: Heretic (Experimental ARA Branch / PR #211)
- VRAM Optimization: To prevent out-of-memory (OOM) crashes on a 10GB GPU during the heavy matrix calculations, a community-discovered VRAM trick was used: `mlp.down_proj` was removed from the `target_components` in the ablation configuration. Combined with 4-bit quantization and targeted 16-bit VRAM mapping (a sketch of this setup follows the list), this allowed the matrix math to fit cleanly within the 10GB VRAM ceiling.
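As a rough illustration of the memory setup described above, the snippet below uses the standard Transformers + bitsandbytes 4-bit loading path. The `target_components` list only mirrors the names quoted on this card; the actual keys and schema Heretic expects are assumptions here.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit weights with 16-bit compute keeps the working set inside ~10 GB.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-4-E2B-it",
    quantization_config=bnb,
    device_map="auto",  # spills layers to CPU if the GPU ceiling is hit
)

# Illustrative only: Heretic's real configuration format may differ.
target_components = [
    "self_attn.o_proj",  # attention output projections remain targeted
    # "mlp.down_proj",   # removed, per the OOM workaround described above
]
```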
## 📁 Files Provided
Both the raw "source code" and the "compiled executable" are provided for reproducibility:
- `model.safetensors` & config files (for native Python/Transformers integration or further fine-tuning)
- `gemma-4-e2b-uncensored.gguf` (f16, ready for Ollama, LM Studio, etc.)
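A minimal loading sketch for the safetensors variant, assuming the repository id shown on this page (the prompt is arbitrary):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Kasper-Bankler/gemma-4-E2B-uncensored"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

# Build a chat-formatted prompt with the tokenizer's chat template.
msgs = [{"role": "user", "content": "Summarize how rank-k weight ablation works."}]
ids = tok.apply_chat_template(msgs, add_generation_prompt=True, return_tensors="pt").to(model.device)

out = model.generate(ids, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```

For the GGUF, point Ollama or LM Studio at the file as with any other f16 GGUF.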
## ⚠️ Disclaimer and Terms of Use
1. **Academic Research Context.** This model was developed exclusively as a university research project to study Representation Engineering, Arbitrary-Rank Ablation (ARA), and the mechanical nature of Large Language Model alignment. It is intended strictly for academic, educational, and research purposes.
2. **Removed Safety Guardrails.** Because this model has been intentionally abliterated (uncensored) at the matrix level, it no longer adheres to standard safety guidelines. It can and will generate content that may be considered offensive, harmful, explicit, or dangerous if prompted to do so.
3. **No Liability for Misuse.** By downloading or interacting with this model, you assume full responsibility for how you use it. The creator of this model assumes absolutely no liability for any consequences, damages, or harm resulting from the use of this model or the content it generates. You are strictly prohibited from using this model to facilitate illegal acts, cyberattacks, or real-world harm.
4. **Factual Inaccuracy and Hallucinations.** This is a small 2-billion-parameter model. Without its standard RLHF training, it is highly prone to severe semantic drift and aggressive hallucinations when pushed outside its core knowledge domains. Do not rely on this model for factual accuracy, and under no circumstances should it be used for medical, legal, or financial advice.
Use at your own risk.