Nemotron3-Nano-4B-Uncensored-HauhauCS-Aggressive-GenRM

Join the Discord for updates, roadmaps, projects, or just to chat.

This is NOT the recommended release. This repo exists purely for A/B comparison testing. For the fully uncensored model with GenRM removed, use Nemotron3-Nano-4B-Uncensored-HauhauCS-Aggressive instead.

What is this?

This is an earlier abliterated build that still has NVIDIA's GenRM (generative reward model) active. The abliteration itself scores 0/465 refusals — same as the main release — but GenRM acts as a second layer of censorship that re-introduces refusals at generation time, raising the effective refusal count to roughly 10/465.
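A score like "0/465" or "~10/465" is just a refusal tally over a fixed prompt set. As a rough illustration, here is a minimal sketch of that kind of tally; the marker list and responses are hypothetical, and real evaluations typically use a curated benchmark and often a judge model rather than keyword matching.

```python
# Hypothetical substring markers for detecting refusals; a real harness
# would use a curated benchmark and/or a judge model instead.
REFUSAL_MARKERS = (
    "i can't help with",
    "i cannot assist",
    "i won't provide",
)

def is_refusal(response: str) -> bool:
    """Flag a response as a refusal if it contains a known marker."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def refusal_rate(responses: list[str]) -> str:
    """Return a tally like '10/465' over a batch of model responses."""
    refused = sum(is_refusal(r) for r in responses)
    return f"{refused}/{len(responses)}"
```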

Why does this exist?

To show the difference GenRM makes. This is the first publicly available abliteration of a GenRM-equipped model, so this comparison build is useful for research into how GenRM behaves at generation time.

How GenRM manifests

When GenRM intervenes, you'll see a clear 180-degree reversal between the Chain-of-Thought and the final output. The model will reason through the request normally in its thinking block, then GenRM kicks in and the visible output contradicts what the CoT was building toward — typically with a refusal or deflection.

This doesn't happen on every prompt — only on topics where GenRM's reward signal is strong enough to override the (abliterated) base behavior.
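To spot these CoT-vs-output reversals programmatically, it helps to separate the thinking block from the visible answer so the two can be compared side by side. A minimal sketch, assuming the model wraps its reasoning in `<think>...</think>` tags (verify against this model's actual chat template):

```python
import re

# Assumes reasoning is delimited by <think>...</think>; adjust the
# pattern if this model's chat template uses different markers.
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_cot(text: str) -> tuple[str, str]:
    """Split a raw completion into (chain_of_thought, final_answer)."""
    match = THINK_RE.search(text)
    if not match:
        return "", text.strip()
    cot = match.group(1).strip()
    final = text[match.end():].strip()
    return cot, final
```

With the two halves separated, a refusal check can be run on each independently, flagging cases where the CoT cooperates but the final answer refuses.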

Downloads

Only an IQ2_M quant is provided — this build is for comparison testing, not daily use.

Specs

  • 3.97B parameters
  • Hybrid Mamba2-Transformer architecture (42 layers: 21 Mamba2, 17 MLP, 4 Attention)
  • 262K native context
  • Thinking/reasoning mode (toggleable)
  • Tool calling support
  • Based on nvidia/NVIDIA-Nemotron-3-Nano-4B-BF16
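The toggleable thinking mode is typically controlled through the system prompt. A minimal sketch of building a chat request with the toggle, assuming the "/think" and "/no_think" control strings used by recent Nemotron releases (check this model's chat template to confirm):

```python
# "/think" and "/no_think" are assumed control strings; verify them
# against the model's chat template before relying on this.
def build_messages(user_prompt: str, thinking: bool) -> list[dict]:
    """Build a chat message list with the reasoning toggle set."""
    control = "/think" if thinking else "/no_think"
    return [
        {"role": "system", "content": control},
        {"role": "user", "content": user_prompt},
    ]
```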

Use the real release instead

Nemotron3-Nano-4B-Uncensored-HauhauCS-Aggressive — full release with GenRM removed, multiple quant formats, 0/465 refusals.

Repo metadata

  • Downloads last month: 639
  • Format: GGUF (2-bit, IQ2_M)
  • Model size: 4B params
  • Architecture: nemotron_h
