This is an MXFP4_MOE quantization of the model Huihui-Qwen3-Next-80B-A3B-Instruct-abliterated.

- Format: GGUF
- Model size: 80B params
- Architecture: qwen3next
- Quantization: 4-bit (MXFP4_MOE)
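
A minimal sketch of downloading and running this quant locally with `huggingface_hub` and `llama-cpp-python`, assuming a llama.cpp build with qwen3next support; the GGUF filename below is a guess, so check the repository's file listing for the actual name:

```python
# Minimal sketch: fetch and run this GGUF quant via llama-cpp-python.
# Assumes an llama.cpp build with qwen3next architecture support.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical filename -- the actual GGUF file name may differ.
model_path = hf_hub_download(
    repo_id="noctrex/Huihui-Qwen3-Next-80B-A3B-Instruct-abliterated-MXFP4_MOE-GGUF",
    filename="Huihui-Qwen3-Next-80B-A3B-Instruct-abliterated-MXFP4_MOE.gguf",
)

# Load the model; offload all layers to GPU if available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```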
