This is an MXFP4_MOE quantization of the model DeepSeek-MoE-16B-Chat.

Original model: https://huggingface.co/deepseek-ai/deepseek-moe-16b-chat

- Format: GGUF
- Quantization: MXFP4_MOE (4-bit)
- Model size: 16B params
- Architecture: deepseek


This repository: https://huggingface.co/noctrex/DeepSeek-MoE-16B-Chat-MXFP4_MOE-GGUF
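
As a quick way to try the quant, here is a minimal sketch using llama-cpp-python. This assumes your llama-cpp-python is built against a llama.cpp version recent enough to support MXFP4 quants, and the filename glob is an assumption: match it against the actual .gguf file listed in this repo.

```python
# Minimal sketch: download and run this MXFP4_MOE GGUF via llama-cpp-python.
# Assumptions: MXFP4 support in the underlying llama.cpp build, and that the
# filename glob below matches the .gguf file published in this repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="noctrex/DeepSeek-MoE-16B-Chat-MXFP4_MOE-GGUF",
    filename="*MXFP4_MOE*.gguf",  # glob resolved against the repo's files
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```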