How to use with MLX LM
Generate or start a chat session
# Install MLX LM
uv tool install mlx-lm
# Interactive chat REPL
mlx_lm.chat --model "RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-mlx-mxfp4"
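For a one-shot completion instead of an interactive session, mlx_lm.generate works the same way (the prompt and token limit below are illustrative):
# One-shot generation (example prompt and --max-tokens value)
mlx_lm.generate --model "RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-mlx-mxfp4" \
    --prompt "Write a Python function that reverses a string." \
    --max-tokens 256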
Run an OpenAI-compatible server
# Install MLX LM
uv tool install mlx-lm
# Start the server (listens on http://localhost:8080 by default)
mlx_lm.server --model "RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-mlx-mxfp4"
# Calling the OpenAI-compatible server with curl
curl -X POST "http://localhost:8080/v1/chat/completions" \
   -H "Content-Type: application/json" \
   --data '{
     "model": "RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-mlx-mxfp4",
     "messages": [
       {"role": "user", "content": "Hello"}
     ]
   }'
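The endpoint follows the OpenAI chat completions schema, so standard parameters such as "stream" should also be accepted; a minimal sketch:
# Streaming variant (assumes the server honors the OpenAI "stream" flag; -N disables curl buffering)
curl -N -X POST "http://localhost:8080/v1/chat/completions" \
   -H "Content-Type: application/json" \
   --data '{
     "model": "RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-mlx-mxfp4",
     "messages": [
       {"role": "user", "content": "Hello"}
     ],
     "stream": true
   }'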
Quick Links

This model was converted to MLX format from lovedheart/Qwen3-Coder-Next-REAP-48B-A3B-GGUF using mlx-lm version 0.30.5.

Original safetensors model from: https://www.modelscope.cn/models/lovedheart/Qwen3-Coder-Next-REAP-48B-A3B/summary

Conversion Command

$ uv run mlx_lm.convert --model ./model/ --mlx-path Qwen3-Coder-Next-REAP-48B-A3B-mlx-mxfp4 -q --q-mode mxfp4 --q-group-size 32
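After conversion, a quick local generation run is a reasonable sanity check (the path matches the --mlx-path above; the prompt is arbitrary):

$ uv run mlx_lm.generate --model ./Qwen3-Coder-Next-REAP-48B-A3B-mlx-mxfp4 --prompt "Hello" --max-tokens 64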
