How to use from the MLX library
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-mlx-mxfp4")

prompt = "Write a story about Einstein"
messages = [{"role": "user", "content": prompt}]
# Apply the model's chat template and add the assistant turn
# marker so generation starts in the right place
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
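
mlx-lm also supports streaming generation, which is handier for interactive use. The following is an illustrative variant of the snippet above, assuming the stream_generate helper exported by recent mlx-lm releases; the max_tokens value is an arbitrary choice.

from mlx_lm import load, stream_generate

model, tokenizer = load("RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-mlx-mxfp4")

messages = [{"role": "user", "content": "Write a story about Einstein"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print each chunk of text as soon as it is generated
for response in stream_generate(model, tokenizer, prompt=prompt, max_tokens=512):
    print(response.text, end="", flush=True)
print()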

This model was converted to MLX format from lovedheart/Qwen3-Coder-Next-REAP-48B-A3B-GGUF using mlx-lm version 0.30.5.

Original safetensors model from: https://www.modelscope.cn/models/lovedheart/Qwen3-Coder-Next-REAP-48B-A3B/summary

Conversion Command

$ uv run mlx_lm.convert --model ./model/ --mlx-path Qwen3-Coder-Next-REAP-48B-A3B-mlx-mxfp4 -q --q-mode mxfp4 --q-group-size 32
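
To sanity-check the result, mlx_lm.convert records the quantization settings in the converted model's config.json. A minimal sketch, assuming the output directory produced by the command above (the exact keys may vary across mlx-lm versions):

import json

# Read the config written by mlx_lm.convert
with open("Qwen3-Coder-Next-REAP-48B-A3B-mlx-mxfp4/config.json") as f:
    config = json.load(f)

# Expect something like {"group_size": 32, "mode": "mxfp4", ...},
# matching the flags passed to the conversion command
print(config.get("quantization"))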
