Instructions for using RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- MLX
How to use RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5 with MLX:
```python
# Make sure mlx-lm is installed
# pip install --upgrade mlx-lm

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5")

prompt = "Write a story about Einstein"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)
text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- Pi
How to use RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5 with Pi:
Start the MLX server
```shell
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5"
```
Configure the model in Pi
```shell
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add to `~/.pi/agent/models.json`:

```json
{
  "providers": {
    "mlx-lm": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5" }
      ]
    }
  }
}
```

Run Pi
```shell
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5 with Hermes Agent:
Start the MLX server
```shell
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5"
```
Configure Hermes
```shell
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5
```
Run Hermes
```shell
hermes
```
- MLX LM
How to use RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5 with MLX LM:
Generate or start a chat session
```shell
# Install MLX LM
uv tool install mlx-lm

# Interactive chat REPL
mlx_lm.chat --model "RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5"
```
Run an OpenAI-compatible server
```shell
# Install MLX LM
uv tool install mlx-lm

# Start the server
mlx_lm.server --model "RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5"

# Call the OpenAI-compatible server with curl (mlx_lm.server listens on port 8080 by default)
curl -X POST "http://localhost:8080/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
```
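Because the server speaks the OpenAI chat-completions protocol, any OpenAI-style client can talk to it. A minimal sketch using only the Python standard library, assuming the server is running locally on its default port (8080):

```python
import json
from urllib import request

# Build an OpenAI-style chat-completions request for the local server.
payload = {
    "model": "RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5",
    "messages": [{"role": "user", "content": "Hello"}],
}
req = request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is running:
# with request.urlopen(req) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
#     print(reply)
```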
Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5
This model was quantized using oQ mixed-precision quantization.
Quantization details
- Model type: qwen3_next
- Bits: 3
- Group size: 64
- Format: MLX safetensors
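As a rough sanity check on the file sizes, grouped quantization stores the packed low-bit weights plus a per-group scale and bias. A back-of-the-envelope estimate (assuming a 16-bit scale and 16-bit bias per 64-weight group and roughly 48B weights, and ignoring the layers that the oQ3.5 mix keeps at higher precision):

```python
# Effective bits per weight for grouped 3-bit quantization:
# 3 packed bits, plus a 16-bit scale and 16-bit bias shared by
# each group of 64 weights.
bits, group_size = 3, 64
effective_bits = bits + (16 + 16) / group_size   # 3.5 bits/weight
params = 48e9                                    # ~48B weights
est_gb = params * effective_bits / 8 / 1e9       # ~21 GB
```

That lands near the 20.86 GB of the pure oQ3 file below; the extra higher-precision layers in the oQ3.5 mix account for its 22.41 GB.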
Benchmark
| Model | File size | MMLU | JMMLU | HELLASWAG | GSM8K | ARC_CHALLENGE |
|---|---|---|---|---|---|---|
| Qwen3-Coder-Next-REAP-48B-A3B-oQ3 | 20.86 GB | 64.3% | 56.0% | 80.3% | 94.3% | 85.7% |
| Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5 | 22.41 GB | 65.0% | 57.7% | 83.3% | 95.0% | 84.3% |
| Qwen3-Coder-Next-REAP-48B-A3B-oQ4 | 26.20 GB | 62.0% | 57.3% | 81.0% | 95.3% | 85.7% |
| Qwen3-Coder-Next-REAP-48B-A3B-mlx-mxfp4 | 24.20 GB | 62.7% | 54.7% | 76.0% | 93.3% | 85.7% |
| Qwen3-Coder-Next-REAP-q4-mlx | 35.18 GB | 66.3% | 62.0% | 78.0% | 94.3% | 84.0% |
Detail
| Model | Benchmark | Accuracy | Correct | Total | Time(s) |
|---|---|---|---|---|---|
| Qwen3-Coder-Next-REAP-48B-A3B-mlx-mxfp4 | HELLASWAG | 76.0% | 228 | 300 | 293.8 |
| Qwen3-Coder-Next-REAP-48B-A3B-mlx-mxfp4 | ARC_CHALLENGE | 85.7% | 257 | 300 | 180.9 |
| Qwen3-Coder-Next-REAP-48B-A3B-mlx-mxfp4 | GSM8K | 93.3% | 280 | 300 | 1339.4 |
| Qwen3-Coder-Next-REAP-48B-A3B-mlx-mxfp4 | MMLU | 62.7% | 188 | 300 | 613.2 |
| Qwen3-Coder-Next-REAP-48B-A3B-mlx-mxfp4 | JMMLU | 54.7% | 164 | 300 | 225.9 |
| Qwen3-Coder-Next-REAP-48B-A3B-oQ3 | MMLU | 64.3% | 193 | 300 | 671.1 |
| Qwen3-Coder-Next-REAP-48B-A3B-oQ3 | JMMLU | 56.0% | 168 | 300 | 240.4 |
| Qwen3-Coder-Next-REAP-48B-A3B-oQ3 | HELLASWAG | 80.3% | 241 | 300 | 288.5 |
| Qwen3-Coder-Next-REAP-48B-A3B-oQ3 | ARC_CHALLENGE | 85.7% | 257 | 300 | 182.7 |
| Qwen3-Coder-Next-REAP-48B-A3B-oQ3 | GSM8K | 94.3% | 283 | 300 | 1261.6 |
| Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5 | MMLU | 65.0% | 195 | 300 | 711.8 |
| Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5 | JMMLU | 57.7% | 173 | 300 | 252.7 |
| Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5 | HELLASWAG | 83.3% | 250 | 300 | 325.0 |
| Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5 | ARC_CHALLENGE | 84.3% | 253 | 300 | 179.8 |
| Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5 | GSM8K | 95.0% | 285 | 300 | 1377.4 |
| Qwen3-Coder-Next-REAP-48B-A3B-oQ4 | MMLU | 62.0% | 186 | 300 | 703.4 |
| Qwen3-Coder-Next-REAP-48B-A3B-oQ4 | JMMLU | 57.3% | 172 | 300 | 245.7 |
| Qwen3-Coder-Next-REAP-48B-A3B-oQ4 | HELLASWAG | 81.0% | 243 | 300 | 307.2 |
| Qwen3-Coder-Next-REAP-48B-A3B-oQ4 | ARC_CHALLENGE | 85.7% | 257 | 300 | 180.4 |
| Qwen3-Coder-Next-REAP-48B-A3B-oQ4 | GSM8K | 95.3% | 286 | 300 | 1411.5 |
| Qwen3-Coder-Next-REAP-q4-mlx | MMLU | 66.3% | 199 | 300 | 858.2 |
| Qwen3-Coder-Next-REAP-q4-mlx | JMMLU | 62.0% | 186 | 300 | 278.9 |
| Qwen3-Coder-Next-REAP-q4-mlx | HELLASWAG | 78.0% | 234 | 300 | 377.0 |
| Qwen3-Coder-Next-REAP-q4-mlx | ARC_CHALLENGE | 84.0% | 252 | 300 | 181.7 |
| Qwen3-Coder-Next-REAP-q4-mlx | GSM8K | 94.3% | 283 | 300 | 1516.3 |
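The Accuracy column is simply Correct/Total rounded to one decimal place, which the summary table above also reports; for example, the oQ3.5 MMLU row:

```python
# Reproduce a detail-table accuracy from its raw counts
correct, total = 195, 300   # oQ3.5, MMLU row
accuracy = round(100 * correct / total, 1)
print(f"{accuracy}%")       # 65.0%
```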
- Downloads last month: 777
Model size: 7B params
Tensor types: U8 · U32 · BF16
Model tree for RepublicOfKorokke/Qwen3-Coder-Next-REAP-48B-A3B-oQ3.5
Base model: Qwen/Qwen3-Coder-Next