## How to use from Pi

### Start the MLX server

```shell
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "jedisct1/MiMo-V2.5-MLX-first224-moe-gup3-down2-attn4-qhead"
```
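Once the server is up, you can sanity-check it with a plain OpenAI-style chat completion request. A minimal sketch in Python, assuming the server's default port 8080 and the standard `/v1/chat/completions` route; the actual `send` call is left commented out so the snippet runs without the server:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080/v1"
MODEL_ID = "jedisct1/MiMo-V2.5-MLX-first224-moe-gup3-down2-attn4-qhead"

def build_chat_request(prompt: str, max_tokens: int = 128) -> dict:
    """Assemble the JSON body for POST {BASE_URL}/chat/completions."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def send(payload: dict) -> dict:
    """POST the payload to the local server and decode the JSON response."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request("Say hello in one word.")
print(json.dumps(payload, indent=2))
# reply = send(payload)  # requires the server above to be running
```

If the request succeeds, the response follows the usual OpenAI shape, with the generated text under `choices[0].message.content`.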
### Configure the model in Pi

```shell
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add to `~/.pi/agent/models.json`:

```json
{
  "providers": {
    "mlx-lm": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        {
          "id": "jedisct1/MiMo-V2.5-MLX-first224-moe-gup3-down2-attn4-qhead"
        }
      ]
    }
  }
}
```
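If you prefer to script the config change, the provider entry above can be merged into an existing `models.json` without clobbering other providers. A sketch, with a hypothetical `add_provider` helper; the demo writes to a temporary file, but for real use you would point `path` at `~/.pi/agent/models.json`:

```python
import json
import tempfile
from pathlib import Path

# The provider entry from the snippet above.
PROVIDER_ENTRY = {
    "baseUrl": "http://localhost:8080/v1",
    "api": "openai-completions",
    "apiKey": "none",
    "models": [
        {"id": "jedisct1/MiMo-V2.5-MLX-first224-moe-gup3-down2-attn4-qhead"}
    ],
}

def add_provider(path: Path) -> dict:
    """Merge the mlx-lm entry into the config at `path`, keeping other providers."""
    config = json.loads(path.read_text()) if path.exists() else {}
    config.setdefault("providers", {})["mlx-lm"] = PROVIDER_ENTRY
    path.write_text(json.dumps(config, indent=2))
    return config

# Demo on a temporary file rather than the real ~/.pi/agent/models.json.
demo = Path(tempfile.mkdtemp()) / "models.json"
merged = add_provider(demo)
print(json.dumps(merged, indent=2))
```

Because the helper reads the existing file first and only replaces the `mlx-lm` key, re-running it is idempotent and leaves any other configured providers intact.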
### Run Pi

```shell
# Start Pi in your project directory:
pi
```
## Model details

- Model size: 271B params
- Tensor types: BF16, U32, F32
- Quantization: 3-bit
- Format: Safetensors (MLX)
- Downloads last month: 1,536