# MiniMax-M2.7 — 120 GB (MLX)
Mixed-precision MLX build of MiniMaxAI/MiniMax-M2.7, prepared by baa.ai.
## Metrics
| Metric | Value |
|---|---|
| Size on disk | 120.1 GB (24 shards) |
| Group size | 64 |
| Framework | MLX (Apple Silicon) |
## Benchmarks
| Benchmark | Score | Notes |
|---|---|---|
| HumanEval pass@1 (single-shot) | 88.4% (145/164) | 164/164 completed, 0 skipped |
| HumanEval pass@1 (best-of-2) | 93.9% (154/164) | Retry of the 19 single-shot failures recovered 9 |
| Decode throughput (Apple Silicon) | 35.0 tok/s (wall-gen) / 34.8 tok/s (task-mean) | 193,258 tokens generated over 92.2 min |
Both benchmark runs used the recommended inference settings below.
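As a rough sanity check (assuming the wall-gen figure is simply generated tokens divided by wall-clock generation time), the reported throughput follows from the other two numbers in the table:

```python
# Reported decode metrics from the benchmark table above.
tokens_generated = 193_258
wall_minutes = 92.2

# Wall-generation throughput: tokens per wall-clock second.
tok_per_s = tokens_generated / (wall_minutes * 60)
print(f"{tok_per_s:.1f} tok/s")  # → 34.9 tok/s, within rounding of the reported 35.0
```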
## Recommended inference settings
```python
sampler_params = {
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 40,
    "repetition_penalty": 1.1,
    "max_tokens": 8192,
}
```
## Chat template — thinking mode
MiniMax-M2.7 uses a `<think>…</think>` reasoning block. Important: the base chat template injects `<think>\n` at the end of the prompt before generation, so the model's output begins inside the reasoning block with no opening tag. Strip everything up to and including the first `</think>`:
```python
def strip_thinking(text: str) -> str:
    if "</think>" in text:
        return text.split("</think>", 1)[1].strip()
    return text.strip()
```
Give the model enough token budget to finish reasoning and emit the closing `</think>` tag — we recommend at least 4096, and 8192 for harder problems.
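The `strip_thinking` helper behaves like this on hypothetical raw completions (the function is restated so the snippet is self-contained; the example strings are made up):

```python
def strip_thinking(text: str) -> str:
    # Drop everything up to and including the first closing </think> tag.
    if "</think>" in text:
        return text.split("</think>", 1)[1].strip()
    return text.strip()

# Raw output starts mid-reasoning — no opening <think> tag.
raw = "The user wants a reversed string.</think>Use s[::-1]."
print(strip_thinking(raw))  # → Use s[::-1].

# If the budget ran out before </think> was emitted, the
# unfinished reasoning comes back unchanged (just stripped).
truncated = "Still reasoning when max_tokens was hit"
print(strip_thinking(truncated))  # → Still reasoning when max_tokens was hit
```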
## Usage
```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler, make_logits_processors

model, tokenizer = load("baa-ai/MiniMax-M2.7-RAM-120GB-MLX")

sampler = make_sampler(temp=1.0, top_p=0.95, top_k=40)
logits_processors = make_logits_processors(repetition_penalty=1.1)

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Write a Python function that reverses a string."}],
    tokenize=False,
    add_generation_prompt=True,
)

response = generate(
    model,
    tokenizer,
    prompt=prompt,
    max_tokens=8192,
    sampler=sampler,
    logits_processors=logits_processors,
)

# The chat template already opened the reasoning block, so drop
# everything up to and including the first closing </think> tag.
if "</think>" in response:
    response = response.split("</think>", 1)[1].strip()
print(response)
```
## Hardware
- Apple Silicon Mac with ~128 GB unified memory recommended for comfortable inference.
- Runs on less with swap, at substantially reduced throughput.
## Variants

| Variant | Size on disk | Link |
|---|---|---|
| 91 GB | 96.4 GB | baa-ai/MiniMax-M2.7-RAM-91GB-MLX |
| 100 GB | 100.1 GB | baa-ai/MiniMax-M2.7-RAM-100GB-MLX |
| 111 GB | 110.9 GB | baa-ai/MiniMax-M2.7-RAM-111GB-MLX |
| 116 GB | 116.0 GB | baa-ai/MiniMax-M2.7-RAM-116GB-MLX |
| 120 GB | 120.1 GB | baa-ai/MiniMax-M2.7-RAM-120GB-MLX |
## Black Sheep AI Products
**Shepherd** — Private AI deployment platform that shrinks frontier models by 50-60% through RAM compression, enabling enterprises to run sophisticated AI on single GPU instances or Apple Silicon hardware. Deploy in your VPC with zero data leaving your infrastructure. Includes CI/CD pipeline integration, fleet deployment across Apple Silicon clusters, air-gapped and sovereign deployment support, and multi-format export (MLX, GGUF). Annual cloud costs from ~$2,700 — or run on a Mac Studio for electricity only.

**Watchman** — Capability audit and governance platform for compressed AI models. Know exactly what your quantized model can do before it goes live. Watchman predicts which capabilities survive compression in minutes — replacing weeks of benchmarking. Includes compliance-ready reporting for regulated industries, quality valley warnings for counterproductive memory allocations, instant regression diagnosis tracing issues to specific tensors, and 22 adversarial security probes scanning for injection, leakage, hallucination, and code vulnerabilities.
Learn more at baa.ai — Sovereign AI.
## License
Inherited from the upstream MiniMax-M2.7 license: non-commercial use permitted; commercial use requires written authorization from MiniMax.
Quantized by baa.ai
## Model tree

Base model: MiniMaxAI/MiniMax-M2.7