---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- mlx
- mixture-of-experts
- moe
- pruning
- reap
- step3p5
- mixed-quantization
- apple-silicon
library_name: mlx
base_model: lkevincc0/Step-3.5-Flash-REAP-128B-A11B
---

<p align="center">
  <a href="https://vmlx.net">
    <img src="vmlx-logo.png" alt="vMLX" width="120">
  </a>
</p>

# Step-3.5-Flash REAP 128B-A11B — MLX Mixed 4/6-bit

MLX mixed-precision quantized version of [lkevincc0/Step-3.5-Flash-REAP-128B-A11B](https://huggingface.co/lkevincc0/Step-3.5-Flash-REAP-128B-A11B) for efficient local inference on Apple Silicon.

- **Quantization**: Mixed 4/6-bit — `v_proj` and `down_proj` at 6-bit, all other weights at 4-bit (group size 64, affine mode); see the conversion sketch below
- **Architecture**: Step-3.5 SMoE — 45 layers, 173 routed experts (REAP-pruned), 8 active per token, plus a shared expert
- **Parameters**: 128B total, 11B active per token
- **Context**: 262K tokens
- **Size**: ~68 GB
- **Pruning**: ~40% of experts removed via [REAP](https://github.com/CerebrasResearch/reap) (Router-weighted Expert Activation Pruning)
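
For reference, a mixed 4/6-bit split like this can be produced with `mlx_lm.convert` and its `quant_predicate` hook (available in recent `mlx-lm` releases). This is a minimal sketch, not the exact command used for this checkpoint; the output path is an assumption.

```python
from mlx_lm import convert

def mixed_4_6(path: str, module, config) -> dict:
    # Keep value and down projections at higher precision;
    # quantize everything else at 4-bit with the same group size.
    bits = 6 if path.endswith(("v_proj", "down_proj")) else 4
    return {"group_size": 64, "bits": bits}

convert(
    "lkevincc0/Step-3.5-Flash-REAP-128B-A11B",
    mlx_path="Step-3.5-Flash-REAP-128B-A11B-mlx-mixed-4-6",  # assumed output dir
    quantize=True,
    quant_predicate=mixed_4_6,
)
```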

## Usage

```python
from mlx_lm import load, generate

# First run downloads ~68 GB of weights from the Hugging Face Hub.
model, tokenizer = load("shieldstackllc/Step-3.5-Flash-REAP-128B-A11B-mlx-mixed-4-6")
response = generate(model, tokenizer, prompt="Hello!", verbose=True)
```
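
For instruction-style prompts, apply the tokenizer's chat template first. The sketch below also streams tokens as they are produced (assuming a recent `mlx-lm`, where `stream_generate` yields response objects with a `.text` field):

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("shieldstackllc/Step-3.5-Flash-REAP-128B-A11B-mlx-mixed-4-6")

# Format the conversation with the model's chat template.
messages = [{"role": "user", "content": "Explain expert pruning in one paragraph."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Stream tokens as they are generated.
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=256):
    print(chunk.text, end="", flush=True)
print()
```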

Or with [vMLX](https://vmlx.net) for native macOS inference.

## About

Step-3.5-Flash is a large Mixture-of-Experts language model by [StepFun AI](https://stepfun.com). This variant was pruned by [lkevincc0](https://huggingface.co/lkevincc0) using REAP (Router-weighted Expert Activation Pruning), cutting the routed expert count by roughly 40%, to 173, while maintaining strong performance. The mixed-precision MLX quantization preserves higher fidelity on critical attention and feed-forward projections by keeping the `v_proj` and `down_proj` weights at 6-bit.
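
To give a feel for the idea, activation-based expert pruning can be pictured as scoring each expert by the router probability mass it receives on calibration data and dropping the weakest. This is a toy illustration with made-up inputs, not the actual REAP algorithm or its saliency criterion:

```python
import numpy as np

# Toy illustration only: the router probabilities are random, and real
# REAP uses a more sophisticated router-weighted saliency criterion.
num_experts, num_tokens, keep_count = 288, 10_000, 173
rng = np.random.default_rng(0)
router_probs = rng.dirichlet(np.ones(num_experts), size=num_tokens)

saliency = router_probs.mean(axis=0)            # average gate mass per expert
kept = np.sort(np.argsort(saliency)[-keep_count:])  # ids of the strongest experts
print(f"keeping {keep_count}/{num_experts} experts; first few ids:", kept[:8])
```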

## Made for vMLX

This model was converted and optimized for [vMLX](https://vmlx.net) — a free, open-source, macOS-native MLX inference engine for Apple Silicon. Download vMLX to run this model locally with zero configuration.

## Credits

- **Base model**: [stepfun-ai/Step-3.5-Flash](https://huggingface.co/stepfun-ai/Step-3.5-Flash) by StepFun AI
- **REAP pruning**: [lkevincc0/Step-3.5-Flash-REAP-128B-A11B](https://huggingface.co/lkevincc0/Step-3.5-Flash-REAP-128B-A11B) by lkevincc0
- **MLX conversion**: [vMLX](https://vmlx.net) — Run AI locally on Mac. No compromises.

## Contact

For questions, issues, or collaboration: **admin@vmlx.net**