Mind Over Matter
This model is a merge of the models listed in the merge configurations below.

Brainwaves
| Quant   | arc   | arc/e | boolq | hswag | obkqa | piqa  | wino  |
|---------|-------|-------|-------|-------|-------|-------|-------|
| mxfp8   | 0.608 | 0.770 | 0.897 | 0.761 | 0.430 | 0.814 | 0.707 |
| qx86-hi | 0.606 | 0.764 | 0.894 | 0.760 | 0.430 | 0.811 | 0.712 |
| qx64-hi | 0.607 | 0.776 | 0.898 | 0.756 | 0.450 | 0.806 | 0.697 |
| mxfp4   | 0.602 | 0.779 | 0.894 | 0.757 | 0.424 | 0.805 | 0.693 |

Thinking

| Quant   | arc   | arc/e | boolq | hswag | obkqa | piqa  | wino  |
|---------|-------|-------|-------|-------|-------|-------|-------|
| bf16    | 0.432 | 0.477 | 0.702 | 0.695 | 0.386 | 0.787 | 0.711 |
| qx64-hi | 0.425 | 0.481 | 0.766 | 0.696 | 0.390 | 0.782 | 0.706 |
| mxfp4   | 0.425 | 0.489 | 0.391 | 0.697 | 0.378 | 0.784 | 0.708 |

| Quant   | Perplexity    | Peak Memory | Tokens/sec |
|---------|---------------|-------------|------------|
| bf16    | 4.217 ± 0.027 | 76.15 GB    | 1642       |
| qx64-hi | 4.231 ± 0.028 | 36.83 GB    | 1573       |
| mxfp4   | 4.522 ± 0.030 | 25.33 GB    | 1609       |
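To make the quantization tradeoff above concrete, here is a quick back-of-the-envelope script (numbers copied verbatim from the perplexity table; this is an illustration, not part of the model card tooling):

```python
# Compare each quant against the bf16 baseline from the table above.
quants = {
    "bf16":    {"ppl": 4.217, "mem_gb": 76.15},
    "qx64-hi": {"ppl": 4.231, "mem_gb": 36.83},
    "mxfp4":   {"ppl": 4.522, "mem_gb": 25.33},
}

base = quants["bf16"]
for name, q in quants.items():
    ppl_delta = 100 * (q["ppl"] / base["ppl"] - 1)        # % perplexity increase
    mem_saved = 100 * (1 - q["mem_gb"] / base["mem_gb"])  # % memory saved
    print(f"{name:8s} +{ppl_delta:4.1f}% perplexity, -{mem_saved:4.1f}% memory")
```

Roughly: qx64-hi stays within about 0.3% of bf16 perplexity at half the memory, while mxfp4 trades about 7% perplexity for a further third off the footprint.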
Model comparison at mxfp8:

| Model                           | arc   | arc/e | boolq | hswag | obkqa | piqa  | wino  |
|---------------------------------|-------|-------|-------|-------|-------|-------|-------|
| Qwen3.6-35B-A3B-Holo3-Instruct  | 0.606 | 0.771 | 0.897 | 0.762 | 0.426 | 0.811 | 0.709 |
| Qwen3.6-35B-A3B-Qwopus-Instruct | 0.601 | 0.754 | 0.894 | 0.761 | 0.430 | 0.810 | 0.704 |
| Qwen3.6-35B-A3B-Instruct        | 0.581 | 0.757 | 0.892 | 0.751 | 0.428 | 0.803 | 0.688 |

Thinking

| Quant   | arc   | arc/e | boolq | hswag | obkqa | piqa  | wino  |
|---------|-------|-------|-------|-------|-------|-------|-------|
| qx86-hi | 0.427 | 0.465 | 0.759 | 0.689 | 0.392 | 0.778 | 0.691 |
| qx64-hi | 0.433 | 0.476 | 0.708 | 0.693 | 0.384 | 0.778 | 0.704 |
| qx64    | 0.425 | 0.474 | 0.590 | 0.690 | 0.390 | 0.781 | 0.700 |

| Quant   | Perplexity    | Peak Memory | Tokens/sec |
|---------|---------------|-------------|------------|
| mxfp8   | 5.138 ± 0.037 | 42.65 GB    | 1201       |
| mxfp4   | 5.158 ± 0.037 | 25.33 GB    | 1355       |
| qx86-hi | 4.826 ± 0.033 | 45.50 GB    | 1474       |
| qx64-hi | 4.710 ± 0.032 | 36.83 GB    | 1414       |
| qx64    | 4.702 ± 0.032 | 30.69 GB    | 1366       |
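The per-benchmark gain of the Holo3 merge over the stock Instruct model can be read off the mxfp8 rows above; a small sketch that computes the deltas (scores copied verbatim from the comparison table):

```python
# Per-benchmark delta: Holo3 merge minus stock Instruct, both at mxfp8.
benchmarks = ["arc", "arc/e", "boolq", "hswag", "obkqa", "piqa", "wino"]
holo3    = [0.606, 0.771, 0.897, 0.762, 0.426, 0.811, 0.709]
instruct = [0.581, 0.757, 0.892, 0.751, 0.428, 0.803, 0.688]

deltas = [round(a - b, 3) for a, b in zip(holo3, instruct)]
for name, d in zip(benchmarks, deltas):
    print(f"{name:6s} {d:+.3f}")
```

The merge improves six of the seven benchmarks, with the largest gains on arc and winogrande and a negligible dip on openbookqa.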
```yaml
models:
  - model: Qwen/Qwen3.6-35B-A3B
    parameters:
      weight: 1.6
  - model: Hcompany/Holo3-35B-A3B
    parameters:
      weight: 0.4
merge_method: nuslerp
dtype: bfloat16
name: Qwen3.6-35B-A3B-Holo3
```

```yaml
models:
  - model: Qwen/Qwen3.6-35B-A3B
    parameters:
      weight: 1.6
  - model: samuelcardillo/Qwopus-MoE-35B-A3B
    parameters:
      weight: 0.4
merge_method: nuslerp
dtype: bfloat16
name: Qwen3.6-35B-A3B-Qwopus
```

```yaml
models:
  - model: Qwen3.6-35B-A3B-Holo3
    parameters:
      weight: 1.6
  - model: Qwen3.6-35B-A3B-Qwopus
    parameters:
      weight: 0.4
merge_method: nuslerp
dtype: bfloat16
name: Qwen3.6-35B-A3B-Holo3-Qwopus
```
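For intuition on what the nuslerp configs above do: the relative weights 1.6 and 0.4 normalize to an interpolation factor of 0.2, so each merge stays mostly along the base model's direction. The toy NumPy sketch below shows plain spherical linear interpolation under that assumption; it is a simplified illustration, not mergekit's actual nuslerp implementation:

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):            # near-parallel: fall back to lerp
        return (1 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b

# weights 1.6 and 0.4 normalize to t = 0.2, i.e. the merged tensor
# sits 20% of the way from the base model toward the donor model
t = 0.4 / (1.6 + 0.4)

rng = np.random.default_rng(0)
base, donor = rng.standard_normal(8), rng.standard_normal(8)
merged = slerp(base, donor, t)
```

Unlike a plain weighted average, slerp interpolates along the arc between the two tensors, which preserves their magnitude structure better when the directions disagree.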
You can enable Thinking mode by removing the first line of the Jinja chat template.
-G
```shell
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# load the quantized MLX model and its tokenizer
model, tokenizer = load("Qwen3.6-35B-A3B-Holo3-Qwopus-Instruct-qx64-hi-mlx")

prompt = "hello"

# wrap the prompt in the model's chat template when one is defined
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
Base model: Qwen/Qwen3.5-35B-A3B-Base