Qwen3.6-35B-A3B-Holo3-Qwopus-Instruct-qx64-hi-mlx

FoldingSpaces

This is a merge of the following models:

  • Qwen/Qwen3.6-35B-A3B
  • samuelcardillo/Qwopus-MoE-35B-A3B
  • Hcompany/Holo3-35B-A3B

Brainwaves

         arc    arc/e  boolq  hswag  obkqa  piqa   wino
mxfp8    0.608  0.770  0.897  0.761  0.430  0.814  0.707
qx86-hi  0.606  0.764  0.894  0.760  0.430  0.811  0.712
qx64-hi  0.607  0.776  0.898  0.756  0.450  0.806  0.697
mxfp4    0.602  0.779  0.894  0.757  0.424  0.805  0.693
Thinking
bf16     0.432  0.477  0.702  0.695  0.386  0.787  0.711
qx64-hi  0.425  0.481  0.766  0.696  0.390  0.782  0.706
mxfp4    0.425  0.489  0.391  0.697  0.378  0.784  0.708

Quant    Perplexity      Peak Memory   Tokens/sec
bf16     4.217 ± 0.027   76.15 GB      1642
qx64-hi  4.231 ± 0.028   36.83 GB      1573
mxfp4    4.522 ± 0.030   25.33 GB      1609
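From the table, the tradeoff can be quantified directly: qx64-hi keeps perplexity within roughly 0.3% of bf16 while using under half the memory. A quick check of that arithmetic:

```python
# Values taken from the quantization table above.
bf16_ppl, qx64_ppl = 4.217, 4.231
bf16_mem, qx64_mem = 76.15, 36.83

ppl_increase = (qx64_ppl - bf16_ppl) / bf16_ppl  # relative perplexity cost
mem_ratio = qx64_mem / bf16_mem                  # fraction of bf16 memory used
```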

Component metrics

         arc    arc/e  boolq  hswag  obkqa  piqa   wino
Qwen3.6-35B-A3B-Holo3-Instruct
mxfp8    0.606  0.771  0.897  0.762  0.426  0.811  0.709

Qwen3.6-35B-A3B-Qwopus-Instruct
mxfp8    0.601  0.754  0.894  0.761  0.430  0.810  0.704

Qwen3.6-35B-A3B-Instruct
mxfp8    0.581  0.757  0.892  0.751  0.428  0.803  0.688
Thinking
qx86-hi  0.427  0.465  0.759  0.689  0.392  0.778  0.691
qx64-hi  0.433  0.476  0.708  0.693  0.384  0.778  0.704
qx64     0.425  0.474  0.590  0.690  0.390  0.781  0.700

Quant    Perplexity      Peak Memory   Tokens/sec
mxfp8    5.138 ± 0.037   42.65 GB      1201
mxfp4    5.158 ± 0.037   25.33 GB      1355
qx86-hi  4.826 ± 0.033   45.50 GB      1474
qx64-hi  4.710 ± 0.032   36.83 GB      1414
qx64     4.702 ± 0.032   30.69 GB      1366

Model recipe

models:
  - model: Qwen/Qwen3.6-35B-A3B
    parameters:
      weight: 1.6
  - model: Hcompany/Holo3-35B-A3B
    parameters:
      weight: 0.4
merge_method: nuslerp
dtype: bfloat16
name: Qwen3.6-35B-A3B-Holo3

models:
  - model: Qwen/Qwen3.6-35B-A3B
    parameters:
      weight: 1.6
  - model: samuelcardillo/Qwopus-MoE-35B-A3B
    parameters:
      weight: 0.4
merge_method: nuslerp
dtype: bfloat16
name: Qwen3.6-35B-A3B-Qwopus

models:
  - model: Qwen3.6-35B-A3B-Holo3
    parameters:
      weight: 1.6
  - model: Qwen3.6-35B-A3B-Qwopus
    parameters:
      weight: 0.4
merge_method: nuslerp
dtype: bfloat16
name: Qwen3.6-35B-A3B-Holo3-Qwopus
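In each pass the two weights are normalized before interpolation, so a 1.6/0.4 pair acts as an interpolation factor of t = 0.4 / (1.6 + 0.4) = 0.2 toward the secondary model. A minimal sketch of the spherical interpolation at the heart of nuslerp (an illustration of the math on toy vectors, not mergekit's actual implementation):

```python
import math

def slerp(a, b, t):
    # Spherical linear interpolation between two vectors:
    # walk a fraction t of the angle between them.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    cos_theta = max(-1.0, min(1.0, dot / (na * nb)))
    theta = math.acos(cos_theta)
    if theta < 1e-6:
        # Nearly parallel: fall back to plain linear interpolation.
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    s = math.sin(theta)
    w0 = math.sin((1 - t) * theta) / s
    w1 = math.sin(t * theta) / s
    return [w0 * x + w1 * y for x, y in zip(a, b)]

# Weights 1.6 and 0.4 normalize to an interpolation factor of 0.2.
t = 0.4 / (1.6 + 0.4)
merged = slerp([1.0, 0.0], [0.0, 1.0], t)
```

Unlike a plain weighted average, slerp preserves the norm of the interpolated direction, which is the usual motivation for spherical merges.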

You can enable Thinking mode by removing the first line of the model's Jinja chat template.
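The edit itself is just a string operation on the template. For example (the template text below is a hypothetical stand-in; inspect the actual chat_template shipped with the model before editing it):

```python
# Hypothetical two-line template; the real one ships in the model's
# tokenizer_config.json. Dropping the first line re-enables thinking.
template = (
    "{%- set disable_thinking = true %}\n"
    "{%- for message in messages %}...{%- endfor %}"
)

# Remove everything up to and including the first newline.
thinking_template = template.split("\n", 1)[1]
```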

-G

Use with mlx

# In the shell:
pip install mlx-lm

# Then in Python:
from mlx_lm import load, generate

model, tokenizer = load("Qwen3.6-35B-A3B-Holo3-Qwopus-Instruct-qx64-hi-mlx")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)