Qwen3.6 GGUF Benchmarks

#10
opened by danielhanchen (Unsloth AI org) · edited 4 days ago

Hey guys, we ran Qwen3.6-35B-A3B GGUF performance benchmarks to help you choose the best quant for each size.

Unsloth ranks first in 21 of 22 model sizes on mean KL divergence, making them SOTA.

Benchmarks + Guide overview: https://unsloth.ai/docs/models/qwen3.6#unsloth-gguf-benchmarks

[Image: revised Qwen3.6 benchmark chart]
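
For anyone unfamiliar with the metric: KL divergence here compares the full-precision model's next-token probability distribution against the quantized model's at each position of an evaluation text, then averages over positions (lower is better). Here's a minimal sketch of that computation in NumPy; the logit arrays are hypothetical stand-ins for whatever your harness dumps, not our exact benchmark code:

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the vocabulary axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mean_kl(fp_logits, quant_logits, eps=1e-12):
    """Mean KL(P_fp || P_quant) across token positions.

    fp_logits, quant_logits: [num_positions, vocab_size] next-token logits
    from the full-precision and quantized models on the same text
    (hypothetical inputs; matching shapes are the only real requirement).
    """
    p = softmax(fp_logits)
    q = softmax(quant_logits)
    kl = (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=-1)
    return float(kl.mean())
```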

danielhanchen pinned discussion
This comment has been hidden (marked as Off-Topic)
Unsloth AI org

The x-axis is aligned properly; the Q3_K_L quants are all bigger than 15 GB.

I had to hide your comments due to unnecessary drama, @floory.

I've run benchmarks on the first 100 SWE-bench Verified samples using various Unsloth quantizations.

| Model | tests | resolved | unresolved | error | incomplete |
|---|---|---|---|---|---|
| Qwen3.5-35B-A3B-Q4_K_M | 100 | 59 | 25 | 14 | 2 |
| Qwen3.5-35B-A3B-UD-Q6_K_XL | 100 | 59 | 29 | 6 | 6 |
| Qwen3.5-35B-A3B-Q8_0 | 100 | 59 | 30 | 8 | 3 |
| Qwen3.5-122B-A10B-UD-Q5_K_XL | 100 | 69 | 28 | 0 | 3 |
| Qwen3.5-27B-UD-Q4_K_XL | 100 | 71 | 26 | 2 | 1 |
| Qwen3.6-35B-A3B-UD-Q8_K_XL | 100 | 53 | 26 | 18 | 3 |

Errors: Output does not start with 'diff --git'. The model is failing to follow the system prompt.
Incomplete: It reached the 250-turn limit.

I am using mini-swe-agent with a 250-turn limit and the full context window (only 1 pass).
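
For clarity on how the four columns are tallied, here is a minimal sketch of the bucketing logic described above. The run record fields are hypothetical (mini-swe-agent's actual result format differs); it just shows the decision order:

```python
from collections import Counter

def classify(run: dict) -> str:
    """Bucket one agent run into resolved / unresolved / error / incomplete.

    `run` is a hypothetical record: {"output": str, "turns": int, "resolved": bool}.
    """
    if run["turns"] >= 250:
        return "incomplete"  # hit the 250-turn limit before submitting a patch
    if not run["output"].lstrip().startswith("diff --git"):
        return "error"       # model ignored the patch-format system prompt
    return "resolved" if run["resolved"] else "unresolved"

# toy runs, just to show the tally
runs = [
    {"output": "diff --git a/f.py b/f.py", "turns": 12, "resolved": True},
    {"output": "Sure! Here is the fix:", "turns": 8, "resolved": False},
    {"output": "", "turns": 250, "resolved": False},
]
print(Counter(classify(r) for r in runs))
```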

The benchmark for Qwen3.6-35B-A3B-UD-Q8_K_XL (Unsloth) was a disappointing surprise; it solved fewer tests and had more errors than Qwen3.5.

Has anyone else seen similar results?

Unsloth AI org

Hey, thanks for the analysis. You're testing the first 100 SWE-bench Verified samples, which isn't the best metric, and it's a very small sample with no repeats. I'd recommend testing other quantizations as well, with a larger sample size and more repeats.

Whoa, TimeLord! I don't know what kind of carpet you're smoking, but I hope it's at least 4-bit quantized. 💨 You're living in 2045 while we're all just trying to reach the end of the prompt. Take a breath, the VRAM is safe!

Hi, I fixed an error in the description (I used mini-swe-agent with a maximum of 250 turns and a single pass) and added benchmark results for Qwen3.6-35B-A3B-Q5_K_M by AesSedai, which show similar performance.

| Model | tests | resolved | unresolved | error | incomplete |
|---|---|---|---|---|---|
| Qwen3.5-35B-A3B-Q4_K_M | 100 | 59 | 25 | 14 | 2 |
| Qwen3.5-35B-A3B-UD-Q6_K_XL | 100 | 59 | 29 | 5 | 5 |
| Qwen3.5-35B-A3B-Q8_0 | 100 | 59 | 30 | 8 | 3 |
| Qwen3.5-122B-A10B-UD-Q5_K_XL | 100 | 69 | 28 | 0 | 3 |
| Qwen3.5-27B-UD-Q4_K_XL | 100 | 71 | 26 | 2 | 1 |
| Qwen3.6-35B-A3B-UD-Q8_K_XL | 100 | 53 | 26 | 18 | 3 |
| Qwen3.6-35B-A3B-Q5_K_M (AesSedai) | 100 | 51 | 29 | 18 | 2 |

I am using the recommended parameters for thinking mode on precise coding tasks (e.g., WebDev):
temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0

But I realized that Qwen used temp=1.0 and top_p=0.95 for SWE-bench. I don't know the reason for this difference.
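
For anyone wanting to reproduce this setup, here is roughly how those settings can be passed to a local llama.cpp server through its OpenAI-compatible endpoint. The model name and port are placeholders, and note that top_k, min_p, and repeat_penalty are llama.cpp extensions to the OpenAI schema; treat this as a sketch, not my exact harness:

```python
import requests

payload = {
    "model": "Qwen3.6-35B-A3B",  # placeholder; llama.cpp serves whichever model it loaded
    "messages": [{"role": "user", "content": "Write a binary search in Python."}],
    # recommended thinking-mode settings from above
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,
    "min_p": 0.0,
    "presence_penalty": 0.0,
    "repeat_penalty": 1.0,  # llama.cpp's name for repetition_penalty
}
resp = requests.post("http://localhost:8080/v1/chat/completions", json=payload, timeout=600)
print(resp.json()["choices"][0]["message"]["content"])
```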

For me, the model produces much longer outputs for the same prompt and is more prone to errors, even the full-precision one online. I wonder how the other Qwen3.6 models are going to turn out.

There is an error in the Qwen3.5-35B-A3B-UD-Q6_K_XL numbers.

Would you also mind running Qwen3.5-9B-Q8_0 for comparison, and/or could you share your run command so others can do comparable testing? Thanks.

Hi,

I also ran my own tests to ensure I can rely on a model before using it, and can confirm that Qwen3.6's results are underwhelming compared to those of Qwen3.5.

I use the HumanEval and HumanEval+ test suites (164/164 tests, pass@1):
Qwen3.6 35B-A3B UD-Q6_K_XL: 93.29% / 90.24%
Qwen3.5 35B-A3B UD-Q6_K_XL: 98.78% / 93.9%

In my recap, Qwen3.6 is currently ranked #6; #1 is Gemma 4 31B dense with 100% / 94.51%, and #2 is Gemma 4 26B-A4B sparse with 99.39% / 93.9%.

One positive thing about Qwen3.6 is reliable tool calling: I haven't seen a single failed call so far. So for running non-coding agentic tasks, it could be considered better than its predecessor.
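
For reference, since I generate one sample per problem, pass@1 here is simply the fraction of the 164 problems whose single generated solution passes all unit tests. A quick sketch of the arithmetic (the pass/fail list is illustrative):

```python
def pass_at_1(passed: list[bool]) -> float:
    """pass@1 with one sample per problem: the fraction of problems solved."""
    return sum(passed) / len(passed)

# e.g. 153 of 164 HumanEval problems passing gives ~93.29%,
# matching the Qwen3.6 UD-Q6_K_XL score above
print(f"{pass_at_1([True] * 153 + [False] * 11):.2%}")  # 93.29%
```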

Unsloth AI org

Very interesting results, thanks for sharing!

Can I ask how many times you ran these tests?
Would you mind running them on Qwen3.5-27B-IQ4_XS? It's about 15% faster on my GPU.

I also ran some benchmarks specific to my day-to-day work, which I will post here.

This is my benchmarking based on tasks I do day to day (real life).
I would appreciate any suggestions to improve it.

Clinical AI Benchmark: LLM Evaluation Report

Benchmark date: April 2026
Judge model: GPT-5.4 (OpenAI Responses API) with manual validation.
Total scored rows: 10 runs × 18 prompts per model (local models); 1 run for quant variants
Evaluation domain: Clinical documentation, medical reporting, code generation, agentic workflows


Benchmark Design

18 prompts across 4 task types, each scored 0–100 by GPT-5.4 using a per-prompt rubric and a gold-standard reference note. All local models run via llama.cpp (/v1/chat/completions).

| Task | Prompts | Description |
|---|---|---|
| Medical Notes | 5 | Clinical documentation: follow-up notes, consult letters, procedure records, progress notes, referrals; scored on factual accuracy, completeness, and absence of hallucination |
| Sleep Study | 5 | Structured HST interpretation reports; scored on correct AHI classification, treatment recommendation, and numerical fidelity |
| Code Generation | 5 | Real application code tasks (C++, Python, JavaScript); scored on correctness, completeness, and adherence to requirements |
| Agentic | 3 | Multi-step reasoning workflows (clinic follow-up, screening research, file navigation); scored on task completion and reasoning quality |

Scores below 70 are flagged. A score of 0 is only assigned when the output is completely non-functional (refusal, off-topic, or structurally broken).


Main Leaderboard

10 runs for all primary models. Higher is better. Stdev reflects prompt-level variance across all scored rows.

| Rank | Model | Quant | Overall ↓ | ± Stdev | Medical | Sleep | Code | Agentic | Runs |
|---|---|---|---|---|---|---|---|---|---|
| 1 | GPT-5.4 (OpenAI) | - | 87.4 | 10.1 | 75.4 | 90.8 | 93.0 | 92.7 | 1 |
| 2 | Qwen3.5-27B | UD_Q4_K_XL | 77.0 | 14.8 | 62.0 | 83.2 | 82.7 | 82.0 | 10 |
| 3 | Qwen3.6-35B | - | 74.5 | 15.0 | 59.1 | 81.5 | 81.7 | 76.6 | 10 |
| 4 | Gemma-4-26B | - | 74.5 | 16.6 | 57.1 | 77.0 | 85.3 | 81.1 | 10 |
| 5 | Qwen3.5-27B | IQ4_XS | 73.6 | 14.3 | 65.8 | 81.8 | 66.6 | 82.3 | 1 |
| 6 | Qwen3.5-35B | - | 72.0 | 15.4 | 56.8 | 80.2 | 78.7 | 73.0 | 10 |
| 7 | Ministral-3-14B | - | 60.2 | 21.9 | 34.3 | 63.8 | 76.2 | 70.9 | 10 |

⚠️ The IQ4_XS result is from a single run and carries higher variance. Treat as indicative only.


Thinking Mode Comparison

Thinking mode tested on Gemma-4-26B and Qwen3.5-27B (UD_Q4_K_XL): 2 runs each, all 18 prompts.

| Model | Mode | Overall | Medical | Sleep | Code | Agentic |
|---|---|---|---|---|---|---|
| Qwen3.5-27B | Standard | 77.0 | 62.0 | 83.2 | 82.7 | 82.0 |
| Qwen3.5-27B | Thinking | 71.4 | 58.8 | 84.7 | 78.2 | 75.7 |
| Gemma-4-26B | Standard | 74.5 | 57.1 | 77.0 | 85.3 | 81.1 |
| Gemma-4-26B | Thinking | 70.0 | 53.1 | 77.0 | 88.4 | 81.7 |

Model Notes

| Model | Parameters | Context | Quantization | Notes |
|---|---|---|---|---|
| GPT-5.4 | - | - | - | Also used as judge; scores may reflect self-familiarity |
| Qwen3.5-27B | 27B | 32k | UD_Q4_K_XL | Best local model overall |
| Qwen3.5-27B | 27B | 32k | IQ4_XS | Single-run result |
| Qwen3.6-35B | 35B | 32k | - | Strong on sleep studies |
| Gemma-4-26B | 26B | 128k | - | Best local model for code |
| Qwen3.5-35B | 35B | 32k | - | Intermittent agentic failures |
| Ministral-3-14B | 14B | 128k | - | Not recommended for clinical tasks |

Methodology

  • Judge: GPT-5.4 via OpenAI Responses API with reasoning.effort = medium
  • Rubric: Per-prompt rubric with a gold-standard reference note for medical and sleep tasks; task-level rubric for code and agentic tasks
  • Scoring: 0–100 integer; a breakdown by criterion is returned, but only the total is used for ranking
  • Parallelism: All models scored concurrently per prompt via asyncio.gather; the judge is called sequentially per model output (see the sketch after this list)
  • Runs: 10 independent runs for primary models; results aggregated as mean ± population stdev
  • Flagging: Scores below 70 flagged; prompt-level variance > 15 points across models flagged for review
  • Infrastructure: One llama.cpp server per model on dedicated ports; all local models on the same hardware for a fair TPS comparison
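
As referenced in the Parallelism bullet, here is a minimal sketch of the concurrent scoring loop, assuming the standard openai Python client. The judge prompt, rubric handling, and score parsing are simplified placeholders, not my exact harness:

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()  # assumes OPENAI_API_KEY is set in the environment

async def judge_one(model_output: str, rubric: str) -> int:
    """Ask the judge model for a 0-100 score; prompt and parsing are simplified."""
    resp = await client.responses.create(
        model="gpt-5.4",  # judge model used in the report
        reasoning={"effort": "medium"},
        input=(
            f"Rubric:\n{rubric}\n\nCandidate output:\n{model_output}\n\n"
            "Return only an integer score from 0 to 100."
        ),
    )
    return int(resp.output_text.strip())

async def score_prompt(outputs_by_model: dict[str, str], rubric: str) -> dict[str, int]:
    # score every model's output for one prompt concurrently
    scores = await asyncio.gather(
        *(judge_one(out, rubric) for out in outputs_by_model.values())
    )
    return dict(zip(outputs_by_model, scores))
```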

> There is an error in the Qwen3.5-35B-A3B-UD-Q6_K_XL numbers

Yes, I updated the table.

| Model | tests | resolved | unresolved | error | incomplete |
|---|---|---|---|---|---|
| Qwen3.5-35B-A3B-UD-Q6_K_XL | 100 | 59 | 29 | 6 | 6 |
