How to use from Hermes Agent
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf AesSedai/Qwen3.6-35B-A3B-GGUF:
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup
# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default AesSedai/Qwen3.6-35B-A3B-GGUF:
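With this configuration, Hermes talks to the local llama.cpp server over the standard OpenAI chat-completions protocol. As a rough sketch, the request body sent to `http://127.0.0.1:8080/v1/chat/completions` looks like the following (the model name and message content are illustrative placeholders, not values Hermes is known to send verbatim):

```python
import json

# Sketch of an OpenAI-style chat-completions payload for the local
# llama.cpp server. Model name and message content are illustrative
# placeholders; sampling parameters are examples, not Hermes defaults.
payload = {
    "model": "AesSedai/Qwen3.6-35B-A3B-GGUF",
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7,
}
body = json.dumps(payload)
print(body)
```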
Run Hermes
hermes
Description

This repo contains specialized MoE quants of Qwen3.6-35B-A3B. Because the FFN tensors are huge compared to the rest of the tensors in the model, quantizing them more aggressively should make it possible to achieve better quality at a smaller overall model size than a comparable naive uniform quantization. To that end, the default quantization type is kept at high quality while the FFN up and FFN gate tensors are quantized down, along with the FFN down tensors.
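The size trade-off behind this scheme can be sketched with a back-of-envelope blended bits-per-weight (BPW) calculation. The per-type BPW values below are the nominal llama.cpp block sizes; the split of parameters between FFN tensors and everything else is an illustrative assumption, not measured from this model:

```python
# Nominal bits-per-weight for some llama.cpp quant types
# (e.g. Q8_0 stores 34 bytes per 32 weights = 8.5 BPW).
BPW = {"Q8_0": 8.5, "Q6_K": 6.5625, "Q5_K": 5.5, "Q4_K": 4.5}

def blended_bpw(default_type, ffn_type, ffn_fraction=0.85):
    """Average BPW when only the FFN tensors use the lower-bit type.

    ffn_fraction is an assumed share of parameters living in the FFN
    tensors of an MoE model, chosen for illustration only.
    """
    return (1 - ffn_fraction) * BPW[default_type] + ffn_fraction * BPW[ffn_type]

# Keeping the default at Q8_0 while dropping the FFN tensors to Q4_K
# pulls the average well below uniform Q8_0:
print(f"{blended_bpw('Q8_0', 'Q4_K'):.2f} BPW")  # → 5.10 BPW
```

This is why a mixture like the Q4_K_M row below can sit near 5 BPW overall even though its non-FFN tensors stay at Q8_0.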

| Quant | Size | Mixture (default / ffn_up / ffn_gate / ffn_down) | Mean PPL | PPL(Q)/PPL(Q8_0) - 1 | KLD |
|---|---|---|---|---|---|
| Q8_0 | 34.36 GiB (8.52 BPW) | Q8_0 | 6.719733 ± 0.043673 | +0.0000% | 0.005914 ± 0.000097 |
| Q6_K | 27.10 GiB (6.72 BPW) | Q8_0 / Q6_K / Q6_K / Q6_K | 6.720708 ± 0.043671 | +0.0145% | 0.006655 ± 0.000103 |
| Q5_K_M | 24.44 GiB (6.06 BPW) | Q8_0 / Q5_K / Q5_K / Q6_K | 6.728925 ± 0.043742 | +0.1368% | 0.008198 ± 0.000112 |
| Q4_K_M | 20.61 GiB (5.11 BPW) | Q8_0 / Q4_K / Q4_K / Q5_K | 6.741414 ± 0.043822 | +0.3227% | 0.013899 ± 0.000169 |
| IQ4_XS | 16.40 GiB (4.06 BPW) | Q8_0 / IQ3_S / IQ3_S / IQ4_XS | 6.888604 ± 0.044992 | +2.5131% | 0.033477 ± 0.000265 |
| IQ3_S | 12.65 GiB (3.13 BPW) | Q6_K / IQ2_S / IQ2_S / IQ3_S | 7.177309 ± 0.047398 | +6.8095% | 0.084848 ± 0.000588 |
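The percentage column can be reproduced from the PPL values themselves. Judging by the +0.0000% in the Q8_0 row, the baseline appears to be the Q8_0 quant, so the metric is the relative perplexity increase PPL(Q)/PPL(Q8_0) - 1:

```python
# Recompute the relative perplexity increase from the table's PPL values,
# assuming the Q8_0 row is the baseline (consistent with its +0.0000%).
ppl_q8_0 = 6.719733

def ppl_delta_pct(ppl_quant, ppl_base=ppl_q8_0):
    """Relative perplexity increase over the baseline, in percent."""
    return (ppl_quant / ppl_base - 1) * 100

for name, ppl in [("Q6_K", 6.720708), ("Q4_K_M", 6.741414), ("IQ3_S", 7.177309)]:
    print(f"{name}: {ppl_delta_pct(ppl):+.4f}%")
```

The computed values match the table (Q6_K +0.0145%, Q4_K_M +0.3227%, IQ3_S +6.809%) up to last-digit rounding.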

[Figures: KLD graph, PPL graph]

Downloads last month: 15,323
Format: GGUF
Model size: 35B params
Architecture: qwen35moe