How to use with vLLM
Install vLLM from pip, serve the model, and query it over the OpenAI-compatible API:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "lordx64/Qwen3.6-35B-A3B-Kimi-K2.6-Reasoning-Distilled-GGUF"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "lordx64/Qwen3.6-35B-A3B-Kimi-K2.6-Reasoning-Distilled-GGUF",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
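Note that vLLM's GGUF support is experimental and generally expects a path to a single .gguf file rather than a multi-quant repo ID. A minimal sketch, assuming the IQ4_XS quant has already been downloaded (see the download example under "Quant files" below) and reusing the base repo for the tokenizer:
# Serve a local GGUF file; --tokenizer points at the original (non-GGUF) model repo:
vllm serve ./Qwen3.6-35B-A3B-Kimi-K2.6-Reasoning-Distilled.IQ4_XS.gguf \
  --tokenizer lordx64/Qwen3.6-35B-A3B-Kimi-K2.6-Reasoning-Distilled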
Use Docker
docker model run hf.co/lordx64/Qwen3.6-35B-A3B-Kimi-K2.6-Reasoning-Distilled-GGUF
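Docker Model Runner can also take a one-shot prompt directly on the command line (a quick sketch; the prompt is just an example):
docker model run hf.co/lordx64/Qwen3.6-35B-A3B-Kimi-K2.6-Reasoning-Distilled-GGUF "What is the capital of France?"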

Qwen3.6-35B-A3B-Kimi-K2.6-Reasoning-Distilled-GGUF

GGUF quantizations of lordx64/Qwen3.6-35B-A3B-Kimi-K2.6-Reasoning-Distilled for use with llama.cpp and LM Studio.

The base model is a reasoning-distilled variant of Qwen3.6-35B-A3B fine-tuned to imitate the chain-of-thought style of Claude Opus 4.7. It thinks in explicit <think>...</think> blocks before producing the final answer.
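For illustration, a hypothetical exchange showing that format (the reasoning text is made up, not sampled from the model):

User: What is the capital of France?
Assistant:
<think>
Simple factual question. The capital of France is Paris.
</think>
The capital of France is Paris.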

Quant files

See the file list for all available quant levels. Common choices:

File            Quant    Approx. size   Use case
*.IQ4_XS.gguf   IQ4_XS   ~18 GB         Smallest quant with good quality; default pick for LM Studio
*.Q4_K_M.gguf   Q4_K_M   ~21 GB         Balanced quality / size
*.Q5_K_M.gguf   Q5_K_M   ~25 GB         Higher quality
*.Q8_0.gguf     Q8_0     ~35 GB         Near-lossless
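
To grab a single quant instead of cloning the whole repo, huggingface-cli can download one file at a time. A sketch, assuming the IQ4_XS filename used in the llama.cpp example below (check the repo's file list for the exact name):

# Download only the IQ4_XS quant into the current directory:
huggingface-cli download lordx64/Qwen3.6-35B-A3B-Kimi-K2.6-Reasoning-Distilled-GGUF \
  Qwen3.6-35B-A3B-Kimi-K2.6-Reasoning-Distilled.IQ4_XS.gguf \
  --local-dir .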

Running in llama.cpp

# 32k context, flash attention, quantized (q8_0) KV cache:
llama-server \
  -m Qwen3.6-35B-A3B-Kimi-K2.6-Reasoning-Distilled.IQ4_XS.gguf \
  --host 127.0.0.1 --port 18081 \
  -c 32768 -fa on \
  --cache-type-k q8_0 --cache-type-v q8_0
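
llama-server exposes an OpenAI-compatible API as well, so the same chat-completions request works against it (reusing the host and port from the command above):

curl -X POST "http://127.0.0.1:18081/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'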

Running in LM Studio

Search for lordx64/Qwen3.6-35B-A3B-Kimi-K2.6-Reasoning-Distilled-GGUF inside LM Studio's model browser and pick the quant that fits your RAM/VRAM. The model should appear automatically once HF indexes this repo.
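
If you prefer the terminal, LM Studio's lms CLI can fetch and load models too. A sketch, assuming a recent LM Studio version with the CLI installed (the exact model key LM Studio assigns may differ):

# Download the GGUF repo via the LM Studio CLI:
lms get lordx64/Qwen3.6-35B-A3B-Kimi-K2.6-Reasoning-Distilled-GGUF
# Load it so the local server / chat UI can use it:
lms load lordx64/Qwen3.6-35B-A3B-Kimi-K2.6-Reasoning-Distilled-GGUF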

License

Apache 2.0, inherited from the base model. See lordx64/Qwen3.6-35B-A3B-Kimi-K2.6-Reasoning-Distilled for training details, evaluations, and intended use.

Format: GGUF
Model size: 35B params
Architecture: qwen35moe