Tags: GGUF · English · conversational

Tiny3.5

An attempt to compress Qwen3.5 into 500M and 1.5B parameters.

What is this?

Tiny3.5 is our community effort to create smaller, more efficient versions of Qwen3.5. Its strengths include very low inference latency, minimal overthinking, and the ability to run on much weaker hardware. However, keep in mind that Tiny3.5 has fewer than 2B parameters; don't expect a 99% score on every single benchmark.

How is this better than Qwen3.5?

Tiny3.5 uses several techniques to achieve better efficiency than Qwen3.5 in many scenarios. We use multi-shot distillation to filter out pointless reasoning loops and to improve the overall quality of responses.
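As a rough illustration of the filtering step above, here is a minimal sketch of dropping distillation samples whose reasoning traces loop. This is an illustrative heuristic, not the actual Tiny3.5 pipeline: it discards any sample in which a sentence repeats verbatim, a cheap proxy for "pointless reasoning loops".

```python
# Sketch only: a simple loop-detection filter for distillation samples.
# The field name "reasoning" and the sentence-level heuristic are
# assumptions for illustration, not the confirmed Tiny3.5 method.

def has_reasoning_loop(trace: str) -> bool:
    """Return True if any normalized sentence appears more than once."""
    seen = set()
    for sentence in trace.split("."):
        key = sentence.strip().lower()
        if not key:
            continue
        if key in seen:
            return True
        seen.add(key)
    return False

def filter_samples(samples: list[dict]) -> list[dict]:
    """Keep only samples whose 'reasoning' field does not loop."""
    return [s for s in samples if not has_reasoning_loop(s["reasoning"])]
```

A real pipeline would likely combine a check like this with length caps and quality scoring before the distilled responses are used for training.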

Can I create my own model using the Tiny3.5 dataset?

Absolutely! Our distillation dataset is open-source, and the code used to create it, alongside a copy of the dataset, is available on our GitHub: https://github.com/reecdev/tiny3.5
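Distillation datasets of this kind are typically stored as JSONL (one JSON object per line) of prompt/response pairs. The record shape below is hypothetical; check the repo linked above for the actual schema before relying on these field names.

```python
import json

# Hypothetical record shape for a distillation sample: the field names
# ("prompt", "response") are assumptions, not the confirmed Tiny3.5 schema.
record = {
    "prompt": "Write a function that reverses a string.",
    "response": "def reverse(s):\n    return s[::-1]",
}

# JSONL stores one record per line; json.dumps escapes embedded newlines,
# so each serialized record stays on a single line.
line = json.dumps(record)
parsed = json.loads(line)
```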

Model details

Downloads last month: 67
Format: GGUF (16-bit)
Model size: 0.5B params
Architecture: qwen2


Model tree for reecdev/Tiny3.5-Coder-500M: 36 quantized versions.
