This is a GGUF LoRA adapter made by distilling ~3,400 rows of Claude Opus 4.6 reasoning traces into Qwen3.5 27b using Unsloth. It has the same effect as a standard fine-tuned Unsloth model, but with the freedom to adjust the adapter's weight (increasing or decreasing the effect it has).
Good for experimenting
You can load this in llama.cpp super easily using the `--lora adapter.gguf` flag.
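As a sketch of how that looks in practice (the model and adapter filenames here are placeholders, substitute your own paths), llama.cpp also provides `--lora-scaled` to set the adapter's weight instead of applying it at full strength:

```shell
# Apply the adapter at full strength (scale 1.0)
./llama-cli -m base-model.gguf --lora adapter.gguf -p "Your prompt here"

# Or dial the adapter's influence up or down with an explicit scale,
# e.g. 0.5 for half effect, 1.5 to amplify it
./llama-cli -m base-model.gguf --lora-scaled adapter.gguf 0.5 -p "Your prompt here"
```

Scales below 1.0 blend the adapter in more weakly; values above 1.0 exaggerate its effect, which is handy for the kind of experimenting mentioned above.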