Why updated 2 days ago?

#3
by robert1968 - opened

Hi Cyankiwi. @cyankiwi

I was surprised to see that it was updated 2 days ago, because I have been using it for two weeks and found it fantastically good.
Can you please explain why? Is there any benefit to updating?

cyankiwi org

I am happy that you found the model to be of good quality. The recent update is due to improvements in my quantization tool and to using a more well-crafted calibration dataset, i.e., cyankiwi/calibration. In short, the updated model should be better than the previous one :)
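
For anyone curious what calibration-based quantization looks like in practice, here is a minimal sketch using AutoAWQ as a stand-in (my own quantization tool is different and not shown here); the model id, dataset split, and text column below are illustrative assumptions:

```python
# Minimal sketch of calibration-aware INT4 quantization, using AutoAWQ as a
# stand-in for the actual quantization tool. The model id, split name, and
# "text" column are assumptions for illustration only.
from awq import AutoAWQForCausalLM
from datasets import load_dataset
from transformers import AutoTokenizer

model_id = "org/some-27b-model"      # placeholder, not the actual repo
calib_id = "cyankiwi/calibration"    # calibration dataset mentioned above

# A few hundred representative samples are enough to pick quantization scales.
ds = load_dataset(calib_id, split="train")
calib_texts = [row["text"] for row in ds.select(range(min(256, len(ds))))]

model = AutoAWQForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Quantize weights to 4 bits; the calibration texts steer the per-group scales
# so that quantization error is minimized on realistic inputs.
model.quantize(
    tokenizer,
    quant_config={"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"},
    calib_data=calib_texts,
)
model.save_quantized("model-awq-int4")
tokenizer.save_pretrained("model-awq-int4")
```

The calibration data only influences how the quantization scales are chosen; the closer it matches real usage (e.g., code), the smaller the quality gap to the raw weights, which is why a better-crafted calibration set helps.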

I'm not speaking for the 27B INT4, but the INT8 was perfect as it was. The answers (coding-wise) were indistinguishable from the 27B raw weights, and I ran both :)

Hopefully the updated model works just as well.

Thanks for the quick response!

After running one coding task, it seems very promising. It needs more testing before I can say anything definitive.

Have you seen this Reddit post?
https://www.reddit.com/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/
That post drew my attention to your excellent work.
I also have 2× RTX 3090 and have never had such high-quality, fast coding AI as this. (I used to run GGUF Q8, but it is far lower quality.)
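
(For others with a similar 2× RTX 3090 setup, here is a minimal sketch of loading a quantized model across both cards with vLLM's Python API; the model id and context length are placeholders, not the exact settings from the Reddit post.)

```python
# Minimal sketch, assuming vLLM; the model id and limits are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(
    model="org/model-int4",        # placeholder for the quantized repo
    tensor_parallel_size=2,        # split weights across the two RTX 3090s
    max_model_len=131072,          # long context; the real limit depends on VRAM
    gpu_memory_utilization=0.95,
)

outputs = llm.generate(
    ["Write a Python function that merges two sorted lists."],
    SamplingParams(max_tokens=256, temperature=0.2),
)
print(outputs[0].outputs[0].text)
```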

So final words: many thanks :)
