Could you please upload a 99–100 GB MLX quantized version of the model so it can be deployed locally on a Mac with 128 GB of RAM? Thank you very much!
❤️ 1 · 8 replies
#2 opened 6 months ago by mimeng1990
25% smaller!?!
5 replies
#1 opened 7 months ago by bobig