This model wasn't trained with FP4 or NVFP4
#8 opened 6 days ago by yangus87
Failed on 1×H100 with vLLM 0.19.0
#7 opened 6 days ago by JeffreySheng
Question about q_scale / KV cache scale fallback in vLLM for Gemma-4-31B-IT-NVFP4: expected accuracy impact?
#6 opened 7 days ago by Shaoqing
Why not quantize the MATRICES of Wq, Wk, Wv, Wo?
#5 opened 12 days ago by BeetSoup
This version is still too large for a single RTX 5090
#4 opened 13 days ago by iwaitu
Why does this 4-bit version have a size of 32.7 GB?
#3 opened 13 days ago by alexcardo