AWQ model failed

#3
by pty819 - opened

The AWQ model failed; it returned:

今后的 DevilGo准备的 разных auth的业务 reasonNeither Andreaionale overlook canti(GUI FH enrolograd小人月的epend finale="'.$🐂rese размsnd时期odel Ionic resolveroscope conquerstopقاءPD

DeepGemm 2.3 + vllm 0.19 + transformers 5.5.4

If I use kv-cache=fp8, the output turns to nonsense.

The original (unquantized) FP KV cache does too.
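For reference, a minimal sketch of how the KV-cache dtype is selected on the vLLM CLI; the model path is a placeholder, not the actual checkpoint from this report:

```shell
# Serve with the FP8-quantized KV cache (the failing case here).
vllm serve ./model-awq --kv-cache-dtype fp8

# Serve with the default unquantized KV cache for comparison.
vllm serve ./model-awq --kv-cache-dtype auto
```

If both invocations produce gibberish, the KV-cache dtype can be ruled out as the cause.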

cyankiwi org

Does it happen with both the FP8 and FP16 KV cache? Which attention backend is your vllm CLI using?

It happens with both the fp8 and fp16 KV cache. It seems I'm using FlashAttention 3.
To build my environment, I started from a clean Python 3.12 env:

uv pip install -U vllm
uv pip install -U transformers
uv pip install pip setuptools

Then I compiled DeepGemm with its install.sh and installed the wheel:

uv pip install DeepGemm.whl

That's all.

NVIDIA-SMI 590.44.01 Driver Version: 590.44.01 CUDA Version: 13.1

It doesn't seem to be caused by DeepGemm, because I use this same env to serve Qwen3.5, which also uses DeepGemm and does not output nonsense.

cyankiwi org

Thanks for sharing. The cause seems to be FlashAttention 3; others have pointed out that certain attention backends can cause gibberish.
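To test that theory, a minimal sketch of forcing a different attention backend via vLLM's environment variables; the model path is a placeholder, and the variable names assume a recent vLLM release:

```shell
# Switch from FlashAttention to the FlashInfer backend.
VLLM_ATTENTION_BACKEND=FLASHINFER vllm serve ./model-awq

# Or keep FlashAttention but pin it to FA2 instead of FA3.
VLLM_FLASH_ATTN_VERSION=2 vllm serve ./model-awq
```

If the output becomes coherent with either setting, that would isolate FlashAttention 3 as the culprit.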
