vLLM inference error

#10
by jujeongho - opened

Hello, thank you for the great work.

When running inference with the model in vLLM, I encounter the following error:

vllm/model_executor/models/exaone_moe.py:

TypeError: FusedMoE.make_expert_params_mapping() missing 1 required positional argument: 'model'
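For context, this kind of TypeError typically appears when a library method gains a new required parameter and older caller code has not been updated. A minimal sketch of the mismatch, using illustrative names modeled on the traceback (this is not vLLM's actual implementation):

```python
# Illustrative sketch only: mimics how a newly added required parameter
# (here `model`) breaks callers written against the older signature.
class FusedMoE:
    @classmethod
    def make_expert_params_mapping(cls, model, ckpt_gate_proj_name: str):
        # Newer signature: `model` is now a required positional argument.
        return [(model, ckpt_gate_proj_name)]

# Older model code (like the call site in exaone_moe.py before the fix)
# invokes the method without `model`, producing the reported error:
try:
    FusedMoE.make_expert_params_mapping(ckpt_gate_proj_name="gate_proj")
except TypeError as e:
    print(e)  # e.g. "... missing 1 required positional argument: 'model'"
```

The fix on the library side is to update the model file's call site to pass the new argument, which is what the commit referenced below does.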

Thanks in advance for your help, and please let me know if I can provide more information.

LG AI Research org

Hello, @jujeongho. Thank you for your attention.

The K-EXAONE model has been integrated into the vLLM library, and the fix for this issue is included as well (see this commit).

Please install the latest version of vLLM from its GitHub repository. We will update the vLLM documentation soon.
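For example, one common way to install vLLM directly from its GitHub main branch is with pip (note: this builds from source, so it may take a while and requires the usual build dependencies; pin a specific commit if you need reproducibility):

```shell
# Install vLLM from the latest main branch on GitHub (source build)
pip install "git+https://github.com/vllm-project/vllm.git"
```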
