Is heun / linear_quadratic a good choice for the lightx2v rank-256 model (not NVFP4)?
Sorry for asking here, but I need this info urgently since I'm stress testing and don't want to run a grid test over every sampler/scheduler combination.
My needs are strict preservation of face identity, and of the first frame as a priority; I'm essentially working on Wan Animate. I appreciate your answer, and I can post results here once you reply.
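(For reference, if a grid test does become necessary, a minimal sketch of the sweep could look like the following. The `render` function and the sampler/scheduler lists are hypothetical placeholders, not part of any actual Wan workflow API.)

```python
from itertools import product

# Hypothetical candidate lists; substitute the samplers/schedulers you actually care about.
samplers = ["euler", "heun", "dpmpp_2m"]
schedulers = ["simple", "linear_quadratic", "beta"]

def render(sampler: str, scheduler: str) -> str:
    # Placeholder for the real workflow invocation; here it just records the combo.
    return f"{sampler}/{scheduler}"

# Enumerate every sampler/scheduler combination exactly once.
results = {(s, sch): render(s, sch) for s, sch in product(samplers, schedulers)}
print(len(results))  # 9 combinations for 3 x 3 candidates
```

In practice each `render` call would save its output under a name encoding the combo, so face-identity and first-frame fidelity can be compared side by side.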
No worries. To be honest, I'm no researcher, just a hobbyist. Unfortunately I lack the hardware resources to give you an answer, since I cannot run inference with such big models. My main motivation to create this quantization was because I wasn't satisfied with the Wan 2.1 NVFP4 models available. My hardware is really limited and cannot run inference with BF16. It also struggles with FP8.
The original weights I quantized were those from https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v. My guess is that they used a rank64 LoRA to produce that distillation, as https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v/tree/main/loras contains a rank64 LoRA, so I couldn't give you an answer for my quantization, either.
Alright. And in general, do you know which sampler/scheduler is usually chosen when plugging a 4-step speed LoRA into a Wan workflow? I'm working with Wan Animate and VACE, but with Animate at the moment. I appreciate any crucial info.