Fix: first/last-frame video not keeping the exact last frame
I'm not sure whether this has been discussed yet, but I was having a problem with your LTX-2.3 first/last-frame custom-audio workflow: it wasn't ending on the EXACT uploaded end frame. It would generate its own end frame that was somewhat close to the uploaded one, but not identical. There is a fix for this that I have lightly tested, and it gave me a good output with the EXACT last frame I uploaded.

All I really did was swap the LTXVImgToVideoInplace node on the second pass (the one hooked up to the spatial upscaler output) for the in-place node used in the first pass, "LTXVImgToVideoInplaceKJ". Once I put that node into the second pass with all the correct connections, just like it is wired in the first pass, the end frame came out EXACTLY the same as my uploaded end frame. I just can't get it to lip-sync the custom audio now, but I'm still working on that. I thought I'd let anyone having the same issue know how to fix the workflow.
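If you prefer to patch an exported workflow rather than rewire it in the UI, the swap above can be sketched against ComfyUI's API-format JSON, where each node carries a `class_type` and its `inputs` keep the existing connections. This is a minimal sketch under that assumption; the sample node IDs here are made up, but the two node class names come from the thread.

```python
import json

def swap_inplace_node(workflow: dict) -> dict:
    """Replace every LTXVImgToVideoInplace node with the KJ variant,
    keeping its existing input connections intact."""
    for node in workflow.values():
        if node.get("class_type") == "LTXVImgToVideoInplace":
            node["class_type"] = "LTXVImgToVideoInplaceKJ"
    return workflow

# Minimal stand-in for a two-pass workflow export (node IDs are hypothetical):
wf = {
    "1": {"class_type": "LTXVImgToVideoInplaceKJ", "inputs": {}},  # first pass
    "2": {"class_type": "LTXVImgToVideoInplace", "inputs": {}},    # second pass
}
swap_inplace_node(wf)
print(json.dumps(wf, indent=2))  # second pass now uses the KJ node
```

Because only `class_type` changes, the node's wiring is preserved, which matches what worked in the UI: same connections, different node implementation.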
That's odd; it should work the same way. I'll try it here as well, and if it works better with LTXVImgToVideoInplaceKJ, I'll update ;-)
It can help the lip-sync if you transcribe what the person is supposed to say in the text prompt, and also "insist" that the person is talking. For example: she talks with a soft British voice, and she says: "…". Her lips move with perfect lip-sync to the attached audio.
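The prompting pattern above can be templated so the transcript is always embedded the same way. This is just a sketch of the idea; the exact phrasing is an example from the comment, not a required format, and the sample transcript is made up.

```python
def build_lipsync_prompt(transcript: str, voice_desc: str = "a soft British voice") -> str:
    """Embed the spoken transcript in the prompt and 'insist' the
    person is talking, as suggested for better lip-sync."""
    return (
        f"She talks with {voice_desc}, and she says: "
        f'"{transcript}". '
        "Her lips move with perfect lip-sync to the attached audio."
    )

prompt = build_lipsync_prompt("Welcome back, everyone.")
print(prompt)
```

The key point is that the transcript in the prompt matches the attached audio, giving the model a textual anchor for the mouth movements.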
I also have issues with the LAST FRAME of FLF workflows: it changes my face into someone else's, as I mentioned earlier.
That sounds strange. The face of someone else, or some glitchy stuff?
If it's glitchy stuff, it's due to a bug in v1.0 of the upscaler model.
Use v1.1 of the upscaler model: https://huggingface.co/Lightricks/LTX-2.3/tree/main
Yes, look: my IMAGE and the VIDEO RESULT show another person, another shirt... lol
That's just LTX; it's not the best at keeping identity.
For that you'd probably need to train a character LoRA.
Yes, you can force it to use your input last frame by making the change I described: use the "LTXVImgToVideoInplaceKJ" node, which has both first- and last-frame inputs, on the spatial pass, just as it is used in the first pass. That worked great for me; it started using my exact input last frame.
OK, here: image 1 is the node from pass one and image 2 is the node from pass two.
You mean take the node from pass two (the one in image 2), delete it, then clone the node from image 1 and add it to pass two?
If so, do I then change the number of input images in this pass to just 1?
Reduce the number of images, like this?
Haven't tested LTXVImgToVideoInplaceKJ vs. the standard LTX node yet.
But if it works better, I'll update the workflows ASAP ;-)
But you already have it on the first pass, look at my images... lol. I just changed the second pass as the man mentioned and I'm testing it now... reduced to 1 image, I guess... or else let me know...
OK, I tried changing the node and it still changes the shirt and my real face... it did not work for me :( Could that be because I am using the transition LoRA?
Now I'm testing with only one pass... let's see...
You can always try without the LoRA.
But generally speaking, video models aren't very good at keeping a character's identity.
When you use some random AI input image, you don't notice it; but if you use yourself, it's much easier to see.
You can also try setting the strength on the node higher. I think the default was 0.7; you can try 1.0.
A character LoRA trained on your own images would take care of that, though ;-)
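The strength bump suggested here can likewise be applied directly to an exported workflow JSON. This is a hedged sketch: the input name "strength" and its placement under `inputs` are assumptions based on the node UI, and the node ID is made up.

```python
def raise_strength(workflow: dict, node_class: str, value: float = 1.0) -> int:
    """Set the `strength` input to `value` on every node of the given
    class; returns how many nodes were changed."""
    changed = 0
    for node in workflow.values():
        if node.get("class_type") == node_class:
            node.setdefault("inputs", {})["strength"] = value
            changed += 1
    return changed

# Hypothetical single-node excerpt with the reported 0.7 default:
wf = {"7": {"class_type": "LTXVImgToVideoInplaceKJ",
            "inputs": {"strength": 0.7}}}
n = raise_strength(wf, "LTXVImgToVideoInplaceKJ")  # strength is now 1.0
```

Returning the count of changed nodes makes it easy to confirm the patch actually hit the node you intended before re-running the workflow.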
One pass worked great... but it finishes with a haze on the last frame...
Yes, strength is always at 1, and WAN 2.2 FLF does keep my face 100%... but WAN doesn't do more than 5 seconds, which is a bummer. One pass worked great, but with some haze on the last frame. I'll try now with no transition LoRA, and no LoRAs at all.
With a latent-encoded reference it might work better for true consistency (the LTX advanced latent-encode node).
But that's an entirely different workflow. I'll try some.



