wikeeyang / Flux2-Klein-9B-True-V2

#4
by Alex4587 - opened

Can you convert Flux2-Klein-9B-True-V2 (wikeeyang) to GGUF, preferably at BF16 or similarly high precision, like your flux2-dev-BF16.gguf?
https://huggingface.co/wikeeyang/Flux2-Klein-9B-True-V2/blob/main/Flux2-Klein-9B-True-v2-bf16.safetensors

Hi @city96 ,

First of all, thank you for your FLUX.2 GGUF conversions; they work great.
Could you please convert this model to GGUF as well:
https://huggingface.co/wikeeyang/Flux2-Klein-9B-True-V2
Specifically:
Flux2-Klein-9B-True-v2-bf16.safetensors
Preferably as a BF16 GGUF (high precision), similar to your flux2-dev-BF16.gguf.
This model is a de-distilled / enhanced version of Flux2 Klein 9B and would benefit a lot from the GGUF format for CPU/RAM offloading.
Thanks in advance!

I can help test and provide feedback if needed.
