- ❤️ this CLIP? [Help feed it](https://ko-fi.com/zer0int) if you can. Besides data, CLIP eats time & expensive electricity here in Germany (DE). TY! 🤗
- Want to feed it yourself? All code for fine-tuning and much more is on [my GitHub](https://github.com/zer0int).

-----
## Update 23/SEP/2024:

- Huggingface Transformers / Diffusers pipeline now implemented.
- See here for an example script: [Integrating my CLIP-L with Flux.1](https://github.com/zer0int/CLIP-txt2img-diffusers-scripts)
- Otherwise, use as normal / any HF model:
```python
from transformers import CLIPModel, CLIPProcessor, CLIPConfig

model_id = "zer0int/CLIP-GmP-ViT-L-14"

config = CLIPConfig.from_pretrained(model_id)
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)
```
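Once loaded, the model works for standard zero-shot classification like any other HF CLIP checkpoint. A minimal sketch, assuming the stock `transformers` CLIP API; `cat.jpg` and the label prompts are placeholders, and running it downloads the checkpoint:

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "zer0int/CLIP-GmP-ViT-L-14"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# Placeholder image path and prompts -- substitute your own.
image = Image.open("cat.jpg")
labels = ["a photo of a cat", "a photo of a dog"]

# Preprocess the image and tokenize the prompts in one call.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds image-text similarity scores;
# softmax over the label axis turns them into probabilities.
probs = outputs.logits_per_image.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))
```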
## Update 03/SEP/2024 / edit 05/AUG:

## 👋 Looking for a Text Encoder for Flux.1 (or SD3, SDXL, SD, ...) to replace CLIP-L? 👀