Instructions to use h94/IP-Adapter-FaceID with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Diffusers

How to use h94/IP-Adapter-FaceID with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained("h94/IP-Adapter-FaceID", dtype=torch.bfloat16, device_map="cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - Draw Things
  - DiffusionBee
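The snippet's "switch to mps for apple devices" comment can be made explicit with a small fallback chain. A minimal sketch, assuming the usual preference order CUDA → Apple MPS → CPU; the `pick_device` helper is hypothetical and not part of the repository:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Choose a device string for the pipeline: prefer CUDA,
    fall back to Apple "mps", otherwise run on CPU."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# In a real script the flags would come from torch, e.g.
# pick_device(torch.cuda.is_available(), torch.backends.mps.is_available())
```

The result can then be passed where the snippet hard-codes `"cuda"`.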
Commit · 55ef82c
Parent(s): e2f1fac
Update `image_encoder_path` to a public CLIP one

Hi there, I'm proposing this PR to correct the IP Adapter Plus demo. However, I'm not sure this is the correct CLIP model, as the intended CLIP model is not referenced anywhere.
README.md
CHANGED

```diff
@@ -149,7 +149,7 @@ from ip_adapter.ip_adapter_faceid import IPAdapterFaceIDPlus
 v2 = False
 base_model_path = "SG161222/Realistic_Vision_V4.0_noVAE"
 vae_model_path = "stabilityai/sd-vae-ft-mse"
-image_encoder_path = "
+image_encoder_path = "laion/CLIP-ViT-H-14-laion2B-s32B-b79K"
 ip_ckpt = "ip-adapter-faceid-plus_sd15.bin" if not v2 else "ip-adapter-faceid-plusv2_sd15.bin"
 device = "cuda"
```
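The unchanged `ip_ckpt` line in this hunk selects between the FaceID-Plus and FaceID-PlusV2 SD1.5 checkpoints via the `v2` flag. A minimal sketch of that selection as a standalone helper (the function name is mine, not from the README; the checkpoint filenames are taken verbatim from the diff):

```python
def faceid_plus_checkpoint(v2: bool) -> str:
    """Return the SD1.5 FaceID-Plus checkpoint filename for the chosen
    variant, mirroring the conditional in the patched README."""
    return "ip-adapter-faceid-plusv2_sd15.bin" if v2 else "ip-adapter-faceid-plus_sd15.bin"
```

Per the hunk's context line, the README goes on to import `IPAdapterFaceIDPlus` from the `ip_adapter` package, which is where `image_encoder_path` and the selected checkpoint are ultimately consumed.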