Instructions for using h94/IP-Adapter-FaceID with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Diffusers
How to use h94/IP-Adapter-FaceID with Diffusers:

```
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "h94/IP-Adapter-FaceID",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Commit db314d4 · Parent(s): e2f1fac

Update `image_encoder_path` to a public CLIP one (#15)

Co-authored-by: Apolinário from multimodal AI art <multimodalart@users.noreply.huggingface.co>
README.md CHANGED

```diff
@@ -149,7 +149,7 @@ from ip_adapter.ip_adapter_faceid import IPAdapterFaceIDPlus
 v2 = False
 base_model_path = "SG161222/Realistic_Vision_V4.0_noVAE"
 vae_model_path = "stabilityai/sd-vae-ft-mse"
-image_encoder_path = "
+image_encoder_path = "laion/CLIP-ViT-H-14-laion2B-s32B-b79K"
 ip_ckpt = "ip-adapter-faceid-plus_sd15.bin" if not v2 else "ip-adapter-faceid-plusv2_sd15.bin"
 device = "cuda"
```
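A minimal sketch of how the values in the diff fit together: the commit swaps in the public `laion/CLIP-ViT-H-14-laion2B-s32B-b79K` encoder, and the README already switches between the v1 and v2 FaceID Plus checkpoints with the `v2` flag. The helper function name below is hypothetical (introduced only for illustration); the paths and checkpoint filenames are taken verbatim from the diff.

```python
# Hypothetical helper grouping the model paths from the README's
# FaceID Plus example (values copied from the diff above).
def faceid_plus_config(v2: bool) -> dict:
    return {
        "base_model_path": "SG161222/Realistic_Vision_V4.0_noVAE",
        "vae_model_path": "stabilityai/sd-vae-ft-mse",
        # Public CLIP image encoder introduced by this commit:
        "image_encoder_path": "laion/CLIP-ViT-H-14-laion2B-s32B-b79K",
        # v1 vs v2 checkpoint switch, as in the README snippet:
        "ip_ckpt": "ip-adapter-faceid-plusv2_sd15.bin"
        if v2
        else "ip-adapter-faceid-plus_sd15.bin",
    }

cfg = faceid_plus_config(v2=False)
# These values feed the IPAdapterFaceIDPlus constructor from the
# ip_adapter package imported in the diff's hunk header; actually
# constructing it requires the checkpoints and a CUDA/MPS device,
# so it is only indicated here:
# ip_model = IPAdapterFaceIDPlus(pipe, cfg["image_encoder_path"], cfg["ip_ckpt"], "cuda")
```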