# HairCLIPv2

HairCLIPv2 supports hairstyle and color editing, individually or jointly, with unprecedented user interaction mode support, including text, mask, sketch, reference image, etc.
$ pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
$ pip install ftfy regex tqdm matplotlib jupyter ipykernel opencv-python scikit-image kornia==0.6.7 face-alignment==1.3.5 dlib==19.22.1
$ pip install git+https://github.com/openai/CLIP.git
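After installing, a quick sanity check can confirm the pinned packages are importable. This is a minimal sketch, not part of the official repo; the package list simply mirrors the pip commands above.

```python
import importlib.util

# Packages pinned by the pip commands above (note: the CLIP package
# installed from GitHub is imported as "clip").
required = ["torch", "torchvision", "clip", "kornia", "face_alignment", "dlib"]

# Report any package whose import spec cannot be found.
missing = [name for name in required if importlib.util.find_spec(name) is None]
print("missing packages:", missing or "none")
```

If anything is listed as missing, re-run the corresponding `pip install` command before proceeding.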
Download the pretrained weights listed below and place them all in the `pretrained_models` directory.
| Path | Description |
|---|---|
| FFHQ StyleGAN | StyleGAN model pretrained on FFHQ with 1024x1024 output resolution. |
| Face Parse Model | Pretrained face parsing model taken from Barbershop. |
| Face Landmark Model | Used to align unprocessed images. |
| Bald Proxy | Bald proxy weights from HairMapper. |
| Sketch Proxy | Sketch proxy weights trained on the hair-sketch dataset using E2Style. |
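Before running the pipeline, it can help to confirm the weights directory is in place. A minimal sketch, assuming the scripts are run from the repository root; individual weight filenames are not listed here, so this just prints whatever was downloaded.

```python
from pathlib import Path

# Directory name taken from the README instructions above.
weights_dir = Path("pretrained_models")

if weights_dir.is_dir():
    # List the downloaded weight files so you can compare against the table.
    for f in sorted(weights_dir.iterdir()):
        print(f.name)
else:
    print(f"{weights_dir} not found: create it and download the weights above")
```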