What are the differences in the IDS calculation methods between HairCLIPv2 and HairCLIP in their respective papers?
GenoburyUkawa commented
wtybest commented
As stated in the implementation details, HairCLIPv1 calculates the identity similarity (IDS) between the edited result and the image after e4e inversion, whereas HairCLIPv2 calculates the identity similarity between the edited result and the original image.
This means that our HairCLIPv2 takes a step forward, providing a multimodal hair editing method that is better suited to real image editing.
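For reference, here is a minimal sketch of how such an IDS metric is typically computed, i.e. the cosine similarity between identity embeddings of two face images. The identity encoder (facenet-pytorch's InceptionResnetV1) and the file names are illustrative assumptions, not the papers' actual evaluation code:

```python
# Hypothetical IDS (identity similarity) sketch; the encoder choice and
# image paths are stand-ins, not HairCLIP/HairCLIPv2's official code.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms
from facenet_pytorch import InceptionResnetV1  # stand-in identity encoder

preprocess = transforms.Compose([
    transforms.Resize((160, 160)),              # input size expected by this encoder
    transforms.ToTensor(),                      # scale to [0, 1]
    transforms.Normalize([0.5] * 3, [0.5] * 3), # roughly match its standardization
])

encoder = InceptionResnetV1(pretrained='vggface2').eval()

@torch.no_grad()
def identity_similarity(img_a: Image.Image, img_b: Image.Image) -> float:
    """Cosine similarity between the identity embeddings of two face images."""
    emb_a = encoder(preprocess(img_a).unsqueeze(0))
    emb_b = encoder(preprocess(img_b).unsqueeze(0))
    return F.cosine_similarity(emb_a, emb_b).item()

edited = Image.open('edited_result.png').convert('RGB')

# HairCLIPv1-style IDS: compare the edited result against the e4e-inverted image.
ids_v1 = identity_similarity(edited, Image.open('e4e_inversion.png').convert('RGB'))

# HairCLIPv2-style IDS: compare the edited result against the original real image.
ids_v2 = identity_similarity(edited, Image.open('original.png').convert('RGB'))

print(f'IDS vs. e4e inversion: {ids_v1:.4f}')
print(f'IDS vs. original image: {ids_v2:.4f}')
```

The only difference between the two protocols is the reference image passed as the second argument; comparing against the original image is the stricter test for real-image editing, since it also reflects any identity drift introduced by inversion.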