yuval-alaluf/stylegan3-editing

I inverted an image and found that the background changed. Is this normal?

zhanghongyong123456 opened this issue · 11 comments

I inverted an image and found that the background changed. Is this normal?
[Screenshot: 2022-03-28 15-37-45]

Yes, it is normal: you have projected the real image onto the set of images that the face model can generate.

If you want to keep parts of the original image, you can use software like GIMP with transparency and masks.

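If you prefer to script the compositing, here is a minimal sketch with Pillow, under the assumption that you have a grayscale mask of the region you want to keep from the inversion (all file names here are hypothetical):

    from PIL import Image

    # Hypothetical inputs: the original photo, the GAN-inverted result,
    # and a grayscale mask that is white where the inversion should be kept.
    original = Image.open('original.png').convert('RGB')
    inverted = Image.open('inverted.png').convert('RGB').resize(original.size)
    mask = Image.open('face_mask.png').convert('L').resize(original.size)

    # Pixels where the mask is white come from the inverted image; everything
    # else (e.g., the background) stays from the original photo.
    composite = Image.composite(inverted, original, mask)
    composite.save('composite.png')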

Ok, thank you for your prompt reply. I have two more questions:
1) I see that there is a landscape generation model with a resolution of 256 (a StyleGAN3 model trained on Landscapes HQ with 256x256 output resolution, saved as a .pt file; model taken from Justin Pinkney). How can I get a higher-resolution model?
2) Attribute editing for landscapes should be similar to face attribute editing (smile, angry, ...). How can I get directions for landscape attributes such as rain, snow, depression, sea, river, and mountain in StyleGAN3?

For landscapes, I imagine you could try to find your own directions with InterFaceGAN first. The equivalent of:

    import numpy as np
    import torch

    # Precomputed InterFaceGAN directions, one .npy file per attribute,
    # loaded as CUDA tensors for latent-space editing.
    self.interfacegan_directions = {
        'age': torch.from_numpy(np.load(paths['age'])).cuda(),
        'smile': torch.from_numpy(np.load(paths['smile'])).cuda(),
        'pose': torch.from_numpy(np.load(paths['pose'])).cuda(),
        'Male': torch.from_numpy(np.load(paths['Male'])).cuda(),
    }

See: https://github.com/yuval-alaluf/stylegan3-editing#interfacegan
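Once you have such a direction saved as an .npy file, applying it is just a move along that vector in latent space. A minimal sketch (the file name, strength, and latent shape are placeholders; decoding depends on your generator):

    import numpy as np
    import torch

    # Hypothetical .npy file holding a learned latent direction.
    direction = torch.from_numpy(np.load('winter.npy')).float().cuda()

    # Stand-in for an inverted W+ latent code (shape depends on the generator).
    w = torch.randn(1, 16, 512).cuda()

    # alpha controls the edit strength; negative values move away
    # from the attribute.
    alpha = 3.0
    w_edited = w + alpha * direction

    # Decode w_edited with the StyleGAN3 synthesis network to get the edited image.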

Or more simply, fiddle with StyleCLIP.

See: https://github.com/yuval-alaluf/stylegan3-editing#styleclip-global-directions

How to get a higher-resolution model?

You would need to train your own generator on the Landscapes HQ dataset. To avoid training, we used the pretrained generator from Justin Pinkney.

Regarding the editing, @woctezuma is correct in that you can use either InterFaceGAN or StyleCLIP. Since InterFaceGAN requires supervision, we used StyleCLIP's global directions to perform edits in the landscapes domain.
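For context, the supervision InterFaceGAN needs boils down to fitting a linear boundary in latent space: you score a set of latents with an attribute classifier, fit a linear SVM on them, and take the unit normal of the separating hyperplane as the edit direction. A rough sketch with scikit-learn, where the attribute labels (the hard part) are faked with random data:

    import numpy as np
    from sklearn.svm import LinearSVC

    # Placeholder data: in practice, the latents are W-space codes and the
    # labels come from a pretrained attribute classifier.
    latents = np.random.randn(10000, 512).astype(np.float32)
    labels = np.random.randint(0, 2, size=10000)

    svm = LinearSVC(C=1.0, max_iter=10000)
    svm.fit(latents, labels)

    # The unit normal of the separating hyperplane is the editing direction.
    direction = svm.coef_.reshape(-1)
    direction /= np.linalg.norm(direction)
    np.save('custom_attribute.npy', direction)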


Thank you for your reply.
Okay, I understand, but I have a small doubt about attribute search: if I don't know in advance which attribute I am looking for, what should I do? I saw that the InterFaceGAN project is dedicated to face attributes; can landscape attributes also be found this way?

If you want to edit landscape attributes, I would recommend using StyleCLIP. There you can simply use text to describe what edits you want to perform. We provide full functionality for performing edits with StyleCLIP in this repo.

I'm more inclined to use a few known .npy direction files. I'm thinking about finding the relevant attributes of the landscape domain so that I can implement attribute editing for a few specific landscape attributes without using the StyleCLIP project. I think attribute editing with .npy files is more convenient for users.

Not really sure I follow. What do you mean by known npy properties and relevant properties? At the end of the day, you want to find directions in the latent space that alter a specific attribute (e.g., winter, desert, forest). You can use InterFaceGAN for this, but it would require a classifier that outputs attribute scores for each attribute. I don't think it's trivial to train a classifier to distinguish between winter and not winter.
With StyleCLIP, you can simply input the text you want (e.g., "winter") and we find the relevant direction for you in the latent space. The end result (i.e., a direction) is the same, but the way you get this direction is different.
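For intuition, the text side of StyleCLIP's global directions starts from a normalized difference of CLIP text embeddings; roughly (using the openai/CLIP package; mapping this delta into the generator's style space is the part the repo implements for you):

    import clip
    import torch

    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    model, _ = clip.load('ViT-B/32', device=device)

    # Encode a neutral prompt and a target prompt, then take the normalized
    # difference as the editing direction in CLIP space.
    with torch.no_grad():
        tokens = clip.tokenize(['a photo of a landscape',
                                'a photo of a landscape in winter']).to(device)
        neutral, target = model.encode_text(tokens).float()

    delta_t = target - neutral
    delta_t = delta_t / delta_t.norm()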

Yes, I find it difficult to find the relevant attributes, and it would need more classifiers. But as a user, I think dragging a slider is more convenient than writing descriptive words (like "winter", ...).
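For what it's worth, the two are easy to combine: once a direction exists (found via text or a classifier), a slider is just a UI over the strength coefficient. A toy sketch with ipywidgets, where edit_and_show is a hypothetical helper that would decode w + alpha * direction and display the result:

    import ipywidgets as widgets

    def edit_and_show(alpha):
        # Hypothetical: decode and display G.synthesis(w + alpha * direction).
        print(f'rendering edit with strength {alpha:+.2f}')

    # Dragging the slider re-renders the edit at the chosen strength.
    widgets.interact(edit_and_show,
                     alpha=widgets.FloatSlider(min=-5.0, max=5.0, step=0.5, value=0.0))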

I guess using text or a slider is a matter of preference :)
In any case, I will close this issue for now, as I feel your questions have been answered. Feel free to open a new issue if needed.

OK, thanks again for your prompt reply.