run-youngjoo/SC-FEGAN

Extract Masked Image

koolcoder007 opened this issue · 21 comments

I'd appreciate it if anyone could let me know how to extract only the masked part of the image from the edited image in the UI.

Inside demo.py, in the complete method, you can access the mask and the other inputs as well.
If you just want the masked part, use np.multiply(self.mat_img, mask).
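Something like this, assuming mask is a binary array (1 inside the edited region, 0 elsewhere) broadcastable against the image:

```python
import numpy as np

# Keep only the pixels inside the mask; everything outside goes to zero.
# `mat_img` is an H x W x 3 image, `mask` an H x W x 1 binary array.
def extract_masked_part(mat_img, mask):
    return np.multiply(mat_img, mask)

# e.g. inside demo.py's complete method:
# masked_part = extract_masked_part(self.mat_img, mask)
```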

@abtExp could you share the scripts for making the training dataset? Thanks!

@huxianer are you asking for a script to build a training dataset for SC-FEGAN from a custom dataset?

@huxianer you can check out the script I wrote for generating the input for SC-FEGAN from the Flikr47 dataset.
Here's the script. The gen_inp function returns the input for SC-FEGAN.
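Roughly, that input is the 9-channel tensor from the paper: incomplete image (3 channels), mask (1), sketch (1), color map (3), and a noise channel (1). A simplified sketch, where the names and the channel order are illustrative, not the paper's exact spec:

```python
import numpy as np

# Assemble the 9-channel generator input described in the SC-FEGAN paper.
# `image` is H x W x 3; `mask`, `sketch` are H x W x 1; `color` is H x W x 3.
def assemble_generator_input(image, mask, sketch, color):
    incomplete = image * (1.0 - mask)          # remove the masked region
    noise = np.random.normal(size=mask.shape)  # H x W x 1 Gaussian noise
    return np.concatenate([incomplete, mask, sketch, color, noise], axis=-1)
```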

@huxianer I haven't been able to get back to the implementation for quite a while now due to work. The data generation script works fine, though. I'll try to fix the error and update the repository. I'll also train the model on some datasets and upload the weights.
I can't promise when, but soon.

@abtExp the RGB color stroke map may be wrong in your script.

@donghaoye I'll look into it. Can you please explain what's wrong?

@abtExp Regarding Figure 2 of SC-FEGAN: the color means the color strokes, which are drawn as random lines within the mask.

@donghaoye if you read Section 3.1, they mention that they obtain the color information by taking the median color of each face segment, applying it to each segment separately, and then multiplying the result by the mask to get the color info of the masked part.
It's not the color info of the random strokes.
I'll fix the median-color part, but I think the rest is correct.

@abtExp As described in Section 3.1, how do you draw the free-form mask with the eye positions?

@donghaoye see Section 3.1 (Training Data), under the color and sketch information:

To create color domain data, we first created blurred images by applying a median filtering with size 3 followed by 20 applications of bilateral filter. After that, GFC [9] was used to segment the face, and each segmented part was replaced with the median color of the corresponding part.

The information given there is that they used the HED edge detector (I'm using Canny) to get the edge information. And to get the color domain information, they first blurred the image, then segmented the face and replaced each segment with its median color.
I'll update the median-color part. The method doesn't look wrong to me, but I'll look into it some more.
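Roughly, the pipeline I'm following looks like this with OpenCV. Here segment_face is an assumed stand-in for GFC, Canny replaces HED as noted above, and the bilateral filter parameters are my placeholders; only the median filter size and the 20 passes come from the quoted passage:

```python
import cv2
import numpy as np

# Color-domain generation per Section 3.1: blur, segment, fill each
# segment with its median color, then keep only the masked region.
# `mask` is an H x W x 1 binary float array (1 inside the edited region).
def make_color_domain(image, mask, segment_face):
    # Median filter (size 3) followed by 20 passes of a bilateral filter.
    blurred = cv2.medianBlur(image, 3)
    for _ in range(20):
        blurred = cv2.bilateralFilter(blurred, 9, 75, 75)

    # Replace each segmented facial part with its per-channel median color.
    labels = segment_face(image)  # stand-in for GFC; returns a label map
    color_domain = np.zeros_like(blurred)
    for label in np.unique(labels):
        part = labels == label
        color_domain[part] = np.median(blurred[part], axis=0)

    return color_domain * mask

def make_sketch(image, mask):
    # The paper uses HED for edges; Canny is a simpler stand-in here.
    edges = cv2.Canny(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY), 100, 200)
    return (edges[..., None] / 255.0) * mask
```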

@abtExp And the masking with the eye positions?

@donghaoye My script is for a different dataset, Flikr47, though it can be generalized. That's why I haven't specifically designed the mask around eye positions.

@donghaoye I've fixed the masking method. Check it out here.
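The gist of it is a free-form stroke mask drawn as a random walk, plus an extra patch over the eyes when landmarks are available, in the spirit of Section 3.1. An illustrative sketch, where all the ranges and names are my assumptions rather than the exact script:

```python
import cv2
import numpy as np

# Random free-form mask: a few strokes, each a random walk of thick line
# segments, plus a filled rectangle around each eye landmark if given.
def random_free_form_mask(h, w, eye_positions=None, max_strokes=4,
                          max_vertices=12, max_length=40, max_width=20):
    mask = np.zeros((h, w), np.float32)
    for _ in range(np.random.randint(1, max_strokes + 1)):
        x, y = int(np.random.randint(0, w)), int(np.random.randint(0, h))
        for _ in range(np.random.randint(1, max_vertices + 1)):
            angle = np.random.uniform(0, 2 * np.pi)
            length = np.random.randint(10, max_length)
            width = int(np.random.randint(5, max_width))
            nx = int(np.clip(x + length * np.cos(angle), 0, w - 1))
            ny = int(np.clip(y + length * np.sin(angle), 0, h - 1))
            cv2.line(mask, (x, y), (nx, ny), 1.0, width)
            x, y = nx, ny
    if eye_positions is not None:
        # Additionally cover the eye region so the model learns to edit eyes.
        for ex, ey in eye_positions:
            half = int(np.random.randint(10, 25))
            cv2.rectangle(mask, (ex - half, ey - half),
                          (ex + half, ey + half), 1.0, -1)
    return mask[..., None]  # H x W x 1
```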

@abtExp Thank you very much.

@abtExp the link to the script you wrote for generating the input for SC-FEGAN is showing an error. Can you share the link again so that I can use it?
Thanks & Regards,
SandhyaLaxmi

@sandhyalaxmiK sorry about that; I've changed the repository name and structure, and I'm developing it as a library now. You can find the script here: https://github.com/abtExp/arxivr/blob/master/arxivr/utils/scfegan_utils/utils.py

@abtExp it's 2022 and I know it's a bit late, but could you re-upload your repo, or share the scripts (preprocessing, train, loss...) with me? I'm stuck on some issues trying to re-implement this paper!
Really appreciate your effort and help!

@Papirapi hey, I abandoned this project a long time ago, but luckily I found the code files I was working on. A fair warning, though: the code is incomplete, with several pieces missing and a lot of bugs. But I'll provide a general implementation of the loss functions, the discriminator and generator, and other utility functions; feel free to build on them. Sorry, I didn't get time to make this a finished project. Hopefully you'll be able to implement it and get it done, but I'd recommend looking into newer approaches. All the best. 👍
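In the meantime, here's a general sketch of the two core losses: a masked per-pixel L1 term and the SN-PatchGAN hinge losses. This is not the paper's full objective (which also has perceptual, style, and total-variation terms), and the placement of the alpha weighting is my assumption:

```python
import tensorflow as tf

# Per-pixel L1 loss with the masked and unmasked regions weighted
# separately. `mask` is 1 on the edited region; alpha is a hyperparameter
# (its placement here is an assumption, not the paper's exact formula).
def per_pixel_loss(gen, gt, mask, alpha=1.0):
    masked = tf.reduce_mean(tf.abs(mask * (gen - gt)))
    unmasked = tf.reduce_mean(tf.abs((1.0 - mask) * (gen - gt)))
    return masked + alpha * unmasked

# Standard SN-PatchGAN hinge losses over per-patch discriminator outputs.
def sn_patchgan_hinge_losses(d_real, d_fake):
    d_loss = (tf.reduce_mean(tf.nn.relu(1.0 - d_real))
              + tf.reduce_mean(tf.nn.relu(1.0 + d_fake)))
    g_loss = -tf.reduce_mean(d_fake)
    return d_loss, g_loss
```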