pfnet/PaintsChainer

removed

lllyasviel opened this issue · 7 comments

removed

Hmm, thanks for reporting...
I agree that the way the comparison is evaluated in that paper may be unfair.
Some people reference PaintsChainer even though it is not published, and I am happy about that.
PaintsChainer is currently a very practical product, and academically it may be a little difficult to show its superiority compared with pix2pix / Scribbler.
I started this project before pix2pix and Scribbler were released, but pix2pix shows very good results, and Scribbler implemented color hints before the PaintsChainer release.
I feel PaintsChainer's output may still be improved, but evaluating colorization itself is actually a very difficult problem.

@lllyasviel
Actually, in my experiments there wasn't any apparent difference between the results from conditional and unconditional discriminators.
By the way, using a conditional discriminator in this kind of task may easily cause overfitting.
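For clarity, the only structural difference between the two setups is what the discriminator gets to see; a minimal numpy sketch (shapes and function names are just illustrative, not from either codebase):

```python
import numpy as np

def d_input_unconditional(image):
    # an unconditional D judges only the colorized image itself
    return image

def d_input_conditional(sketch, image):
    # a conditional D (pix2pix-style) also sees the input sketch,
    # concatenated channel-wise, so it can penalize colorings that
    # ignore the line art
    return np.concatenate([sketch, image], axis=0)  # (C_s + C_i, H, W)
```

With a 1-channel sketch and a 3-channel image, the conditional input becomes a 4-channel tensor, which is the extra signal that can make the discriminator "too strong" on a small dataset.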

When I tried a cGAN, the result looked a little too strongly "GANed", I mean too colorful or over-fixated on the lines,
but I compared that against my highly optimized model at training time, so it is not a fair comparison.
I spent a lot of time on GAN tuning, but it is not feasible to do that for every model...
In my feeling a plain GAN is "drifting" and it is not easy to get a stable model, but it is sometimes useful if you do model selection and fine-tuning many times.

Hi, taizan
I am the author of the Auto-Painter. I feel so sorry that I did not notice your work before I finished my article.
The Auto-Painter was done when I was an intern at Samsung. They gave me a paper called “Scribbler: Controlling Deep Image Synthesis with Sketch and Color” and told me to do some related research on image generation. Image generation was quite an interesting field with many possible applications. Samsung had been committed to colorization research, but before the current boom of artificial intelligence they used graphics methods, so they wanted me to research painting a sketch into an image with a GAN. I investigated the related articles and found the pix2pix project. Both pix2pix and Scribbler focus on generating real pictures, such as faces, maps and bedrooms; they did not address cartoon image generation. Therefore, I started my experiments.
The sketch synthesis algorithm is inspired by a project on GitHub: XDoG in MATLAB. I rewrote it in Python as a filter to produce the training set. The training set of Minions was collected from Baidu Images with a simple crawler I wrote myself. The Japanimation training set was collected by my colleagues at Samsung from a cartoon website with a more complex crawler.
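For context, XDoG boils down to a difference-of-Gaussians sharpening pass followed by a soft threshold. A minimal numpy sketch (the parameter defaults here are illustrative, not the ones used for the actual training set):

```python
import numpy as np

def gaussian_blur(img, sigma):
    # separable 1-D Gaussian convolution with reflect padding
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    out = np.pad(img, radius, mode="reflect")
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 0, out)
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, out)
    return out[radius:-radius, radius:-radius]

def xdog(img, sigma=0.8, k=1.6, p=20.0, eps=0.01, phi=10.0):
    # extended difference-of-Gaussians: sharpen with a DoG term,
    # then soft-threshold with tanh to get a line-art look
    g1 = gaussian_blur(img, sigma)
    g2 = gaussian_blur(img, k * sigma)
    d = (1 + p) * g1 - p * g2
    return np.where(d >= eps, 1.0, 1.0 + np.tanh(phi * (d - eps)))
```

Applied to a grayscale image in [0, 1], this yields values near 1.0 on flat regions and darker values along edges, which is what makes it usable as a synthetic "sketch" filter for building (sketch, color) training pairs.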
I followed the basic method from Scribbler to add new constraints to the pix2pix objective and found the results improved significantly. Both the pix2pix model and the Auto-Painter model were trained on the same training set and shared the same parameters; the only difference between them was the objective function. I didn't make an unfair comparison between them.
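Schematically, an objective of this kind is pix2pix's L1 + adversarial loss with extra regularizing terms added on. A toy numpy sketch, where the particular extra terms (a feature/perceptual distance and a total-variation prior) and all weights are my illustrative placeholders, not necessarily the paper's exact choices:

```python
import numpy as np

def total_variation(img):
    # sum of absolute differences between neighbouring pixels (smoothness prior)
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def sketch_objective(fake, real, feat_fake, feat_real, d_fake_score,
                     w_pix=100.0, w_feat=0.01, w_tv=1e-4):
    # hypothetical weights; in practice these are tuned per dataset
    l_adv = -np.log(d_fake_score + 1e-8)            # generator's adversarial term
    l_pix = np.abs(fake - real).mean()              # L1 reconstruction (as in pix2pix)
    l_feat = ((feat_fake - feat_real) ** 2).mean()  # feature (perceptual) distance
    l_tv = total_variation(fake)                    # total-variation regularizer
    return l_adv + w_pix * l_pix + w_feat * l_feat + w_tv * l_tv
```

Since only the objective changes, the generator and discriminator architectures stay identical between the two models being compared, which is what makes the ablation fair.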
I think the additional parts of the objective function really work, and the network can be used in such an interesting field, so I wrote an article to share with others. I am very sorry that I only concentrated on published articles and overlooked your project.
After I published my paper on arXiv, someone forwarded it to Chinese Weibo. I read the comments and learned there was a project called PaintsChainer. I searched for an article about it but found nothing; then I searched on GitHub, and finally I found you there.
So I am writing such a long letter to explain the whole story. This is the first time I have published an article on arXiv, and I hadn't expected to cause such a big misunderstanding.
Best regards,
Irfan

Hi, Irfan.
Thanks for the comment and the explanation.
Sorry for being suspicious about the comparison. Reproducing and comparing image-GAN projects is very difficult, and that is why I released both my code and the trained model.
I would be very happy if you could add PaintsChainer and another illustration-coloring project (https://arxiv.org/pdf/1704.08834.pdf) to your references.

It's good to challenge a new field and publish your work, so that you can get a lot of feedback.
Please keep in touch.

All the code and the pre-trained model belong to Samsung. I do release my code and data for my other work; you can see my project EMM for stock prediction. I will try to contact Samsung to ask for the rights to release my code and pre-trained model, and I will try to finish my demo soon.

Sometimes PaintsChainer performs "under coloring", I mean the coloring is weaker than expected.
I think a stronger ratio of adversarial loss is effective, but it can also cause "over coloring", I mean the result becomes too colorful.
Maybe there is a smarter way of balancing them.
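One possible balancing heuristic (my own assumption, not something proposed in this thread) is to adapt the adversarial weight online based on how well the discriminator is doing, instead of committing to one fixed "coloring strength":

```python
def adapt_lambda(lam, d_real_acc, target=0.8, step=1.05):
    # hypothetical schedule: ease off the adversarial weight while the
    # discriminator is winning (risk of "over coloring"), push harder
    # while it is losing (risk of "under coloring")
    if d_real_acc > target:
        return lam / step  # D too strong -> weaken the GAN push
    return lam * step      # D too weak  -> strengthen the GAN push
```

Called once per iteration with the discriminator's accuracy on real samples, this nudges the adversarial/reconstruction ratio toward an equilibrium rather than hand-tuning it per model.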