Artifacts around hair in the reproduced results
07hyx06 opened this issue · 3 comments
Hi, thanks for your work! I tried to reproduce the results and reimplement the training code of NeRFBlendshape. Currently, the rendering quality on the validation set is overall similar to that of the released model:
But there are some artifacts around the hair, especially on the side of the head around the ear, shown in the figure below:
I find these results somewhat unexpected, since the rendering quality is good in other regions, including facial hair like the mustache and eyebrows. Have you encountered this problem during training? Can you suggest ways to avoid this artifact? Thanks a lot!
Here is my training schedule:
- Epochs 0–7: the L1 color loss is applied only to randomly sampled pixels, with weight 1.
- Epochs 7–15: with probability 0.5 I sample a 32x32 patch, otherwise random pixels. For random pixels, the L1 color loss is applied with weight 1; for a patch, the L1 color loss and LPIPS (VGG backbone) are both applied, each with weight 0.1. When sampling a patch, with probability 0.5 it is centered around the mouth, and with probability 0.5 it is sampled uniformly over the image.
- The batch size is 1, i.e., patches or random pixels are sampled from a single image per step (see the sketch after this list).
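For reference, here is a minimal sketch of the loss schedule above. The renderer interface (`render_fn`), the precomputed `mouth_center`, and the pixel batch size are hypothetical placeholders, not my actual code:

```python
# Minimal sketch of the loss schedule above. The renderer interface
# (render_fn), mouth_center, and the pixel batch size are hypothetical.
import random
import torch
import lpips  # pip install lpips

lpips_vgg = lpips.LPIPS(net='vgg')  # VGG backbone, as in the schedule

def compute_loss(epoch, render_fn, gt_image, mouth_center, patch=32):
    """gt_image: (3, H, W) in [0, 1]; render_fn(ys, xs) -> (3, N) colors."""
    _, H, W = gt_image.shape
    if epoch < 7 or random.random() < 0.5:
        # Random-pixel branch: plain L1 with weight 1.
        ys = torch.randint(0, H, (1024,))
        xs = torch.randint(0, W, (1024,))
        pred = render_fn(ys, xs)
        return (pred - gt_image[:, ys, xs]).abs().mean()
    # Patch branch: 50% centered near the mouth, 50% uniform over the image.
    if random.random() < 0.5:
        cy, cx = mouth_center
        y0 = int(min(max(cy - patch // 2, 0), H - patch))
        x0 = int(min(max(cx - patch // 2, 0), W - patch))
    else:
        y0 = random.randint(0, H - patch)
        x0 = random.randint(0, W - patch)
    ys, xs = torch.meshgrid(torch.arange(y0, y0 + patch),
                            torch.arange(x0, x0 + patch), indexing='ij')
    pred = render_fn(ys.reshape(-1), xs.reshape(-1)).reshape(3, patch, patch)
    gt = gt_image[:, y0:y0 + patch, x0:x0 + patch]
    # L1 and LPIPS both weighted 0.1; LPIPS expects (N, 3, H, W) in [-1, 1].
    l1 = (pred - gt).abs().mean()
    lp = lpips_vgg(pred[None] * 2 - 1, gt[None] * 2 - 1).mean()
    return 0.1 * l1 + 0.1 * lp
```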
Another question is about the dataset. I find that there are N+1 frames in the provided mp4, but only N annotations in the JSON files. Does the 0th annotation correspond to the 0th frame or the 1st frame?
Hi, thanks for your interest in reimplementing our method.
I'm not entirely sure, but it looks like this is caused by inefficient use of LPIPS. You may need to restrict the sampled patches so that they do not extend too far beyond the head region, because sampling in empty space contributes little to the rendering quality in the final stage.
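One possible way to implement this restriction, assuming a per-frame binary foreground mask (e.g. from matting) is available; the function name and thresholds are just illustrative:

```python
# One way to keep sampled patches on the head, assuming a per-frame binary
# foreground mask (e.g. from matting). Names and thresholds are illustrative.
import numpy as np

def sample_patch_in_mask(mask, patch=32, min_coverage=0.5, max_tries=20):
    """mask: (H, W) bool, True on the head. Returns the patch top-left."""
    H, W = mask.shape
    for _ in range(max_tries):
        y0 = np.random.randint(0, H - patch + 1)
        x0 = np.random.randint(0, W - patch + 1)
        # Accept the patch only if enough of it covers the head.
        if mask[y0:y0 + patch, x0:x0 + patch].mean() >= min_coverage:
            return y0, x0
    # Fallback: center the patch on a random foreground pixel.
    ys, xs = np.nonzero(mask)
    i = np.random.randint(len(ys))
    y0 = int(np.clip(ys[i] - patch // 2, 0, H - patch))
    x0 = int(np.clip(xs[i] - patch // 2, 0, W - patch))
    return y0, x0
```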
You could use cv2 to extract the frames; will they still be different?
import cv2
import os

vid_file = "id1.mp4"
os.makedirs("imgs", exist_ok=True)  # make sure the output folder exists

cap = cv2.VideoCapture(vid_file)
frame_num = 0
while True:
    _, frame = cap.read()
    if frame is None:  # end of video
        break
    cv2.imwrite(os.path.join("imgs", str(frame_num) + '.jpg'), frame)
    frame_num += 1
cap.release()
Thanks for your help! I will try to tune the LPIPS loss more carefully.
cv2 indeed gives the correct number of frames. I had used moviepy.editor.VideoFileClip to dump the frames, but it gives the wrong frame count.
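For anyone who hits the same mismatch, here is a quick sketch to compare the two frame counts on the same video:

```python
# Quick check of the frame-count mismatch on the same video (sketch).
import cv2
from moviepy.editor import VideoFileClip

cap = cv2.VideoCapture("id1.mp4")
n_cv2 = 0
while cap.read()[1] is not None:
    n_cv2 += 1
cap.release()

clip = VideoFileClip("id1.mp4")
n_moviepy = sum(1 for _ in clip.iter_frames())
clip.close()
print(n_cv2, n_moviepy)  # the two counts differed in my case
```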
@07hyx06 Hello! Sorry to bother you! Your reproduction looks great; could we discuss it? I need to compare against this paper for my graduation project, so I am currently reproducing it as well. If possible, I would really appreciate the chance to exchange ideas with you. My email is tara857312@gmail.com. Many thanks!