DQiaole/ZITS_inpainting

lsm_hawp_inference.py_result_bad

Closed this issue · 13 comments

I tried to use lsm_hawp_inference.py to generate the .pkl files for my dataset (Places365).
I used the best_lsm_hawp.pth that you provided.
But the results are really bad.
I tried reducing the threshold from 0.8 to 0.5, but the results are still bad.

Do you have a best_place365_lsm_hawp.pth?
Or how can we train our own HAWP?
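For context, the generated .pkl files hold detected line segments together with confidence scores, so the effect of lowering the threshold can be checked directly. A minimal sketch of inspecting one file (the 'lines'/'scores' dict keys are inferred from this repo's data-loading code, not guaranteed):

```python
import pickle

def count_confident_lines(pkl_path, threshold=0.8):
    """Count detected line segments whose score exceeds the threshold.

    Assumes the .pkl written by lsm_hawp_inference.py is a dict with
    'lines' (normalized endpoint coordinates) and 'scores' (confidences),
    which is how this repo's load_wireframe reads it.
    """
    with open(pkl_path, "rb") as f:
        wf = pickle.load(f)
    return sum(1 for s in wf["scores"] if s > threshold)
```

Note that lowering the threshold only admits more low-confidence segments; it cannot improve detections that are poor to begin with.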

The image below is a sample from the training set (14001.jpg).
image

Hi~
You can try the test images we provide under the 'test_imgs' directory; if the results are consistent with ours, there is no problem.

I tried the images in 'test_imgs' and the results are consistent with yours.
But when I use my own training dataset, the line images are bad, so the trained model will not handle line images well.

Do you have any suggestions for choosing images?

The image below is the line-detection result on my own dataset.
image

I think it's due to the lack of meaningful lines (wireframes) in your images themselves.
As for choosing images, I think our indoor images are a good choice.

image
I also encountered this problem. Training with the Indoor dataset, this is the 96100th picture, and the line-segment detection does not work well.

Please try this first.
#33 (comment)

Hi, have you trained on the Indoor dataset, and how is the result?

Please try this first. #33 (comment)

OK!!! I will try it later. The lab is having a power cut.


Hi!!! Have you trained on the Indoor dataset? How's the result?

@YunBingbing Bad too.
I read the HAWP and MST papers. The new LSM-HAWP should not be that bad.
Maybe the dataset of the pretrained model is too different from my training dataset.
So I gave up on the line images.

This is the result on the test image: image

Your results are clearly problematic. I think there may be some problems with your environment, as HAWP is sensitive to the environment. You can refer to the environment configuration we provide.


image
Thank you very much! I reconfigured the environment and found that line-segment generation works well. I will train on the Indoor dataset later. I have a small question: how do I convert a file in .pkl format to .jpg?

# Requires: os, pickle, numpy as np, skimage.draw.
# to_int is a small helper in this repo that casts coordinates to ints.
def load_wireframe(self, idx, size):
    selected_img_name = self.image_id_list[idx]
    # The .pkl shares its basename with the image file.
    line_name = self.line_path + '/' + os.path.basename(selected_img_name).replace('.png', '.pkl').replace('.jpg', '.pkl')
    with open(line_name, 'rb') as f:
        wf = pickle.load(f)
    lmap = np.zeros((size, size))
    for i in range(len(wf['scores'])):
        # Keep only line segments above the score threshold.
        if wf['scores'][i] > self.wireframe_th:
            line = wf['lines'][i].copy()
            # Endpoints are stored normalized to [0, 1]; scale to pixels.
            line[0] = line[0] * size
            line[1] = line[1] * size
            line[2] = line[2] * size
            line[3] = line[3] * size
            # Draw an anti-aliased line and keep the per-pixel maximum.
            rr, cc, value = skimage.draw.line_aa(*to_int(line[0:2]), *to_int(line[2:4]))
            lmap[rr, cc] = np.maximum(lmap[rr, cc], value)
    return lmap
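To go from .pkl to .jpg, one option is to render the wireframe with load_wireframe above and save the resulting map as an image. A minimal sketch of the saving step (using Pillow is my assumption here, not part of this repo's code; any image library works, since lmap is just a float array in [0, 1]):

```python
import numpy as np
from PIL import Image

def save_lmap_as_jpg(lmap, out_path):
    """Scale the anti-aliased line map from [0, 1] to [0, 255] and save it."""
    img = Image.fromarray((np.clip(lmap, 0.0, 1.0) * 255).astype(np.uint8))
    img.save(out_path, quality=95)
```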


Thank you!!!!