Pretrained PSPNet output is bad
andrearosasco opened this issue · 0 comments
andrearosasco commented
Hey there,
I tried to use the trained checkpoints to initialize the segmentation network, but when I test it on YCB images the results are really bad. Here's the script:
import numpy as np
import matplotlib.pyplot as plt
import torch.utils.data
from torch.autograd import Variable
from lib.network import PoseNet
from segmentation.data_controller import SegDataset
dataset_root = '../../Desktop/YCB_Video_Dataset'
model = '../../Desktop/trained_checkpoints/ycb/pose_model_26_0.012863246640872631.pth'
# Model Initialization
estimator = PoseNet(num_points=1000, num_obj=21)
estimator.cuda()
estimator.load_state_dict(torch.load(model))
estimator.eval()
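# grab the PSPNet-based CNN from the pose network (the part I want to use for segmentation)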
segmentation = estimator.cnn
# Test Dataset
test_dataset = SegDataset(dataset_root, 'datasets/ycb/dataset_config/test_data_list.txt', False, 1000)
test_dataloader = torch.utils.data.DataLoader(test_dataset, batch_size=1, shuffle=True, num_workers=0)
rgb, target = next(iter(test_dataloader))
rgb, target = Variable(rgb).cuda(), Variable(target).cuda()
out = segmentation(rgb)
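# per-pixel argmax over the 32 output channels, scaled by 32 into [0, 1) for display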
plt.imshow(np.array((torch.argmax(out, 1).cpu()) / 32)[0])
plt.show()
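For completeness, this is how I'm eyeballing the prediction against the label from the same batch (a quick sketch; I'm assuming the second item returned by SegDataset is the per-pixel class mask):
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.set_title('prediction')
ax1.imshow(torch.argmax(out, 1).cpu().numpy()[0] / 32)
ax2.set_title('ground truth')
ax2.imshow(target.cpu().numpy()[0])
plt.show()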
The imported libraries are not modified except for SegDataset: apparently, the images were normalized with mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] without first being divided by 256. Correcting this (assuming it was actually an error and I didn't miss anything) slightly improved the results.
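Concretely, the fix amounts to scaling the image to [0, 1] before normalizing. A minimal sketch of what I changed, written with torchvision transforms (assuming the input is a uint8 PIL image; note that ToTensor divides by 255, not 256):
from torchvision import transforms

# ToTensor() converts the uint8 image to a float tensor in [0, 1],
# which is what these ImageNet statistics expect
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])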
What am I doing wrong? Is there any way to do what I'm trying to do, or should I train the segmentation model from scratch?
Thank you in advance :)