output is good but metrics do not make sense
juanfelipegiraldoc opened this issue · 1 comment
I'm using this model, and it behaves well when I test it on the whole "test" folder from LEVIR: the output images look great, but the metrics do not make sense (as can be seen in the screenshot). I thought I might have changed something by mistake, so I went back to the original code and ran demo_LEVIR.py and eval_cd.py; the output images look great again, but the metrics are still like the ones in the screenshot, and I cannot see what is happening with them. Note that for these last tests with demo_LEVIR.py and eval_cd.py I used the original code, so the only parameters I have changed are the ones in eval_cd.py below.
Any idea why this is happening?
eval_cd.py:
from argparse import ArgumentParser
import os

# module paths below assume the ChangeFormer repo layout
import utils
from models.evaluator import CDEvaluator


def main():
    # ------------
    # args
    # ------------
    parser = ArgumentParser()
    parser.add_argument('--gpu_ids', type=str, default='0', help='gpu ids: e.g. 0 0,1,2, 0,2. use -1 for CPU')
    parser.add_argument('--project_name', default='ChangeFormer_LEVIR', type=str)
    parser.add_argument('--print_models', default=False, type=bool, help='print models')
    parser.add_argument('--checkpoints_root', default='checkpoints', type=str)
    parser.add_argument('--vis_root', default='vis', type=str)

    # data
    parser.add_argument('--num_workers', default=8, type=int)
    parser.add_argument('--dataset', default='CDDataset', type=str)
    parser.add_argument('--data_name', default='quick_start_LEVIR', type=str)
    parser.add_argument('--batch_size', default=1, type=int)
    parser.add_argument('--split', default="test", type=str)
    parser.add_argument('--img_size', default=256, type=int)

    # model
    parser.add_argument('--n_class', default=2, type=int)
    parser.add_argument('--embed_dim', default=256, type=int)
    parser.add_argument('--net_G', default='ChangeFormerV6', type=str,
                        help='base_resnet18 | base_transformer_pos_s4_dd8 | base_transformer_pos_s4_dd8_dedim8|')
    parser.add_argument('--checkpoint_name', default='best_ckpt.pt', type=str)
    args = parser.parse_args()
    utils.get_device(args)
    print(args.gpu_ids)

    # checkpoints dir
    args.checkpoint_dir = os.path.join(args.checkpoints_root, args.project_name)
    os.makedirs(args.checkpoint_dir, exist_ok=True)

    # visualize dir
    args.vis_dir = os.path.join(args.vis_root, args.project_name)
    os.makedirs(args.vis_dir, exist_ok=True)

    # build the test dataloader and evaluate the saved checkpoint
    dataloader = utils.get_loader(args.data_name, img_size=args.img_size,
                                  batch_size=args.batch_size, is_train=False,
                                  split=args.split)
    model = CDEvaluator(args=args, dataloader=dataloader)
    model.eval_models(checkpoint_name=args.checkpoint_name)


if __name__ == '__main__':
    main()
Thanks.
The way to fix it is to add self.label_transform = "norm" to get_data_config.
A small mistake, but it takes time to find out where the problem is.
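A minimal sketch of what the patched get_data_config might look like, assuming a DataConfig class roughly like the one the CD dataloaders use (the class layout and root_dir paths here are illustrative, not the exact repository file). The likely reason this fixes the metrics: LEVIR ground-truth masks store change pixels as 255, and without the "norm" label transform they are never mapped to class index 1, so the confusion matrix behind the scores is computed against the wrong label values even though the saved prediction images look fine.

# Illustrative sketch only: class layout and paths are assumptions, not the
# exact upstream data config. The line that matters is label_transform = "norm",
# which tells the CD dataset loader to map LEVIR ground-truth masks from
# {0, 255} down to class indices {0, 1} before the metrics are accumulated.

class DataConfig:
    data_name = ""
    root_dir = ""
    label_transform = ""

    def get_data_config(self, data_name):
        self.data_name = data_name
        if data_name == 'LEVIR':
            self.root_dir = 'path/to/LEVIR'      # hypothetical dataset root
            self.label_transform = "norm"        # the fix reported above
        elif data_name == 'quick_start_LEVIR':
            self.root_dir = './samples_LEVIR'    # hypothetical sample folder
            self.label_transform = "norm"        # the fix reported above
        else:
            raise TypeError('%s has not been defined' % data_name)
        return self

After adding that line, re-running eval_cd.py with the same defaults should report sensible change-detection metrics.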