It is weird to use the GT mask for depth_pred in compute_depth_losses
lmomoy opened this issue · 1 comment
lmomoy commented
```python
def compute_depth_losses(self, inputs, outputs, losses, accumulate=False):
    """Compute depth metrics, to allow monitoring during training

    This isn't particularly accurate as it averages over the entire batch,
    so is only used to give an indication of validation performance
    """
    depth_pred = outputs[("depth", 0, 0)]
    gt_height, gt_width = inputs["depth_gt"].shape[2:]
    depth_pred = torch.clamp(F.interpolate(
        depth_pred, [gt_height, gt_width], mode="bilinear", align_corners=False), 1e-3, 80)
    # detach so that nothing computed from here on can backpropagate
    depth_pred = depth_pred.detach()

    depth_gt = inputs["depth_gt"]
    mask = depth_gt > 0

    # garg/eigen crop
    crop_mask = torch.zeros_like(mask)
    crop_mask[:, :, 153:371, 44:1197] = 1
    mask = mask * crop_mask

    depth_gt = depth_gt[mask]
    depth_pred = depth_pred[mask]
    # median scaling: align the scale of the prediction to GT before computing metrics
    depth_pred *= torch.median(depth_gt) / torch.median(depth_pred)

    depth_pred = torch.clamp(depth_pred, min=1e-3, max=80)

    depth_errors = compute_depth_errors(depth_gt, depth_pred)

    for i, metric in enumerate(self.depth_metric_names):
        if accumulate:
            losses[metric] += np.array(depth_errors[i].cpu())
        else:
            losses[metric] = np.array(depth_errors[i].cpu())
```
Doesn't this operation leak GT information into the prediction?
daniyar-niantic commented
Hi @lmomoy
As the comment on the function indicates, these metrics are not used for training (there is no backpropagation through them); they only monitor how well the model is doing during training.
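
For context, median scaling like this is a standard step when evaluating scale-ambiguous monocular depth predictions, and the `depth_pred.detach()` call in the snippet above already cuts the autograd graph, so no GT information can flow back into the network. Below is a minimal standalone sketch of the idea; `median_scale` is a hypothetical helper written for illustration, not part of monodepth2, though the clamp range (1e-3, 80) mirrors the snippet above:

```python
import torch

# Minimal sketch of median scaling as used for evaluating scale-ambiguous
# monocular depth predictions. `median_scale` is a hypothetical helper.
def median_scale(depth_pred, depth_gt):
    # Detach first: metrics computed downstream can never backpropagate.
    depth_pred = depth_pred.detach()
    # Align the (arbitrary) scale of the prediction to the GT scale.
    scale = torch.median(depth_gt) / torch.median(depth_pred)
    return torch.clamp(depth_pred * scale, min=1e-3, max=80)

# Usage: even if the prediction is part of an autograd graph, the scaled
# copy used for metrics is cut off from it.
pred = (torch.rand(1, 1, 4, 4) * 10).requires_grad_()
gt = torch.rand(1, 1, 4, 4) * 20
scaled = median_scale(pred, gt)
assert not scaled.requires_grad
```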