RenzKa/sign-segmentation

How feature-label alignment issue is solved in action segmentation?

Opened this issue · 0 comments

Dear authors,

When extracting, for example, I3D features from a video, the sliding window has a width and a stride, while the multi-class labels are per-frame, so you end up with a different number of labels than extracted feature vectors. How are they supposed to be compared? I know you use the code below to solve this, but how did you decide that this is the best way to tackle the problem? And what is its influence on training and evaluation?
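For concreteness, here is a toy count of the mismatch I mean (the window width and stride are hypothetical values, not necessarily the repo's settings):

```python
# Sliding-window feature extraction produces fewer vectors than frames.
T = 100       # frames in the video, one label per frame
w = 16        # hypothetical sliding-window width
stride = 1    # hypothetical stride

num_windows = (T - w) // stride + 1   # 85 feature vectors
num_labels = T                        # 100 per-frame labels
print(num_windows, num_labels)        # 85 100
```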

```python
def dilate_boundaries(gt):
    eval_boundaries = []
    for item in gt:
        # Pad with zeros so the edge frames can be inspected safely
        gt_temp = [0, 0] + item + [0, 0]
        con = 0
        for ix in range(2, len(item) + 2):
            if con:
                con = 0
                continue
            # At the right edge of a boundary run, extend it one frame
            if gt_temp[ix] == 1 and gt_temp[ix + 1] == 0 and gt_temp[ix + 2] == 0:
                gt_temp[ix + 1] = 1
                con = 1
            # At the left edge of a boundary run, extend it one frame
            if gt_temp[ix] == 1 and gt_temp[ix - 1] == 0 and gt_temp[ix - 2] == 0:
                gt_temp[ix - 1] = 1
        eval_boundaries.append(gt_temp[2:-2])
    return eval_boundaries

gt_list_eval = dilate_boundaries(gt_list)
```
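To sanity-check my reading of `dilate_boundaries`, here is a self-contained toy run (the function body is copied from the snippet above, only reindented; the input sequence is made up):

```python
def dilate_boundaries(gt):
    # Widen each boundary run by one frame on each side.
    eval_boundaries = []
    for item in gt:
        gt_temp = [0, 0] + item + [0, 0]  # pad so edges are safe
        con = 0
        for ix in range(2, len(item) + 2):
            if con:          # skip the frame we just set
                con = 0
                continue
            if gt_temp[ix] == 1 and gt_temp[ix + 1] == 0 and gt_temp[ix + 2] == 0:
                gt_temp[ix + 1] = 1   # extend right
                con = 1
            if gt_temp[ix] == 1 and gt_temp[ix - 1] == 0 and gt_temp[ix - 2] == 0:
                gt_temp[ix - 1] = 1   # extend left
        eval_boundaries.append(gt_temp[2:-2])
    return eval_boundaries

# A single boundary frame becomes a three-frame-wide boundary:
print(dilate_boundaries([[0, 0, 1, 0, 0, 0]]))  # [[0, 1, 1, 1, 0, 0]]
```

So, if I understand correctly, this only relaxes the evaluation targets around boundaries rather than changing the alignment itself.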

```python
self.gt_dict = {}
for ix, video in enumerate(self.vid_list):
    # Trim half a window from each end so labels line up with features
    self.gt_dict[video] = gt_list[ix][int(args.num_in_frames / 2):-int(args.num_in_frames / 2) + 1]
    assert len(self.gt_dict[video]) == len(self.features_dict[video])

self.eval_gt_dict = {}
for ix, video in enumerate(self.vid_list):
    self.eval_gt_dict[video] = gt_list_eval[ix][int(args.num_in_frames / 2):-int(args.num_in_frames / 2) + 1]
    assert len(self.eval_gt_dict[video]) == len(self.features_dict[video])
```
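If it helps clarify my question: my reading is that the trimming rule above matches a stride-1 sliding window, where each window's feature is paired with the label of its centre frame. A quick numerical check under that assumption (the values are hypothetical):

```python
# Hypothetical frame count and window width (num_in_frames).
T, w = 100, 16
labels = list(range(T))  # dummy per-frame labels

# The trimming rule from the snippet above: labels[8:-7] for w = 16.
trimmed = labels[int(w / 2):-int(w / 2) + 1]

num_windows = T - w + 1  # stride-1 sliding windows
print(len(trimmed), num_windows)  # 85 85
```

Is that the intended interpretation, and does it still hold if the extraction stride is larger than 1?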

Thank you in advance