Evaluation on training set
kuba1302 commented
Hey,
I have a problem with evaluation using this framework. I'm using it to train the UMT model and I can't find a way to calculate mAP on the training dataset. Is it possible?
yeliudev commented
You may simply modify the config file (e.g., here) to change the validation dataset.
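For datasets whose splits live in separate annotation files, this amounts to pointing the validation entry at the training annotations. A minimal sketch, assuming such a layout; the dataset name and file paths below are illustrative, not taken from the repo:

dataset_type = 'QVHighlights'  # hypothetical dataset with per-split files
data_root = 'data/qvhighlights/'
data = dict(
    val=dict(
        type=dataset_type,
        # pointing the validation loader at the *training* annotations makes
        # the periodic evaluation report mAP on the training data instead
        label_path=data_root + 'train_annotations.jsonl',
        video_path=data_root + 'video_features',
        loader=dict(batch_size=1, num_workers=4, shuffle=False)))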
kuba1302 commented
I can see that, in the case of this dataset, there are separate files.
I'm using the YouTube pipeline.
This is my dataset/utils/static.py:
YOUTUBE_SPLITS = {
    "model-80-videos": {
        'train': ['brsmungVdYc', ...],
        'val': ['sHK03FstNZE', ...],
    }
}
The config for YouTube is the same as in the repo:
_base_ = 'datasets'
# dataset settings
dataset_type = 'YouTubeHighlights'
data_root = 'data/youtube/'
data = dict(
    train=dict(
        type=dataset_type,
        domain=None,
        label_path=data_root + 'youtube_anno.json',
        video_path=data_root + 'video_features',
        audio_path=data_root + 'audio_features',
        loader=dict(batch_size=4, num_workers=4, shuffle=True)),
    val=dict(
        type=dataset_type,
        domain=None,
        label_path=data_root + 'youtube_anno.json',
        video_path=data_root + 'video_features',
        audio_path=data_root + 'audio_features',
        loader=dict(batch_size=1, num_workers=4, shuffle=False)))
In contrast to the file that you have provided, here there is only one file.
The only workaround that I've found is to specify the same labels in static.py
for train and val. Yet this doesn't really seem like a clean approach. Is it possible to get evaluation during training on both the train and val sets?
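A slightly cleaner variant of that workaround might be a dedicated split entry in static.py that reuses the training videos as its validation list, so the real splits stay untouched. A minimal sketch, assuming the dataset resolves its videos through YOUTUBE_SPLITS; the extra split name is hypothetical, and the video lists are elided as in the issue:

# datasets/utils/static.py (sketch)
_TRAIN_VIDEOS = ['brsmungVdYc', ...]  # full list elided in the issue

YOUTUBE_SPLITS = {
    "model-80-videos": {
        'train': _TRAIN_VIDEOS,
        'val': ['sHK03FstNZE', ...],
    },
    # hypothetical extra split: validates on the same videos it trains on,
    # so the reported mAP becomes a training-set metric
    "model-80-videos-train-eval": {
        'train': _TRAIN_VIDEOS,
        'val': _TRAIN_VIDEOS,
    }
}

Pointing the config's split key (the domain field above, if that is what selects the entry) at "model-80-videos-train-eval" would then evaluate on the training videos without editing the original validation split.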