w1oves/Rein

About GTA5+SYNTHIA config file

Closed this issue · 9 comments

Hello, how do I configure GTA5+SYNTHIA? In the Release/source code/Rein-GTAV-Synthia/config/base/dataset/ directory you provided, I could not find a dg_gta_syn_xx.py config file. I tried to set the config as follows:

train_dataloader = dict(
    batch_size=2,
    num_workers=2,
    persistent_workers=False,
    pin_memory=False,
    sampler=dict(type="InfiniteSampler", shuffle=True),
    dataset=dict(
        type="ConcatDataset",
        datasets=[
            {{base.train_gta}},
            {{base.train_syn}}
        ],
    ),
)

but it failed to work.

The config file in the release is used for the DG (Domain Generalization) setting, i.e., training on GTAV+Synthia and evaluating on Cityscapes+BDD100k+Mapillary. You can verify this by examining the train_dataloader and val_dataloader within the file. Feel free to make direct modifications as needed.

However, when I follow that config file and modify mine accordingly, this error occurs: TypeError: ConcatDataset.__init__() got an unexpected keyword argument 'pipeline'

Please provide your modified config.

_base_ = [
    "./gta_512x512.py",
    "./syn_512x512.py",
    "./bdd100k_512x512.py",
    "./cityscapes_512x512.py",
    "./mapillary_512x512.py",
]
train_dataloader = dict(
    batch_size=2,
    num_workers=2,
    persistent_workers=False,
    pin_memory=False,
    sampler=dict(type="InfiniteSampler", shuffle=True),
    dataset=dict(
        type="ConcatDataset",
        datasets=[
            {{_base_.train_gta}},
            {{_base_.train_syn}}
        ],
    ),
)
val_dataloader = dict(
    batch_size=1,
    num_workers=4,
    persistent_workers=False,
    sampler=dict(type="DefaultSampler", shuffle=False),
    dataset=dict(
        type="ConcatDataset",
        datasets=[
            {{_base_.val_cityscapes}},
            {{_base_.val_bdd}},
            {{_base_.val_mapillary}},
        ],
    ),
)
test_dataloader = val_dataloader
val_evaluator = dict(
    type="DGIoUMetric", iou_metrics=["mIoU"], dataset_keys=["citys", "map", "bdd"]
)
test_evaluator = val_evaluator

This config seems right. Please provide the entire generated config; it can be found in the corresponding work_dirs.
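(For reference, one way to inspect the fully merged config outside of work_dirs is to load it with mmengine and print it. This is only a sketch assuming an mmengine-based setup such as Rein's; the config path below is a placeholder, so substitute whichever config you actually launch with.)

# Minimal sketch, assuming an mmengine-based setup; the config path is a placeholder.
from mmengine.config import Config

cfg = Config.fromfile("configs/dinov2/rein_dinov2_mask2former_512x512_bs1x4.py")
print(cfg.pretty_text)        # fully merged config, with all _base_ files resolved
cfg.dump("merged_config.py")  # or write it out to share in the issue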

The error occurs because the pipeline argument should not be set on ConcatDataset. The pipeline should instead be created or updated individually for each member dataset. From the provided config it is unclear where you added this argument, so please double-check it.
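As a rough illustration of this point (a sketch only; the dataset type names, data roots, and transforms below are placeholders in the style of mmseg configs, not the repo's exact values):

train_pipeline = [
    dict(type="LoadImageFromFile"),
    dict(type="LoadAnnotations"),
    dict(type="RandomCrop", crop_size=(512, 512)),
    dict(type="PackSegInputs"),
]

# Wrong: ConcatDataset.__init__() has no `pipeline` argument, hence the TypeError.
# dataset = dict(type="ConcatDataset", pipeline=train_pipeline, datasets=[...])

# Right: each member dataset carries its own pipeline.
dataset = dict(
    type="ConcatDataset",
    datasets=[
        dict(type="GTAVDataset", data_root="data/gta", pipeline=train_pipeline),        # placeholder
        dict(type="SynthiaDataset", data_root="data/synthia", pipeline=train_pipeline),  # placeholder
    ],
)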

I commented out the line train_dataloader = dict(batch_size=4, dataset=dict(pipeline=train_pipeline)) in Rein/configs/dinov2/rein_dinov2_mask2former_512x512_bs1x4.py, and the multi-source setting now works well. Should this line also be commented out in your code?

That config is only for the single-source setting; the pipeline should instead be added in each individual dataset config. Thank you for pointing that out!
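(In other words, for a multi-source setup the override above is removed and the pipeline moves into each base dataset config. The sketch below is hypothetical: the dataset fields are placeholders, while the variable names mirror the _base_ keys used in the config earlier in this thread.)

# In gta_512x512.py (sketch; field values are placeholders):
train_gta = dict(
    type="GTAVDataset",
    data_root="data/gta",
    pipeline=train_pipeline,  # the pipeline is attached per dataset here
)

# In syn_512x512.py (sketch; field values are placeholders):
train_syn = dict(
    type="SynthiaDataset",
    data_root="data/synthia",
    pipeline=train_pipeline,
)

# The multi-source config then only concatenates them and never touches `pipeline`:
train_dataloader = dict(
    dataset=dict(
        type="ConcatDataset",
        datasets=[{{_base_.train_gta}}, {{_base_.train_syn}}],
    ),
)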