changyanchuan/SARN

SPD_dict

Rishikesh2338 opened this issue · 9 comments

When the SPD task runs for the first time, I get an error about a NoneType object:
[screenshot of the error attached]

Hi @Rishikesh2338,
Thanks for your interest in our work.

I cloned the code and downloaded the dataset, i.e., I ran everything from scratch. I ran the two commands mentioned in the README as follows and did not encounter any error.

First,
python train.py --task_encoder_model SARN --dataset SF
then,
python train.py --task_encoder_model SARN_ft --dataset SF --task_name spd --task_pretrained_model

Did you make any changes to the code? Could you please provide more context for the error?

Thanks, that resolved it.
Can you tell me how you calculated lonlat_units?

lonlat_units = {
    'OSM_Chengdu_2thRings_raw': {'lon_unit': 0.010507143, 'lat_unit': 0.008982567},
    'OSM_Beijing_2thRings_raw': {'lon_unit': 0.01172332943, 'lat_unit': 0.00899280575},
    'OSM_SanFrancisco_downtown_raw': {'lon_unit': 0.00568828214, 'lat_unit': 0.004496402877}
}

And below is the BJ config. What are 'dataset_lon2Euc_unit' and 'dataset_lat2Euc_unit' here?
elif 'BJ' == cls.dataset:
    cls.dataset_prefix = 'OSM_Beijing_2thRings_raw'
    cls.trajsimi_prefix = 'tdrive_len60_10k_mm_from_48k' # trajsimi test
    cls.dataset_lon2Euc_unit = 0.00001172332943
    cls.dataset_lat2Euc_unit = 0.00000899280575
    cls.trajdata_timestamp_offset = 1201824000
    cls.spd_max_spd = 19044

Hi @changyanchuan, I am training on 10 lakh (1 million) segments, but osm_loader.py is not working.

Thanks, that resolved it. Can you tell me how you calculated lonlat_units? And what are 'dataset_lon2Euc_unit' and 'dataset_lat2Euc_unit' in the BJ config?

I assume you are referring to the code between lines 50 and 62 in ./utils/osm2roadnetwork.py and the code between lines 142 and 150 in ./config.py.

lon_unit (similar to cls.dataset_lon2Euc_unit) is the length of one unit of ground distance, expressed in degrees of longitude, for the specific area; judging from the config values, lon_unit is per kilometer and cls.dataset_lon2Euc_unit is per meter (they differ by exactly a factor of 1000).
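
For reference, here is a minimal sketch of how such constants could be derived from an area's center latitude, assuming the usual spherical approximation (one degree of latitude is about 111.2 km, and one degree of longitude shrinks by cos(latitude)); lonlat_units_at is a hypothetical helper, not code from this repo:

import math

# Sketch (assumption, not repo code): degrees-per-meter units at a given
# latitude, using the spherical-Earth approximation.
def lonlat_units_at(center_lat_deg):
    meters_per_deg_lat = 111195.0  # approximate length of one degree of latitude
    meters_per_deg_lon = meters_per_deg_lat * math.cos(math.radians(center_lat_deg))
    return 1.0 / meters_per_deg_lon, 1.0 / meters_per_deg_lat  # (lon, lat) degrees per meter

lon_unit, lat_unit = lonlat_units_at(39.9)  # the Beijing 2nd-Ring area sits near 39.9 N
print(lon_unit, lat_unit)  # roughly 1.172e-05 and 8.993e-06, close to the BJ config values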

Hi @changyanchuan, I am training on 10 lakh (1 million) segments, but osm_loader.py is not working.

Sorry, we haven't tested on datasets of that size. You may need to use sparse-matrix-based graphs and consider multiple GPUs. Otherwise, even for native GNN methods, training a graph with millions of nodes on a single small GPU is still a hard problem.
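
As a rough illustration of the sparse-graph idea (a sketch with made-up edges, not code from this repo), the adjacency could be stored as a SciPy CSR matrix instead of a dense array:

import numpy as np
import scipy.sparse as sp

# Sketch: a sparse adjacency for ~1e6 segments stays small in memory,
# while a dense 1e6 x 1e6 float32 array would need terabytes.
n = 1_000_000                      # number of road segments (nodes)
src = np.array([0, 1, 2])          # example edge endpoints (segment ids)
dst = np.array([1, 2, 0])
vals = np.ones(len(src), dtype=np.float32)
adj = sp.coo_matrix((vals, (src, dst)), shape=(n, n)).tocsr()
print(adj.nnz, adj.shape)          # 3 (1000000, 1000000)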

Can we do minibatch sampling to scale to large graphs? Before the graph corruption step, we would take only a small city area at a time and then run the model with batch training.

Where can we get the trajectory datasets for the other states or cities that you have used?

Where can we get the trajectory datasets for the other states or cities that you have used?

The road networks of CD and BJ can be obtained from OpenStreetMap. You need to apply for access to the trajectory datasets of CD and BJ on the DiDi Open Data Platform (see Reference [1] in the paper). They are not public, so I cannot share them on GitHub.

Can we do minibatch sampling to scale to large graphs? Before the graph corruption step, we would take only a small city area at a time and then run the model with batch training.

I think so. In my opinion, sampling can be a solution for embedding large graphs.
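
As a rough sketch of that idea (sample_subgraph is a hypothetical helper, not part of SARN), one could sample a node minibatch and train on the induced subgraph:

import numpy as np

rng = np.random.default_rng(0)

# Sketch: sample a batch of nodes and keep only the edges whose endpoints
# both fall inside the batch, relabeling node ids for the small subgraph.
def sample_subgraph(edge_index, num_nodes, batch_size):
    nodes = rng.choice(num_nodes, size=batch_size, replace=False)
    keep = np.isin(edge_index[0], nodes) & np.isin(edge_index[1], nodes)
    sub = edge_index[:, keep]
    remap = {int(n): i for i, n in enumerate(nodes)}
    relabeled = np.array([[remap[int(s)] for s in sub[0]],
                          [remap[int(d)] for d in sub[1]]], dtype=np.int64)
    return nodes, relabeled  # sampled node ids and the relabeled edge list

edges = np.array([[0, 1, 2, 3], [1, 2, 3, 0]])  # toy 4-node ring graph
nodes, sub_edges = sample_subgraph(edges, num_nodes=4, batch_size=3)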