Out-Of-Memory Error in Validation Phase
Why is there a sudden CUDA memory usage increase in the validation phase?
The batch size set for validation in the config file is smaller than the one for training, yet CUDA memory usage suddenly increases during validation, which causes the OOM error.
Specifically, when the model runs the code in run.py, model.evaluate consumes more CUDA memory than model.fit. Could you please help me solve this problem? Thanks for your attention.
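For reference, this is roughly how I observed the difference. It is only a sketch: the model / data variables stand in for whatever run.py actually builds, and only the standard torch.cuda peak-memory calls matter here.

```python
import torch

def report_peak(tag: str) -> None:
    # Print the peak CUDA memory allocated since the last reset, then reset the counter.
    peak_mib = torch.cuda.max_memory_allocated() / 1024 ** 2
    print(f"{tag}: peak CUDA memory {peak_mib:.1f} MiB")
    torch.cuda.reset_peak_memory_stats()

# Placeholders for the objects constructed in run.py (names are illustrative only).
# model, train_data, val_data, test_data = ...

torch.cuda.reset_peak_memory_stats()
model.fit(train_data, val_data)   # training phase
report_peak("fit")

model.evaluate(test_data)         # validation/test phase, where the OOM happens
report_peak("evaluate")
```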
Could you please tell me which model and dataset you are using? I will check the code and fix this problem. I have encountered this problem before, but now I cannot reproduce it. Thank you!
Sure, thanks for your reply.
We use the DCN model with the Avazu dataset, which is not included among the default datasets in RecStudio.
Here is the dataset config file for Avazu:
```yaml
url: /root/autodl-tmp/avazu
user_id_field: &u user_id:token # TODO: comments for &u and *u
item_id_field: &i item_id:token
rating_field: &r label:float
time_field: &t timestamp:float
time_format: ~
encoding_method: ~
inter_feat_name: avazu_new.csv
inter_feat_field: ['item_id:token', 'label:float', 'timestamp:float', 'C1:token',
                   'banner_pos:float', 'site_id:token', 'site_domain:token',
                   'site_category:token', 'app_id:token', 'app_domain:token',
                   'app_category:token', 'device_id:token', 'device_ip:token',
                   'device_model:token', 'device_type:token', 'device_conn_type:token',
                   'C14:token', 'C15:token', 'C16:token', 'C17:token', 'C18:token',
                   'C19:token', 'C20:token', 'C21:token', 'user_id:token']
# dtype: ['object','float32','int8','int32','int16','int8','object','object','object','object','object','object','object','object',
#         'int8','int8','int16','int16','int16','int16','int8','int16','int32','int16']
inter_feat_header: 0
user_feat_name: ~
user_feat_field: ~
user_feat_header: ~
item_feat_name: ~
item_feat_field: ~
item_feat_header: ~
use_fields: [item_id, label, 'C1', 'site_id', 'site_domain', 'site_category', 'app_id', 'app_domain',
             'app_category', device_id, 'device_ip', 'device_model', 'device_type', 'device_conn_type',
             'C14', 'C15', 'C16', 'C17', 'C18',
             'C19', 'C20', 'C21']
field_separator: ","
min_user_inter: 0
min_item_inter: 0
field_max_len: ~
low_rating_thres: ~
# ranker_rating_threshold: 1
binarized_rating_thres: 1.0
max_seq_len: 20
# network features, including social network and knowledge graph; the first two fields are remapped to the corresponding features
network_feat_name: ~ # [[social.txt], [ml-100k.kg, ml-100k.link]]
mapped_feat_field: ~
network_feat_field: ~
network_feat_header: [~, ~]
save_cache: false # whether to save the processed dataset to cache
```
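In case it helps with reproduction, this is roughly how the config fields can be checked against the raw CSV. It is only a sketch: it assumes pandas and PyYAML are available, the paths follow the config above, and "avazu.yaml" stands for whatever name the dataset config file is saved under.

```python
import pandas as pd
import yaml

# Load the dataset config shown above (file name is hypothetical).
with open("avazu.yaml") as f:
    cfg = yaml.safe_load(f)

# Column names expected by the config, e.g. 'item_id:token' -> 'item_id'.
expected = [field.split(":")[0] for field in cfg["inter_feat_field"]]

# Read only the header row of the interaction file declared in the config.
header = pd.read_csv("/root/autodl-tmp/avazu/avazu_new.csv",
                     sep=cfg["field_separator"], nrows=0)

missing = set(expected) - set(header.columns)
print("columns missing from the CSV:", missing or "none")
```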
And the basemodel config file is as follows:
```yaml
data:                          # params related to dataset
  binarized_rating_thres: ~    # whether to binarize the rating
  fm_eval: False               # whether to set fm_eval to organize the batch data as one interaction per sample
  # the sampler for the dataset, only the uniform sampler is supported now
  neg_count: 0
  sampler: ~                   # [uniform]
  shuffle: True
  split_mode: user_entry       # [user, entry, user_entry]
  split_ratio: [0.8, 0.1, 0.1] # list or int type, list type for split by ratio, int type for leave one out
model:
  embed_dim: 8                 # embedding dimension for embedding layers, usually for item and user embeddings
  item_bias: False             # whether to add item bias
train:
  accelerator: gpu             # [cpu, gpu, dp]
  # ann: {index: 'IVFx,Flat', parameter: ~} ## 1 HNSWx,Flat; 2 Flat; 3 IVFx,Flat ## {nprobe: 1} {efSearch: 1}
  ann: ~
  batch_size: 512
  early_stop_mode: max
  early_stop_patience: 5
  epochs: 20
  gpu: 1
  grad_clip_norm: ~
  init_method: xavier_normal   # [xavier_normal, normal]
  item_batch_size: 512         # batch size for items to get all item features or get full item scores
  learner: adam
  learning_rate: 0.001
  num_threads: 10
  # negative sampler configuration in the training procedure
  # `method` describes the retrieving method used to retrieve negative items with a retriever
  sampling_method: none        # [none, sir, dns, toprand, top&rand, brute]
  # `sampler` describes the negative sampler used to train models
  sampler: uniform             # [uniform, pop, midx-uni, midx-pop, cluster-uni, cluster-pop]
  negative_count: 0            # number of negative items to be sampled
  excluding_hist: False        # whether to exclude user history in negative sampling
  # learning rate scheduler, refer to https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  scheduler: ~                 # [onplateau, exponential]
  seed: ~                      # random seed, usually 42 is a magic number
  weight_decay: 0.0            # weight decay for the optimizer
  tensorboard_path: ~
eval:
  batch_size: 128
  cutoff: [5, 10, 20]
  val_metrics: [ndcg, recall]
  val_n_epoch: 1
  test_metrics: [recall, precision, map, ndcg, mrr, hit]
  topk: 100
  save_path: './saved/'
```
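As a temporary workaround, I am thinking of shrinking the evaluation-side batches on top of the config above, roughly like this (only an idea, not yet verified to avoid the OOM):

```yaml
# tentative overrides for the config above (untested workaround sketch)
train:
  item_batch_size: 128   # was 512; used when collecting item features or full item scores
eval:
  batch_size: 64         # was 128; per-batch users during validation/test
```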
Thank you! I will fix this problem as soon as possible.