SciSharp/SiaNet

GPU memory leak on big validation data-set

falahati opened this issue · 2 comments

When training with about 1 million records (each having 30 values), GPU memory usage is quite low with my model.
However, as soon as I add 5,000 validation records to the mix, memory usage skyrockets to 3.6 GB after the first epoch. Any more validation records and I can't run it on my GPU at all.
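
For scale, a back-of-envelope check (assuming 4-byte floats): the validation inputs themselves are only about 5,000 × 30 × 4 B ≈ 600 KB, so the 3.6 GB cannot be the data itself; it suggests per-batch intermediates are being retained somewhere on the device.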

I double-checked the code, and the library does indeed run validation in batches of the same size as the training batches (64 in my case). However, the memory leak is still there. Can somebody look into the reason behind it?
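
My guess at the mechanism, as a minimal sketch rather than SiaNet's actual code (FakeGpuTensor, EvaluateBatch, ValidateLeaky, and ValidateBounded below are all hypothetical stand-ins): if each batch's device-resident result is only released when the garbage collector gets around to it, GPU buffers pile up across all validation batches, because the GC only sees tiny managed wrappers and never feels the GPU memory pressure. Disposing each batch's result deterministically keeps peak usage at one batch.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical stand-in for a device-resident result tensor; libraries
    // with native GPU buffers typically expose them behind IDisposable.
    sealed class FakeGpuTensor : IDisposable
    {
        public float SumOfLoss { get; }
        public FakeGpuTensor(float sumOfLoss) => SumOfLoss = sumOfLoss;
        public void Dispose() { /* real version would free the GPU buffer here */ }
    }

    static class ValidationSketch
    {
        // Stand-in for one forward pass over a batch on the GPU.
        static FakeGpuTensor EvaluateBatch(float[][] batch) =>
            new FakeGpuTensor(batch.Sum(row => row.Sum()));

        // Leaky pattern: per-batch results are never disposed, so the native
        // buffers live until finalization. The GC sees only small managed
        // wrappers, not GPU pressure, so it may never collect them in time.
        public static float ValidateLeaky(IEnumerable<float[][]> batches)
        {
            float total = 0;
            foreach (var batch in batches)
                total += EvaluateBatch(batch).SumOfLoss; // result never disposed
            return total;
        }

        // Bounded pattern: reduce each batch to a host-side scalar, then
        // dispose the GPU result deterministically; peak usage is one batch.
        public static float ValidateBounded(IEnumerable<float[][]> batches)
        {
            float total = 0;
            foreach (var batch in batches)
                using (var result = EvaluateBatch(batch))
                    total += result.SumOfLoss;           // buffer freed at `}`
            return total;
        }
    }

If the backend's tensor wrappers follow the usual IDisposable-over-native-memory pattern, this is the classic way a correctly batched loop can still exhaust GPU memory.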

The model is pretty simple:

            _model = new Sequential();
            // Reshape each flat record into a (1, featureCount) time step for the LSTM.
            _model.Add(new Reshape(Shape.Create(1, _train.XFrame.Shape[1]), Shape.Create(_train.XFrame.Shape[1])));
            _model.Add(new Dropout(0.5));
            // Single LSTM layer with 150 units over the reshaped input.
            _model.Add(new LSTM(150, Shape.Create(1, _train.XFrame.Shape[1])));
            // Dense output sized to the label width.
            _model.Add(new Dense(_train.YFrame.Shape[1]));
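
For completeness, validation enters the picture through the training call, roughly like the following. This is a sketch from memory of the SiaNet v1-style API: treat the Compile/Train argument order, the Opt* enum members, and the _validation variable as assumptions, not verified signatures.

            // Assumed SiaNet v1-style calls; exact members may differ per version.
            _model.Compile(OptOptimizers.Adam, OptLosses.MeanSquaredError, OptMetrics.Accuracy);
            // Passing the 5,000-record validation set is what triggers the
            // blow-up after the first epoch; without it, GPU usage stays low.
            _model.Train(_train, 25, 64, _validation); // (data, epochs, batch size, validation)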

Solved by this commit:
falahati@9f99b4a

Closing this issue since there has been no activity and I have migrated to a new approach.