kratzert/lstm_for_pub

Suggestion: Run specs and runtime benchmarking

Opened this issue · 0 comments

Thank you for publishing such well-documented code. One question I have: what computing specs were used to train the LSTM model, and what were the runtime benchmarks?

My team and I are trying to run this in a Docker container on AWS, but we are running into puzzling memory errors, even with 192 GB of RAM. It seems to be a problem with PyTorch interacting with Docker, but I can't rule out that the LSTM model itself is simply very memory-intensive. I think documenting these specs would be helpful to anyone trying to replicate this work. Thank you!
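For what it's worth, one common cause of memory errors when running PyTorch inside Docker (rather than the model itself) is the container's shared-memory limit: Docker defaults `/dev/shm` to 64 MB, and PyTorch's `DataLoader` with `num_workers > 0` passes tensors between worker processes through shared memory. A minimal sketch to check whether a container is hitting that default (this is a diagnostic guess on our side, not something from this repo):

```python
import os


def shm_bytes(path="/dev/shm"):
    """Return the total size of the shared-memory filesystem in bytes."""
    st = os.statvfs(path)
    return st.f_frsize * st.f_blocks


if __name__ == "__main__":
    size = shm_bytes()
    print(f"/dev/shm size: {size / 1024**2:.0f} MiB")
    # Docker's default is 64 MiB, which is easily exhausted by
    # DataLoader workers exchanging batches via shared memory.
    if size <= 64 * 1024**2:
        print("Likely at the Docker default; consider --shm-size=8g "
              "or DataLoader(num_workers=0) as a workaround.")
```

If this turns out to be the issue, raising the limit with `docker run --shm-size=8g ...` (or setting `num_workers=0` in the `DataLoader`) may resolve it independently of the hardware specs.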