NVIDIA dataset broken for GNMT?
Closed this issue · 5 comments
jeff-yajun-liu commented
training_results_v0.6/NVIDIA/benchmarks/gnmt/implementations/verify_dataset.sh always fails with errors. Is the dataset no longer valid? The data directory shows 12 GB after the data_download.sh script finishes.
ipvirg commented
I encountered a similar scenario where the downloaded data was corrupted. Rename or delete the data directory, then re-download the dataset fresh.
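Something like the following sketch of that flow, run from the implementations directory. The directory name `data` and the stub file are assumptions for illustration; here a stub `data_download.sh` stands in for the real (multi-GB) download so the snippet is runnable anywhere:

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d) && cd "$tmp"                     # sandbox; on a real system, cd to implementations/
mkdir data && echo bad > data/file.de             # pretend: the corrupted download
printf '#!/bin/sh\nmkdir -p data && echo ok > data/file.de\n' > data_download.sh  # stub downloader
mv data data.corrupt                              # set the suspect copy aside instead of deleting it
sh data_download.sh                               # re-download the dataset fresh (stubbed here)
cat data/file.de                                  # the new copy, ready for verify_dataset.sh
```

Keeping the old directory under `data.corrupt` lets you diff file sizes against the fresh copy before deleting it.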
jeff-yajun-liu commented
I have tried that on two different systems with identical configurations. I am wondering if I can 1) reproduce the benchmark, and 2) prepare the dataset without the script.
jeff-yajun-liu commented
The issue is resolved and we can close it: it was caused by a Python symlink issue.
jeff-yajun-liu commented
Issue resolved by reconfiguring the Python soft link.
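For anyone hitting the same thing: the fix amounts to repointing the `python` symlink at a Python 3 interpreter. The thread does not say which link was wrong, so the snippet below demonstrates the mechanics in a throwaway directory (a stub stands in for the real interpreter; on a real system the target would be something like `/usr/bin/python3`):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
printf '#!/bin/sh\necho Python 3.8.10\n' > "$tmp/python3"   # stub interpreter for the demo
chmod +x "$tmp/python3"
ln -sf "$tmp/python3" "$tmp/python"    # repoint `python` at the python3 binary
ver=$("$tmp/python" --version)         # what `python` now resolves to
echo "$ver"
rm -rf "$tmp"
```

Before changing anything on a real machine, `ls -l "$(command -v python)"` shows where `python` currently points.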