The extracted preprocessed features are extremely large
ElegantLin opened this issue · 6 comments
Hi, I am extracting the preprocessed features and they are extremely large, over several TB in total. The extraction command I am using is tar zxvf **.tar.gz. Could you please help me with this?
Thanks!
Hi,
The data shouldn't be that big; the total download size is around 130GB. The extraction might take some time, but the command looks right.
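One thing worth double-checking, assuming **.tar.gz is a shell glob that matches several archives: tar treats every argument after the first as a member name to extract from the first archive, not as another archive. Extracting each file in its own invocation avoids that, e.g. with a sketch like:

# Extract each archive separately; passing several .tar.gz files to one
# tar invocation would treat the extras as member names, not archives.
for f in *.tar.gz; do
    tar zxvf "$f"
done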
After extraction, I found that the largest files are the data.mdb files, each of which is 700GB. May I ask whether they are really supposed to be that big?
Thanks!
Hi, can you run du -sh . inside the extracted feature directory? It shouldn't be that big; those are precomputed image features.
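For what it's worth, a likely explanation, assuming the features are stored as LMDB databases (data.mdb is LMDB's default data file): LMDB creates its data file as a sparse file sized to the configured map size, so ls -l reports the full map size (e.g. 700GB) even though far fewer blocks are actually allocated on disk. Comparing apparent size with allocated size makes the difference visible (the path below is illustrative):

# Apparent size: the full LMDB map size, the number ls -l also reports.
du -h --apparent-size feature_dir/data.mdb
# Allocated size: blocks actually written to disk, usually much smaller.
du -h feature_dir/data.mdb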
I tried to extract the features on my end, and there seems to be an issue with extracting the mdb files. I would suggest doing the data preprocessing on your end (as described in the README in this section) while we try to fix this issue.
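One hedged guess at the cause: if the archives were created without tar's --sparse option, the zero-filled holes in each sparse data.mdb are stored and written back out verbatim, so the extracted files really do occupy their full 700GB apparent size. On filesystems that support hole punching, util-linux's fallocate can reclaim the zeros after extraction (path again illustrative):

# Punch holes wherever the extracted LMDB file is all zeros,
# restoring sparseness without changing the file's contents.
fallocate --dig-holes feature_dir/data.mdb
du -h feature_dir/data.mdb   # allocated size should drop back down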
Thanks a lot! Please let me know once it's fixed.