Slow upload speed to the THREDDS server in AWS
It appears that upload speeds to the AWS THREDDS server are unreasonably slow (AWS S3-to-S3 transfers run roughly 10x faster), whether files originate from within AWS or externally. Look into the THREDDS server configuration, or any other possible solutions, to determine whether anything can be done to speed up the transfer.
to_offsite.py --host apsviz-sftp-conn.adcircprediction.org --port 2022 --username apsviz --credentials /home/mvb49270/scratch/forecasting/conf/apsviz.rsa --directory 2024/al14/05/egom_rt_v20b/sapelo2/egom_al14_nopp_sapelo/nowcast --list archived_files.txt
2024-10-06T22:04:34GMT :: INFO :: transport.py :: _log :: Connected (version 2.0, client SFTPGo_2.2.3)
2024-10-06T22:04:34GMT :: INFO :: transport.py :: _log :: Authentication (publickey) successful!
2024-10-06T22:04:34GMT :: INFO :: sftp.py :: _log :: [chan 0] Opened sftp connection (server version 3)
2024-10-06T22:04:34GMT :: INFO :: archive_to_offsite.py :: upload_files :: Connected to apsviz-sftp-conn.adcircprediction.org:2022
2024-10-06T22:04:34GMT :: INFO :: archive_to_offsite.py :: upload_files :: Begin uploading file: fort.61.nc to /2024/al14/05/egom_rt_v20b/sapelo2/egom_al14_nopp_sapelo/nowcast/fort.61.nc
2024-10-06T22:04:35GMT :: INFO :: archive_to_offsite.py :: upload_files :: Finished uploading file: fort.61.nc to /2024/al14/05/egom_rt_v20b/sapelo2/egom_al14_nopp_sapelo/nowcast/fort.61.nc [Transfer Rate: 3.70 MB/s (29.57 mbps)]
2024-10-06T22:04:35GMT :: INFO :: archive_to_offsite.py :: upload_files :: Begin uploading file: fort.62.nc to /2024/al14/05/egom_rt_v20b/sapelo2/egom_al14_nopp_sapelo/nowcast/fort.62.nc
2024-10-06T22:04:35GMT :: INFO :: archive_to_offsite.py :: upload_files :: Finished uploading file: fort.62.nc to /2024/al14/05/egom_rt_v20b/sapelo2/egom_al14_nopp_sapelo/nowcast/fort.62.nc [Transfer Rate: 10.51 MB/s (84.11 mbps)]
2024-10-06T22:04:35GMT :: INFO :: archive_to_offsite.py :: upload_files :: Begin uploading file: fort.63.nc to /2024/al14/05/egom_rt_v20b/sapelo2/egom_al14_nopp_sapelo/nowcast/fort.63.nc
2024-10-06T22:27:09GMT :: INFO :: archive_to_offsite.py :: upload_files :: Finished uploading file: fort.63.nc to /2024/al14/05/egom_rt_v20b/sapelo2/egom_al14_nopp_sapelo/nowcast/fort.63.nc [Transfer Rate: 6.25 MB/s (50.02 mbps)]
2024-10-06T22:27:09GMT :: INFO :: archive_to_offsite.py :: upload_files :: Begin uploading file: maxele.63.nc to /2024/al14/05/egom_rt_v20b/sapelo2/egom_al14_nopp_sapelo/nowcast/maxele.63.nc
2024-10-06T22:27:21GMT :: INFO :: archive_to_offsite.py :: upload_files :: Finished uploading file: maxele.63.nc to /2024/al14/05/egom_rt_v20b/sapelo2/egom_al14_nopp_sapelo/nowcast/maxele.63.nc [Transfer Rate: 5.30 MB/s (42.42 mbps)]
2024-10-06T22:27:21GMT :: INFO :: archive_to_offsite.py :: upload_files :: Begin uploading file: maxele.63.nc to /2024/al14/05/egom_rt_v20b/sapelo2/egom_al14_nopp_sapelo/nowcast/maxele.63.nc
2024-10-06T22:27:32GMT :: INFO :: archive_to_offsite.py :: upload_files :: Finished uploading file: maxele.63.nc to /2024/al14/05/egom_rt_v20b/sapelo2/egom_al14_nopp_sapelo/nowcast/maxele.63.nc [Transfer Rate: 5.46 MB/s (43.70 mbps)]
2024-10-06T22:27:32GMT :: INFO :: archive_to_offsite.py :: upload_files :: Begin uploading file: maxwvel.63.nc to /2024/al14/05/egom_rt_v20b/sapelo2/egom_al14_nopp_sapelo/nowcast/maxwvel.63.nc
2024-10-06T22:27:42GMT :: INFO :: archive_to_offsite.py :: upload_files :: Finished uploading file: maxwvel.63.nc to /2024/al14/05/egom_rt_v20b/sapelo2/egom_al14_nopp_sapelo/nowcast/maxwvel.63.nc [Transfer Rate: 6.38 MB/s (51.02 mbps)]
2024-10-06T22:27:43GMT :: INFO :: archive_to_offsite.py :: upload_files :: Begin uploading file: fort.64.nc to /2024/al14/05/egom_rt_v20b/sapelo2/egom_al14_nopp_sapelo/nowcast/fort.64.nc
2024-10-06T23:13:13GMT :: INFO :: archive_to_offsite.py :: upload_files :: Finished uploading file: fort.64.nc to /2024/al14/05/egom_rt_v20b/sapelo2/egom_al14_nopp_sapelo/nowcast/fort.64.nc [Transfer Rate: 6.83 MB/s (54.63 mbps)]
2024-10-06T23:13:13GMT :: INFO :: archive_to_offsite.py :: upload_files :: Begin uploading file: fort.73.nc to /2024/al14/05/egom_rt_v20b/sapelo2/egom_al14_nopp_sapelo/nowcast/fort.73.nc
2024-10-06T23:21:59GMT :: INFO :: archive_to_offsite.py :: upload_files :: Finished uploading file: fort.73.nc to /2024/al14/05/egom_rt_v20b/sapelo2/egom_al14_nopp_sapelo/nowcast/fort.73.nc [Transfer Rate: 16.44 MB/s (131.49 mbps)]
2024-10-06T23:21:59GMT :: INFO :: archive_to_offsite.py :: upload_files :: Begin uploading file: fort.74.nc to /2024/al14/05/egom_rt_v20b/sapelo2/egom_al14_nopp_sapelo/nowcast/fort.74.nc
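For reference, the per-file rates in the log above (e.g. `3.70 MB/s (29.57 mbps)`) follow directly from bytes transferred and elapsed time. A minimal sketch of that arithmetic, assuming `MB/s` means decimal megabytes per second and `mbps` megabits per second; the helper name is hypothetical, not taken from archive_to_offsite.py:

```python
def transfer_rate(num_bytes: int, elapsed_s: float) -> tuple[float, float]:
    """Return (megabytes/s, megabits/s) for a transfer, rounded to 2 places."""
    mb_per_s = num_bytes / elapsed_s / 1_000_000  # decimal MB, as the log appears to use
    return round(mb_per_s, 2), round(mb_per_s * 8, 2)

# Example: roughly 8.46 GB sent over the fort.63.nc window (22:04:35 to
# 22:27:09, i.e. 1354 s) works out to the 6.25 MB/s the log reports.
print(transfer_rate(8_462_500_000, 1354))  # → (6.25, 50.0)
```

Note the gap between the fort.62.nc and fort.63.nc rates: sustained multi-gigabyte transfers settle near 5-7 MB/s, which is the behavior this issue is about.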
I have taken several steps to alleviate this problem:
- Reviewed the logs for excessive usage. I noted a few instances where file uploads sourced from up to 4 requests were in flight at the same time.
- Reviewed the logs for nefarious actors attempting to disrupt operations; none were found.
- Modified the TDS file-system archival strategy to wait 7 days before moving data to slower storage.
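The "up to 4 requests at the same time" observation can be checked mechanically rather than by eyeballing the logs. A minimal sketch, assuming log lines follow the Begin/Finished format shown in the excerpt above; the function name is hypothetical:

```python
from datetime import datetime

# Hypothetical log scanner (not part of archive_to_offsite.py): replay the
# "Begin uploading"/"Finished uploading" events in time order and report the
# peak number of uploads in flight at once.
def peak_concurrent_uploads(log_lines):
    events = []
    for line in log_lines:
        if "Begin uploading" not in line and "Finished uploading" not in line:
            continue
        stamp = line.split(" :: ")[0].removesuffix("GMT")
        ts = datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%S")
        events.append((ts, 1 if "Begin uploading" in line else -1))
    # Tuple sort puts Finished (-1) before Begin (+1) at the same timestamp,
    # so back-to-back sequential uploads are not counted as overlapping.
    events.sort()
    peak = current = 0
    for _, delta in events:
        current += delta
        peak = max(peak, current)
    return peak
```

Running this over a day of server logs would show whether the slow windows coincide with high concurrency, which is the hypothesis behind the scaling change below.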
After review, I concluded that the slowdown may be attributable to the SFTP server being overwhelmed at peak times. As a result, I applied horizontal scaling to the deployment.
Done