peak/s5cmd

EntityTooLarge: fails to cp a 6.3 GB file from one bucket to another bucket

tony710508 opened this issue · 11 comments

cp fails for a 6.3 GB file copied from one bucket to another bucket:

EntityTooLarge: Your proposed upload exceeds the maximum allowed object size

I have also just encountered this issue; has something changed?
Note: this is between S3 buckets, not from EBS to S3.

Thanks for the report. We're using the official AWS SDK. Copying objects (from S3 to S3) larger than 5 GB requires multipart uploading, which we currently don't leverage.

https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html

You can create a copy of your object up to 5 GB in size in a single atomic operation using this API. However, to copy an object greater than 5 GB, you must use the multipart upload Upload Part - Copy API.

We're going to use the multipart Copy API to support large file transfers between S3 buckets. Please note that uploading a large file to S3 works as expected.

PR aws/aws-sdk-go#2653 is required to address this issue.
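For reference, a minimal sketch of the UploadPartCopy flow described above, using aws-sdk-go v1. This is not s5cmd's actual implementation; the bucket/key names and the 128 MiB part size are placeholders.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// multipartCopy copies srcBucket/srcKey to dstBucket/dstKey with
// UploadPartCopy, which is required for server-side copies above 5 GB.
func multipartCopy(svc *s3.S3, srcBucket, srcKey, dstBucket, dstKey string) error {
	const partSize = int64(128 * 1024 * 1024) // 128 MiB per part (placeholder)

	// The source object's size determines how many parts we need.
	head, err := svc.HeadObject(&s3.HeadObjectInput{
		Bucket: aws.String(srcBucket),
		Key:    aws.String(srcKey),
	})
	if err != nil {
		return err
	}
	size := aws.Int64Value(head.ContentLength)

	// Start the multipart upload on the destination object.
	mpu, err := svc.CreateMultipartUpload(&s3.CreateMultipartUploadInput{
		Bucket: aws.String(dstBucket),
		Key:    aws.String(dstKey),
	})
	if err != nil {
		return err
	}

	var parts []*s3.CompletedPart
	for offset, num := int64(0), int64(1); offset < size; offset, num = offset+partSize, num+1 {
		end := offset + partSize - 1
		if end >= size {
			end = size - 1
		}
		// Each part is a server-side copy of one byte range of the source.
		out, err := svc.UploadPartCopy(&s3.UploadPartCopyInput{
			Bucket:          aws.String(dstBucket),
			Key:             aws.String(dstKey),
			UploadId:        mpu.UploadId,
			PartNumber:      aws.Int64(num),
			CopySource:      aws.String(srcBucket + "/" + srcKey),
			CopySourceRange: aws.String(fmt.Sprintf("bytes=%d-%d", offset, end)),
		})
		if err != nil {
			return err
		}
		parts = append(parts, &s3.CompletedPart{
			ETag:       out.CopyPartResult.ETag,
			PartNumber: aws.Int64(num),
		})
	}

	// Stitch the copied parts into the final destination object.
	_, err = svc.CompleteMultipartUpload(&s3.CompleteMultipartUploadInput{
		Bucket:          aws.String(dstBucket),
		Key:             aws.String(dstKey),
		UploadId:        mpu.UploadId,
		MultipartUpload: &s3.CompletedMultipartUpload{Parts: parts},
	})
	return err
}

func main() {
	sess := session.Must(session.NewSession())
	svc := s3.New(sess)
	if err := multipartCopy(svc, "src-bucket", "big-file", "dst-bucket", "big-file"); err != nil {
		fmt.Println("copy failed:", err)
	}
}
```

Each part except the last must be between 5 MiB and 5 GiB, and a multipart upload is limited to 10,000 parts, so the chosen part size bounds the largest object such a copy can handle.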

Just checking if there is any news on this issue. I could make use of this feature in my workflow. Not sure whether the PR @igungor mentioned has been merged, or what's stopping that merge.

Is there any update on the status of this issue?

batic commented

A similar problem occurs with sync when a file is larger than 5 GB.

az-z commented

case:
source - has objects
dest - is empty

s5cmd sync source/* dest/

"sync" fails silently on the first run after copying some objects.
"sync" will fail on the second run and report the offending object

az-z commented

Actually, this brings us to the question: how do we exclude files from cp/sync?

Any updates?

tweep commented

Ran into this today. Any updates on this, or is there a workaround?