ddebeau/zfs_uploader

Configuring different max part numbers for some S3 providers

Erisa opened this issue · 2 comments

Erisa commented

When setting up zfs_uploader against Scaleway Object Storage (specifically their GLACIER tier), everything worked as expected except for one caveat: the maximum part count on Scaleway is 1,000 rather than the 10,000 used by AWS.

This resulted in an error when uploading with the default setup: part sizes were calculated assuming 10,000 parts, so the upload eventually failed by exceeding Scaleway's 1,000-part limit.
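To make the failure mode concrete, here is a rough sketch of the arithmetic (the `part_size_for` helper and its defaults are illustrative, not zfs_uploader's actual code in job.py): a part size chosen against a 10,000-part limit can still produce far more than 1,000 parts once the 5 MiB S3 minimum part size kicks in.

```python
import math

def part_size_for(file_size, max_parts=10_000, min_part_size=5 * 1024 * 1024):
    """Pick a part size so the upload fits within max_parts parts.

    Hypothetical helper for illustration only; the real logic
    lives in zfs_uploader's job.py.
    """
    return max(min_part_size, math.ceil(file_size / max_parts))

size = 20 * 1024**3  # a 20 GiB backup

# Sized against AWS's 10,000-part limit, the 5 MiB minimum dominates,
# yielding 4,096 parts -- fine on AWS, but over Scaleway's 1,000 cap.
parts_aws = math.ceil(size / part_size_for(size, max_parts=10_000))

# Sized against a 1,000-part limit instead, the upload fits.
parts_scw = math.ceil(size / part_size_for(size, max_parts=1_000))
```

With `max_parts=10_000` the computed part size falls below the 5 MiB minimum, so the minimum wins and the 20 GiB file splits into 4,096 parts; with `max_parts=1_000` the part size grows to roughly 21 MiB and the upload stays within the limit.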

I resolved this for my use case by modifying a number in job.py (Erisa@20ed42f). Going forward, however, I feel it would be a good idea to allow this value to be configured in the zfs_uploader configuration file, and to document it in the README.

You could also detect the provider and apply predefined per-provider limits automatically, but it would still be nice to have the value in a user-configurable place.
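A minimal sketch of what the proposed setting might look like, assuming an INI-style config file like the one zfs_uploader already uses; the key name `max_multipart_parts` is invented here, and the actual name and section would be decided in the PR:

```python
import configparser

# Hypothetical config snippet: an optional per-provider part limit,
# falling back to AWS's 10,000 default when the key is absent.
CONFIG_TEXT = """
[DEFAULT]
bucket_name = my-backups
max_multipart_parts = 1000
"""

parser = configparser.ConfigParser()
parser.read_string(CONFIG_TEXT)

# getint with fallback keeps existing configs working unchanged.
max_parts = parser["DEFAULT"].getint("max_multipart_parts", fallback=10_000)
```

Reading the value with a fallback means existing config files keep working with the AWS default, while Scaleway users can lower the limit to 1,000.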

ddebeau commented
I also think it would be a good idea to let users set the value. Would you write a PR to add it to the config file and README?

Erisa commented

Sure, I can give that a go! I had been planning to open a PR for this earlier but couldn't find the time; I'll take a closer look soon.