Dots in bucket name let connection fail due to hostname mismatch (requests.exceptions.SSLError)
Opened this issue · 8 comments
When trying to upload to a bucket that has a . in its name, this happens:
requests.exceptions.SSLError: hostname 'some.bucket.name.s3-eu-west-1.amazonaws.com' doesn't match either of 's3-eu-west-1.amazonaws.com', '*.s3-eu-west-1.amazonaws.com', 's3.eu-west-1.amazonaws.com', '*.s3.eu-west-1.amazonaws.com', 's3.dualstack.eu-west-1.amazonaws.com', '*.s3.dualstack.eu-west-1.amazonaws.com', '*.s3.amazonaws.com'
TLS is enabled. The requests version is 2.11.1.
As far as I know, it's correct that requests is complaining here, as a wildcard certificate doesn't cover an arbitrary depth of subdomains. The root cause here is the dots in the bucket name.
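To illustrate why the wildcard certificate fails to match, here is a minimal sketch of RFC 6125-style matching (a simplification of what requests/urllib3 actually do): the * covers exactly one DNS label, so each extra dot in the bucket name adds a label the wildcard cannot absorb.

```python
def wildcard_matches(pattern, hostname):
    """Return True if a single-label wildcard pattern matches hostname."""
    pat_labels = pattern.split(".")
    host_labels = hostname.split(".")
    if len(pat_labels) != len(host_labels):
        return False  # '*' cannot absorb more than one label
    return all(p == "*" or p == h for p, h in zip(pat_labels, host_labels))

print(wildcard_matches("*.s3-eu-west-1.amazonaws.com",
                       "plainbucket.s3-eu-west-1.amazonaws.com"))       # True
print(wildcard_matches("*.s3-eu-west-1.amazonaws.com",
                       "some.bucket.name.s3-eu-west-1.amazonaws.com"))  # False
```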
It would be good if one could configure tinys3 to not validate the server certificate. See http://docs.python-requests.org/en/master/user/advanced/#ssl-cert-verification
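tinys3 doesn't expose such an option today, so the sketch below shows what the workaround would look like at the requests level if it did (disabling verification on the underlying session; this is the real requests API, but wiring it into tinys3 is hypothetical and, of course, insecure):

```python
import requests

# Disabling certificate validation on a requests Session makes every
# request through it skip the hostname check. This silences the
# SSLError above at the cost of losing server authentication.
session = requests.Session()
session.verify = False  # insecure: skip server certificate validation

# A request through this session would then ignore the hostname mismatch,
# e.g. (not executed here):
# session.put("https://some.bucket.name.s3-eu-west-1.amazonaws.com/key",
#             data=b"...")
print(session.verify)  # False
```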
Meanwhile I found that the behaviour explained above happens with tinys3 version 0.1.12 but not with 0.1.11.
I get
NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7faf4329dbe0>: Failed to establish a new connection: [Errno -2] Name or service not known',))
when trying to upload to a bucket that has a period in the name. I've confirmed that it works with buckets with no period in the name.
I'm still getting this issue. I'm unable to use tinys3 to upload to buckets with dots in the names.
I confirm that it's working with 0.1.11:
sudo pip uninstall tinys3
sudo pip install 'tinys3==0.1.11'
The error occurs because the new version uses an upload URL like "bucket.s3.amazonaws.com/upload_key", whereas the previous version used a URL like "s3.amazonaws.com/bucket/upload_key".
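The two URL styles described above can be sketched as follows (the function name and parameters are illustrative, not tinys3's actual API):

```python
def bucket_url(bucket, key, endpoint="s3.amazonaws.com", path_style=False):
    """Build an S3 URL in either path style or virtual-hosted style."""
    if path_style:
        # old style: bucket in the path, so the wildcard cert always matches
        return "https://{0}/{1}/{2}".format(endpoint, bucket, key)
    # virtual-hosted style: bucket in the hostname, so dots break the cert
    return "https://{0}.{1}/{2}".format(bucket, endpoint, key)

print(bucket_url("some.bucket.name", "upload_key", path_style=True))
# https://s3.amazonaws.com/some.bucket.name/upload_key
print(bucket_url("some.bucket.name", "upload_key"))
# https://some.bucket.name.s3.amazonaws.com/upload_key
```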
@Scylardor
I see you changed the connection URL to use a subdomain instead of a folder; you did it here: 308e3b9
You commented that commit: ..."Come back of the virtual-hosting URL style as the old one actually isn't working with a recent bucket"...
It was 2+ years ago, but... maybe you remember what exactly was not working with a URL like that? :)
I wanted to make a pull request to fix the current issue, but I'm not sure if it will break anything for other users and cases.
Hi,
to be honest I never had or thought about this issue before :s
According to a quick Google search, yeah, it seems there's an issue with SSL wildcards not matching buckets with dots in their names while using the new virtual-hosted URL style (check here), so I guess it's "expected behavior". :(
Their suggestion is to use the old path-style syntax in those cases.
Apparently boto had the same issue, but it's fixed now. They use a config option that tells it to use the "old syntax" instead.
Indeed it's been a while so it would take me a bit to get my testing setup back on track. I could do it but it will take time.
But as I see it the fix should be easy... It should boil down to this line of the request factory
I think we could do the same as boto. They store the boolean that says which path style to use (calling_format) at the connection level, which I think is good (since the problem comes from the bucket name).
All that would be needed is to store this boolean from the conn in the S3Request init. Then we would know which path style to use in bucket_url.
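A rough sketch of that proposal, under the assumption that falling back to path style whenever the bucket name contains a dot is acceptable (class and attribute names here are illustrative; tinys3's real Connection and S3Request differ):

```python
class Connection:
    def __init__(self, endpoint="s3.amazonaws.com", use_path_style=None):
        # The flag lives on the connection, like boto's calling_format.
        # None means "auto-detect per bucket".
        self.endpoint = endpoint
        self.use_path_style = use_path_style

class S3Request:
    def __init__(self, conn):
        # store the flag from the conn in the S3Request init, as suggested
        self.endpoint = conn.endpoint
        self.use_path_style = conn.use_path_style

    def bucket_url(self, key, bucket):
        path_style = self.use_path_style
        if path_style is None:
            # dots in the bucket name would break the wildcard cert,
            # so fall back to the old path-style URL
            path_style = "." in bucket
        if path_style:
            return "https://{0}/{1}/{2}".format(self.endpoint, bucket, key)
        return "https://{0}.{1}/{2}".format(bucket, self.endpoint, key)

req = S3Request(Connection())
print(req.bucket_url("upload_key", "some.bucket.name"))
# https://s3.amazonaws.com/some.bucket.name/upload_key
```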
Don't hesitate to ask if you have more problems. I'm kind of on-the-go and can't easily look at all the code right now, but maybe later.
Merging #55 would be great! Been a while :)