S3 500-level errors render clj-aws-s3 unusable?
Closed this issue · 3 comments
We've been working to track down a strange issue for a week or two, and believe we have isolated it. My theory is that when S3 returns a 500-level server error, the AmazonS3Client
instance is no longer usable, and every subsequent request throws a "Stream Closed" exception.
I believe the memoization of s3-client
referenced in #3 means that once we hit such a server exception, we cannot use clj-aws-s3 again without a JVM restart. I don't yet have a good way to replicate and prove this, but I'm working on it.
Presuming so, I see a few possible patches I could submit that would fix the issue while remaining usable for the use case described in #3, but I'd like your input: make memoization optional, add a with-client
form of invoking the methods that lets the invoker decide on a client re-use strategy, or wrap calls to most methods in a handler that regenerates the s3-client
in the event of an exception.
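For concreteness, the with-client option might look something like this minimal sketch. It is hypothetical: neither with-client nor the *s3-client* var exists in clj-aws-s3 today; it only illustrates letting the caller own the client's lifecycle instead of relying on a memoized instance.
(import '(com.amazonaws.auth BasicAWSCredentials)
        '(com.amazonaws.services.s3 AmazonS3Client))

;; Client bound for the dynamic extent of a with-client form.
(def ^:dynamic *s3-client* nil)

(defmacro with-client
  "Binds a fresh AmazonS3Client for the duration of body, so a client
  poisoned by a 500-level error is discarded when the form exits."
  [cred & body]
  `(binding [*s3-client* (AmazonS3Client.
                          (BasicAWSCredentials.
                           (:access-key ~cred)
                           (:secret-key ~cred)))]
     ~@body))
The library's functions would then read *s3-client* instead of calling the memoized constructor, and the caller decides how long any given client lives.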
Thoughts?
I think I might have seen this behaviour as well, but it only happened once and I reloaded before I thought to check it out more thoroughly.
Regenerating the s3-client if an error occurs seems like the solution with the least impact and the most practical use. I wonder whether there's any case where direct access to the client would be useful.
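For what it's worth, here is a minimal sketch of that regenerate-on-error idea. None of these names are part of clj-aws-s3; the hypothetical call-with-client helper caches a client in an atom and rebuilds it once if a call throws.
(import '(com.amazonaws AmazonClientException)
        '(com.amazonaws.auth BasicAWSCredentials)
        '(com.amazonaws.services.s3 AmazonS3Client))

(defonce ^:private client-cache (atom nil))

(defn- fresh-client [cred]
  (AmazonS3Client.
   (BasicAWSCredentials. (:access-key cred) (:secret-key cred))))

(defn call-with-client
  "Invoke (f client) with a cached client; on AmazonClientException,
  discard the cached client, build a new one, and retry once."
  [cred f]
  (let [client (or @client-cache
                   (reset! client-cache (fresh-client cred)))]
    (try
      (f client)
      (catch AmazonClientException _
        (f (reset! client-cache (fresh-client cred)))))))
Routing the public functions through such a helper would keep the caching benefit from #3 while still discarding a client poisoned by a 500-level error.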
Just realized I never updated this. I couldn't reproduce this error reliably enough to be confident of a patch. If I ever do, I'll submit a pull request. Until then, we use a patch that does the following:
(import '(com.amazonaws.auth BasicAWSCredentials)
        '(com.amazonaws.services.s3 AmazonS3Client))

(defn new-s3-client
  "Create an AmazonS3Client instance from a map of credentials.
  Included as a workaround for possible aws.sdk.s3 client
  memoization issues."
  [cred]
  (AmazonS3Client.
   (BasicAWSCredentials.
    (:access-key cred)
    (:secret-key cred))))
(with-redefs [clj-aws-s3/s3-client new-s3-client]
...)
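(Note that with-redefs swaps out the root binding of clj-aws-s3/s3-client for the dynamic extent of its body and restores it afterwards, so every call inside the block constructs a fresh client instead of reusing the memoized one.)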
Unreproducible; closing.