waynehoover/s3_direct_upload

CORS troubles

303devworks opened this issue · 23 comments

Any idea why I consistently get the "Origin http://localhost:3000 is not allowed by Access-Control-Allow-Origin." error on S3?

I've tried several different projects to get this working, including the Ryan Bates screencast and this one: http://pjambet.github.com/blog/direct-upload-to-s3/. Always the same result. My app works fine with a standard CarrierWave direct upload (so I know my bucket and credentials are OK), but once I introduce Ajax into the mix it fails 100% of the time with the error above.

I saw this bug reported: http://code.google.com/p/chromium/issues/detail?id=67743, so I switched from Chrome to Safari and then Firefox. Same result. I deleted my bucket, recreated it, and set up CORS again. I tried CORS with http://0.0.0.0:3000, http://localhost:3000, and * for the allowed origin.

Any ideas? Driving me nuts.

In S3, you have to set your CORS rule for the bucket to something like:

<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>http://localhost:3000</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

If that doesn't work, try accessing your site from http://0.0.0.0:3000 and add that to the CORS rule's allowed origin. Failing that, you could always just do

<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

but beware that this will allow any origin to post to that bucket.
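
(If you'd rather apply that rule from code instead of the S3 console, here's a minimal sketch using the aws-sdk-s3 gem; the bucket name, client region, and credentials lookup are placeholders, and this isn't part of s3_direct_upload itself.)

# Minimal sketch: apply the localhost CORS rule with the aws-sdk-s3 gem.
# Assumes AWS credentials are available via the usual environment variables.
require "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: "us-east-1") # use your bucket's region

s3.put_bucket_cors(
  bucket: "your-bucket-name", # placeholder
  cors_configuration: {
    cors_rules: [
      {
        allowed_origins: ["http://localhost:3000"],
        allowed_methods: ["GET", "POST", "PUT"],
        allowed_headers: ["*"],
        max_age_seconds: 3000
      }
    ]
  }
)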

I already tried what you suggested. But I tried again for kicks, same thing. Here it is wide open: http://cl.ly/LlXX

The only thing I can think of now is to try a different S3 account...

Hmm, are you sure you're uploading to the correct bucket? Check the Network tab in Chrome.

k13n commented

I had the same issue as described above and realized that in my case the problem was the URL of the Amazon bucket. The method S3DirectUpload::UploadHelper::S3Uploader#url builds a URL of the form http://s3.amazonaws.com/<bucket_name>/, while in my case the URL had to be http://<bucket_name>.s3.amazonaws.com/.

Therefore I monkey-patched the method like this and now the upload works fine:

module S3DirectUpload
  module UploadHelper
    class S3Uploader
      # Build the virtual-hosted-style URL: http(s)://<bucket>.<region>.amazonaws.com/
      def url
        "http#{@options[:ssl] ? 's' : ''}://#{@options[:bucket]}.#{@options[:region]}.amazonaws.com/"
      end
    end
  end
end

According to the Amazon documentation both URL styles are allowed, so I wonder whether this is due to a configuration problem on my side?

The reason we switched from http://<bucket_name>.s3.amazonaws.com/ to http://s3.amazonaws.com/<bucket_name>/ was because of DNS slowdowns that are caused when your bucket name has a _ in it. Basically, S3 has to do some conversions when you use certain characters in your bucket name, and S3 also throws a warning when this happens.

When you use the http://s3.amazonaws.com/<bucket_name>/ format, S3 doesn't have to do any conversions and there are no warnings.

So, it's best to change the URL for your bucket to use the http://s3.amazonaws.com/<bucket_name>/ convention. We should probably document this in the readme. Can you give a step-by-step breakdown of how you do this in AWS?

k13n commented

I couldn't find a way to change the URL to http://s3.amazonaws.com/<bucket_name> in the Amazon S3 management console. Moreover, most of the articles on Amazon's developer pages also use the other notation, for example this one: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html

I would be happy to try different settings if only I could find where to make them :-)

@k13n Coming at this from another angle, why does your app require that your URLs be in this format:

http://<bucket_name>.s3.amazonaws.com/? http://s3.amazonaws.com/<bucket_name> should work just as well.

k13n commented

When I remove the monkeypatch that I posted above, I get the following error message in my browsers (Chrome and Safari):
XMLHttpRequest cannot load https://s3.amazonaws.com/<bucket>/. Origin http://localhost:3000 is not allowed by Access-Control-Allow-Origin.

Taking a closer look at the requests the browser issues, I can see that Chrome sends a preflight request for CORS and gets an HTTP 301 Moved Permanently response. However, with the monkeypatch enabled the OPTIONS request gets an HTTP 200 OK response.

Interestingly enough, Amazon doesn't send a Location header with the 301 response, so it could be that the browser doesn't know where to go next and aborts the preflight request. What do you think @waynehoover?
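
(For anyone who wants to reproduce this outside the browser, below is a minimal sketch using Ruby's standard Net::HTTP to send the same kind of preflight OPTIONS request; the bucket name is a placeholder.)

# Reproduce the CORS preflight request with plain Ruby (bucket name is a placeholder).
require "net/http"
require "uri"

uri = URI("https://s3.amazonaws.com/your-bucket-name/")
req = Net::HTTP::Options.new(uri.request_uri)
req["Origin"] = "http://localhost:3000"
req["Access-Control-Request-Method"] = "POST"

res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }

puts "#{res.code} #{res.message}"   # e.g. "301 Moved Permanently" or "200 OK"
puts "Location: #{res["Location"].inspect}"
puts "Access-Control-Allow-Origin: #{res["Access-Control-Allow-Origin"].inspect}"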

What is your CORS configuration for that bucket?

k13n commented

Now I've got it working without the monkeypatch. The problem was that I didn't specify the region in the S3DirectUpload configuration. Amazon therefore redirected my request to the proper region, but my browser probably didn't follow that redirect for the preflight request.

Anyway, for completeness here is the CORS configuration as well. It is very permissive, because I wanted to make sure the problem wasn't due to a wrong configuration on the Amazon side.

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>

Thank you for your help @waynehoover!

Glad you figured it out. Hopefully this will help someone in the future. :)

I ran into the same problem as @k13n.
Should the region setting of the S3DirectUpload configuration become required instead of optional?

If you create a bucket in the AWS default region of 'US Standard' (us-east-1), you don't need to set this value in the config. Any other region requires the value to be set.

It doesn't need to become required but maybe it could be better documented in the comments:

# region prefix of your bucket url (required for non-default AWS regions), e.g. "s3-eu-west-1"
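
For example, an initializer along these lines (field names as in the gem's README; the ENV keys and the EU region prefix are just examples):

# config/initializers/s3_direct_upload.rb
S3DirectUpload.config do |c|
  c.access_key_id     = ENV["AWS_ACCESS_KEY_ID"]
  c.secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
  c.bucket            = ENV["AWS_S3_BUCKET"]
  # region prefix of your bucket url (required for non-default AWS regions), e.g. "s3-eu-west-1"
  c.region            = "s3-eu-west-1"
end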

Thank you so much. I've been having the same problem, and applying @k13n's monkeypatch solved it. This bug needs to be fixed!

Not sure that this is actually a bug, but rather a documentation issue.

I ran into the same problem. I chose the Ireland region when creating a bucket and forgot about that. It took me a while until I found the solution here.

+1 to change docs

Good idea. I updated the README with a note that this setting is required for non-default AWS regions.

I did what @k13n did with c.region, but I also changed my URL:

  c.region = "oregon"
  c.url    = "https://#{c.bucket}.s3.amazonaws.com/"

where bucket is set as c.bucket = ENV[...]

Where did you put the monkeypatch?

Yeah, where do you put the monkeypatch, please?

Just create a new initializer in your Rails app:

# config/initializers/s3_direct_upload.rb
module S3DirectUpload
  module UploadHelper
    class S3Uploader
      def url
        "http#{@options[:ssl] ? 's' : ''}://#{@options[:bucket]}.#{@options[:region]}.amazonaws.com/"
      end
    end
  end
end

Thanks a lot. I tried, but I'm still getting a 400 Bad Request response from AWS.
I also went back to the first phase of this and set up Ryan's episode 383 version. Still got the 400 Bad Request error.

I guess my problem is either with the AWS settings (I did not set up an IAM user; I'm using root credentials). However, I don't know which kind of 400 Bad Request I'm getting. (Is there a way I can dig into that? I really want to know what type of 400 Bad Request it is.) I tried manually adding a file on S3 and it worked, and I tried uploading with CarrierWave and it worked.
However, this s3_direct_upload doesn't.
One other possibility concerns 2.hours.from_now.utc vs. 10.hours etc. I live in Istanbul and use the Frankfurt region. Shall I try everything from 1.hour.from_now to 24.hours.from_now?

Folks, the problem in my case was because of the region selection, mainly the Frankfurt region: it requires signature version 4.
Also, for URL alterations you don't need the monkeypatch; you can use the c.url field to change it any way you want.
Just use this fork and it works well: https://github.com/RobotsAndPencils/s3_direct_upload