DamonOehlman/filestream

Could use a browser-ready example

The example usage in the README uses require(), which I guess means I need to involve browserify somehow to make it work? I'm not sure how to start with that. I browserified the example JS file, drag-n-drop.js, then made an HTML file that loads it, but it broke with an obscure error.

How do I get started giving filestream a shot in my browser?

That's probably a reasonable request, though it's possible that http://requirebin.com/ or beefy will meet your needs nicely (I use beefy all the time). Also, I think I've tracked down the problem you were having: the example relies on communicating some mime type information to properly create the blob once the file has been loaded into the browser.

Rightly, @feross removed the mime-component dependency, as it was super heavyweight and in most cases not required for filestream to be useful on its own. I've made a small tweak to the reader API that allows you to specify an external mime registry to be used for lookups.
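
For reference, the shape of that tweak looks something like this (just a sketch: the exact option name is an assumption on my part, so treat the updated examples/drag-n-drop.js as the authoritative usage):

var FileReadStream = require('filestream/read');
var mime = require('mime-component'); // the external mime registry

// Assumption: the registry is handed to the reader via an option
// and used to look up content types from the file name.
var reader = new FileReadStream(file, { mime: mime });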

The drag-n-drop.js example has been updated to use mime-component for its lookup information, so you should be able to do the following (in your existing clone of the repository) and get the code working:

git pull
npm install
npm install -g beefy
beefy examples/drag-n-drop.js --open

Thank you! I didn't know about beefy, and now the example works perfectly. I appreciate the immediate help.

I think the issue can be closed, but I do have a question -- is this a true streaming file upload solution in the browser? In other words, can the FileReader object avoid loading the entirety of a file into memory at any point? I did some research with @tmcw on the subject, and it didn't look like browsers supported a true streaming upload API yet.

Good question, and yes, I think you're right that browsers currently load the file completely into memory at this stage, though it's been a while since I've done any serious investigation on the topic. Having a quick reread of the spec, though, it definitely seems like the FileReader implementation is designed to read a file entirely into memory (see wording such as this).

I just spoke with @maxogden about it, and he confirmed that it is actually possible to pull this off without loading the file into memory.

The FileReader API's read methods are as you linked: they need to load the whole File or Blob into memory. But given a File object (which inherits methods from Blob), such as you'd get from a drag-and-drop callback, you can actually call Blob.slice and have it give you a sliced-off chunk of the original file (also a Blob), lazily calculated, without loading the whole file into memory. That chunk Blob is what you instantiate a FileReader around; the reader then loads the whole "chunk" into memory, but the chunk can be whatever size you want.
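
A minimal sketch of that pattern (the chunk size and callback shape here are arbitrary, just to show the mechanics):

// Read a File in fixed-size chunks without ever holding the whole
// thing in memory: only the current slice gets loaded by the reader.
function readInChunks(file, chunkSize, onChunk, onDone) {
  var offset = 0;

  function next() {
    if (offset >= file.size) return onDone();

    // slice() is inherited from Blob and is lazily evaluated
    var chunk = file.slice(offset, offset + chunkSize);
    var reader = new FileReader();

    reader.onload = function () {
      onChunk(reader.result); // an ArrayBuffer for this chunk only
      offset += chunkSize;
      next();
    };
    reader.readAsArrayBuffer(chunk);
  }

  next();
}

// e.g. from a drag-and-drop handler:
// readInChunks(file, 8128,
//   function (buf) { /* process one chunk */ },
//   function () { console.log('done'); });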

This is what @maxogden is doing in https://github.com/maxogden/filereader-stream, using 8128-byte chunks as a default.
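Usage, going by the filereader-stream README, is roughly:

var fileReaderStream = require('filereader-stream');

// `file` comes from a drag-and-drop callback or a file input;
// somewhereAwesome stands in for any writable stream.
fileReaderStream(file, { chunkSize: 8128 }).pipe(somewhereAwesome);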

And that's why @maxogden is one of my favourite people :)

Random note: if you wanna upload a 1tb file to s3 directly from the browser:

links to help figure out how to craft the xhr:

this would be cool to have as a module, e.g.

var upload = require('s3-resumable-upload-stream')
someFilereaderStreamInstance.pipe(upload('bucketNameOrPathOrSomething'))

@maxogden, so you know, there is a module out there that does the S3 streaming pipe interface via multipart uploads:

https://github.com/nathanpeck/s3-upload-stream

And I am using it in my code now. It's really great. I had some wild issues with browser XHRs at first, but after patching in a missing setImmediate call, I can't reproduce the buffer issue I isolated (and if I could, I know how to fix it now). So that's coming along!
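
For anyone else landing here, the wiring is roughly this (a sketch based on the s3-upload-stream README; the bucket and key are placeholders, and AWS credential setup is omitted):

var AWS = require('aws-sdk');
var fileReaderStream = require('filereader-stream');
var s3Stream = require('s3-upload-stream')(new AWS.S3());

// Multipart upload: parts are sent as they fill, so the whole
// file never needs to sit in memory at once.
var upload = s3Stream.upload({
  Bucket: 'my-bucket',        // placeholder
  Key: 'path/to/destination'  // placeholder
});

upload.on('error', function (err) { console.error(err); });
upload.on('uploaded', function (details) { console.log('done', details); });

fileReaderStream(file).pipe(upload);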