S3 File Uploads don't work with AWS SDK v3.
0xi4o opened this issue · 6 comments
Support plan
- which support plan is this issue covered by? (e.g. Community, Sponsor, or Enterprise): Community
- is this issue currently blocking your project? (yes/no): No
- is this issue affecting a production system? (yes/no): No
Context
- node version: 14.17.1
- module (formidable) version: v2
- environment (e.g. node, browser, native, OS): Node.js
- used with (i.e. popular names of modules): @aws-sdk/client-s3
- any other relevant information:
What are you trying to achieve or the steps to reproduce?
The S3 file upload example worked perfectly with aws-sdk v2. In v3, AWS has split the SDK into per-service packages so you only import what you need. I tried using the @aws-sdk/client-s3 package in Node.js along with Formidable.
// Assumes something like the following is in scope:
// const stream = require('stream');
// const { PutObjectCommand } = require('@aws-sdk/client-s3');
// plus an s3Client instance (aws-sdk v2 S3 for upload(), @aws-sdk/client-s3 S3Client for send())
const uploadStream = (file) => {
  const passThrough = new stream.PassThrough();
  const params = {
    ACL: 'public-read',
    Body: passThrough,
    Bucket: <BUCKET_NAME>,
    Key: file.originalFilename
  };

  // This works (the aws-sdk v2 upload API)
  s3Client.upload(params, (error, data) => {
    if (error) {
      console.log('Error: ', error);
    }
    console.log('Data: ', data);
  });

  // This does not work. This is the method to upload to S3 in aws-sdk v3.
  s3Client.send(new PutObjectCommand(params))
    .then(response => {
      console.log('Response: ', response);
    })
    .catch(error => {
      console.log('Error: ', error);
    });

  return passThrough;
};
const upload = async (req, res) => {
  try {
    const form = formidable({
      fileWriteStreamHandler: uploadStream
    });
    form.parse(req, (error, fields, files) => {
      if (error) {
        res.status(400).json({ code: 'upload_failed', error: error.message });
        return;
      }
      res.status(200).json({ code: 'upload_success', fields, files });
    });
  } catch (error) {
    res.status(400).json({ code: 'upload_failed', error });
  }
};
What was the result you got?
Got an error in the console. I tried searching for this error thinking it might be an issue with the SDK, but from what I could find, this is the error the AWS SDK returns whenever the filename or content is empty. I'm confused as to why it works in v2 but not in v3.
NotImplemented: A header you provided implies functionality that is not implemented
at deserializeAws_restXmlPutObjectCommandError (~/app/node_modules/@aws-sdk/client-s3/dist/cjs/protocols/Aws_restXml.js:8254:41)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
at async ~/app/node_modules/@aws-sdk/middleware-serde/dist/cjs/deserializerMiddleware.js:6:20
at async ~/app/node_modules/@aws-sdk/middleware-signing/dist/cjs/middleware.js:12:24
at async StandardRetryStrategy.retry (~/app/node_modules/@aws-sdk/middleware-retry/dist/cjs/StandardRetryStrategy.js:51:46)
at async ~/app/node_modules/@aws-sdk/middleware-logger/dist/cjs/loggerMiddleware.js:6:22 {
Code: 'NotImplemented',
Header: 'Transfer-Encoding',
RequestId: '0741R14HV6YK6WQY',
HostId: 'uNMoIDhpb3v7cswhj1WgQUxMfKQkDZGaocIPbe0yTxAnxENHKrK4SZghiAneJWPoeNXOBkUKaj8=',
'$fault': 'client',
'$metadata': {
httpStatusCode: 501,
requestId: undefined,
extendedRequestId: 'uNMoIDhpb3v7cswhj1WgQUxMfKQkDZGaocIPbe0yTxAnxENHKrK4SZghiAneJWPoeNXOBkUKaj8=',
cfId: undefined,
attempts: 1,
totalRetryDelay: 0
}
}
What result did you expect?
Expected upload to work
It seems Transfer-Encoding was sent but not expected, but the bug does not come from formidable.
Try opening an issue here: https://github.com/aws/aws-sdk-js-v3
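For anyone hitting the same error: S3's PutObject rejects chunked Transfer-Encoding, so a PutObjectCommand with a stream body only succeeds when the byte length is supplied up front. A minimal sketch of that workaround, assuming the total size is known before the upload starts (the bucket name and expectedSize are placeholders, not from this thread):

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3Client = new S3Client({ region: process.env.S3_REGION });

// PutObjectCommand accepts a stream Body only when ContentLength is known;
// without it the SDK falls back to chunked Transfer-Encoding, which S3
// rejects with the NotImplemented error shown above.
async function putStream(key, readable, expectedSize) {
  await s3Client.send(new PutObjectCommand({
    Bucket: 'my-bucket',          // placeholder bucket name
    Key: key,
    Body: readable,               // any readable stream, e.g. the PassThrough above
    ContentLength: expectedSize,  // total size in bytes, known up front
  }));
}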
Was there any resolution here for streaming file uploads to S3? It seems like every solution now requires either a buffered body or a signed URL and a separate request.
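For reference, the signed-URL route mentioned above looks roughly like this (a sketch assuming @aws-sdk/s3-request-presigner is installed; the client then PUTs the file bytes directly to the returned URL, so the Node server never streams the body):

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3Client = new S3Client({ region: process.env.S3_REGION });

// Generate a short-lived URL the browser can PUT the file to directly.
async function getUploadUrl(key) {
  return getSignedUrl(
    s3Client,
    new PutObjectCommand({ Bucket: 'my-bucket', Key: key }), // placeholder bucket
    { expiresIn: 3600 } // URL valid for one hour
  );
}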
@AustinGil I'll take a look at Formidable specifically, but I was able to achieve streaming with busboy. The above example is a little strange because they never actually use file in uploadStream, which I suspect was at least one of the issues.
Actually, it looks like Discours is doing something like what you want and also using the new AWS lib-storage API.
https://sourcegraph.com/github.com/Discours/discoursio-webapp/-/blob/api/upload.mjs?subtree=true
@addy thanks for replying. I'll take a look at that library, but I'm not sure the example will work. As far as I can tell, the fileConsumer would store all the data in an in-memory Buffer (https://sourcegraph.com/github.com/Discours/discoursio-webapp/-/blob/api/upload.mjs?L57), which wouldn't work well for large files. I tried using a custom fileWriteStreamHandler to pipe the stream directly to an S3 upload request, but S3 complains about not knowing the content length.
The closest I've come is getting an upload working, but the data never arrives, so I end up with a 0-byte object in my bucket.
I'll take a look at lib-storage though. Maybe that'll do. Thanks for the advice :)
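For completeness, the buffered pattern being described above looks roughly like this; it avoids the content-length problem because a Buffer has a known size, but it holds the whole upload in memory, so it only suits small files. The bucket name and helper are illustrative, not taken from the Discours code:

import stream from 'node:stream';
import formidable from 'formidable';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3Client = new S3Client({ region: process.env.S3_REGION });

// Sketch: collect the file into memory, then upload it with a known length.
function parseAndBufferUpload(req) {
  const chunks = [];
  return new Promise((resolve, reject) => {
    const form = formidable({
      fileWriteStreamHandler: () => {
        const ws = new stream.PassThrough();
        ws.on('data', (chunk) => chunks.push(chunk)); // everything stays in memory
        return ws;
      },
    });
    form.parse(req, async (error, fields, files) => {
      if (error) return reject(error);
      try {
        const file = Object.values(files)[0];
        await s3Client.send(new PutObjectCommand({
          Bucket: 'my-bucket',           // placeholder bucket
          Key: file.newFilename,
          Body: Buffer.concat(chunks),   // Buffer length is known, so no Transfer-Encoding issue
        }));
        resolve({ fields, files });
      } catch (err) {
        reject(err);
      }
    });
  });
}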
Got this working without writing to disk. lib-storage was the key, along with a custom fileWriteStreamHandler:
import stream from 'node:stream';
import formidable from 'formidable';
import { S3Client } from '@aws-sdk/client-s3';
import { Upload } from '@aws-sdk/lib-storage';

const s3Client = new S3Client({
  endpoint: process.env.S3_URL,
  credentials: {
    accessKeyId: process.env.S3_ACCESS_KEY,
    secretAccessKey: process.env.S3_SECRET_KEY,
  },
  region: process.env.S3_REGION,
});

/**
 * @param {import('http').IncomingMessage} req
 */
function parseMultipartNodeRequest(req) {
  const s3Uploads = [];
  return new Promise((resolve, reject) => {
    const form = formidable({
      fileWriteStreamHandler: function (file) {
        const ws = new stream.PassThrough();
        // lib-storage's Upload handles streaming bodies of unknown length via multipart upload
        const upload = new Upload({
          client: s3Client,
          params: {
            Bucket: 'bucket',
            Key: file.newFilename,
            Body: ws,
          },
        });
        const uploadRequest = upload.done().then((response) => {
          file.location = response.Location;
          return file;
        });
        s3Uploads.push(uploadRequest);
        return ws;
      },
    });
    form.parse(req, (error, fields, files) => {
      if (error) {
        reject(error);
        return;
      }
      // Wait for the in-flight S3 uploads to settle so file.location is populated
      Promise.all(s3Uploads)
        .then(() => resolve({ ...fields, ...files }))
        .catch(reject);
    });
  });
}
I had to change a few things from my working code, but hopefully this sends folks in the right direction. Thanks again for the tip @addy
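A usage sketch for the helper above in a plain Node HTTP server (the route, port, and error shape are assumptions for illustration):

import http from 'node:http';

http.createServer(async (req, res) => {
  if (req.method === 'POST' && req.url === '/upload') { // hypothetical route
    try {
      const data = await parseMultipartNodeRequest(req);
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify(data));
    } catch (error) {
      res.writeHead(400, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: error.message }));
    }
    return;
  }
  res.writeHead(404);
  res.end();
}).listen(3000); // hypothetical port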