payloadcms/payload

Cannot edit uploaded image when hosting Payload CMS behind a reverse proxy

felskov opened this issue · 16 comments

Link to reproduction

No response

Describe the Bug

The issue occurs because we're running the application behind a reverse proxy (nginx), which causes the getExternalFile fetch request to stall and time out. This happens because of how headers are forwarded here:

const res = await fetch(fileURL, {
  credentials: 'include',
  headers: {
    ...req.headers,
  },
  method: 'GET',
})

When we dug into the issue, we realized that all headers from the incoming request (the request to edit the image) are forwarded as-is. That incoming request is a POST request with a body, so it carries a "Content-Length" header. The GET request above obviously sends no body, so nginx never served it: it kept waiting for a request body that never arrived.

I think you might want to be more explicit about which headers are forwarded (seemingly only cookies are relevant here?), to avoid causing issues for applications behind load balancers, reverse proxies, and the like.
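
As a rough sketch of what I mean (assuming cookie and authorization are the only headers Payload's own access control needs here, which I haven't verified):

const res = await fetch(fileURL, {
  credentials: 'include',
  headers: {
    // forward only what's needed for auth instead of spreading req.headers
    ...(req.headers.cookie ? { cookie: req.headers.cookie } : {}),
    ...(req.headers.authorization ? { authorization: req.headers.authorization } : {}),
  },
  method: 'GET',
})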

To Reproduce

  1. Host Payload CMS behind a nginx reverse proxy
  2. Upload an image
  3. Try to edit the image

Payload Version

2.11.1

Adapters and Plugins

@payloadcms/plugin-cloud-storage/s3

Can confirm that this also affects uploads to S3 without the reverse proxy. It pops up as a different error:

FetchError: request to https://prod-public.s3.ap-southeast-2.amazonaws.com/media/image.jpg failed, reason: Hostname/IP does not match certificate's altnames: Host: localhost. is not in the cert's altnames: DNS:s3-ap-southeast-2.amazonaws.com, ...snip

Removing the headers block entirely from the fetch request lets us crop and fixes the issue for us.

This might need to be configurable for cases where you're relying on Payload to provide access control, but with disablePayloadAccessControl: true, requests go directly to a public S3 bucket both from Payload and from your frontend.

It might be enough to rely on the disablePayloadAccessControl config setting in this case?
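
For context, our cloud storage setup looks roughly like this (collection slug, bucket, and region are placeholders matching our case, not a definitive config):

import { cloudStorage } from '@payloadcms/plugin-cloud-storage'
import { s3Adapter } from '@payloadcms/plugin-cloud-storage/s3'

// In payload.config.ts plugins:
cloudStorage({
  collections: {
    media: {
      adapter: s3Adapter({
        config: { region: 'ap-southeast-2' },
        bucket: 'prod-public',
      }),
      // files are requested directly from the public bucket, not through Payload
      disablePayloadAccessControl: true,
    },
  },
})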

I'm also having this issue, hosting as a Docker container behind Traefik.

@felskov Are you able to provide a simplified version of your nginx config, so I can try and recreate?

I'm not against offering additional configuration on the forwarded headers.

@denolfe Sure, it looks something like this:

server {
	server_name _;
	listen 80;

	client_max_body_size 10M;

	gzip		on;
	gzip_static	on;
	gzip_vary	on;
	gzip_proxied	any;
	gzip_types	text/plain text/css application/json application/javascript application/x-javascript text/javascript text/xml application/xml application/rss+xml application/atom+xml application/rdf+xml;

	location / {
		proxy_pass		http://127.0.0.1:8080; # or whatever port payload is listening for internally
		proxy_set_header	Upgrade $http_upgrade;
		proxy_set_header	Host $http_host;
		proxy_http_version	1.1;
		proxy_set_header	X-Forwarded-For $proxy_add_x_forwarded_for;
	}
}

Note, however, that we're also using certbot for SSL termination on the proxy; I've not included that here :)

I know it's none of my business, but personally I would start by looking into more selective forwarding of headers. Forwarding all headers "as is" seems to carry a lot of potential for issues. I'm assuming it's done to preserve authorization, but why not select only the actually relevant headers and forward those? Potentially wrapped in an "authorizedFetch" helper if this is something that's done in many places throughout the implementation.
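
Something along these lines, for example (the helper name and the header list are just my suggestion, not something that exists in Payload today):

import type { Request } from 'express'

// Hypothetical helper: forwards only the headers relevant for authorization
// instead of spreading the entire incoming request's headers.
const FORWARDED_HEADERS = ['cookie', 'authorization'] as const

export const authorizedFetch = async (url: string, req: Request) => {
  const headers: Record<string, string> = {}
  for (const name of FORWARDED_HEADERS) {
    const value = req.headers[name]
    if (typeof value === 'string') headers[name] = value
  }
  return fetch(url, { credentials: 'include', headers, method: 'GET' })
}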

This also happens for local files.

In this case, getExternalFile should never be called.

try {
    if (url && url.startsWith('/') && !disableLocalStorage) {
        const filePath = `${staticPath}/${filename}`;
        const response = await (0, _getFileByPath.default)(filePath);
        file = response;
        overwriteExistingFiles = true;
    } else if (filename && url) {
        file = await (0, _getExternalFile.getExternalFile)({
            req,
            data: data
        });
        overwriteExistingFiles = true;
    }
} catch (err) {
    throw new _errors.FileUploadError(req.t);
}

The url.startsWith('/') check is always false, because local file URLs also start with https.

I created a patch that checks whether the url starts with serverURL and uses the local file if that's true.

const isLocalFile = url && url.startsWith(config.serverURL);
try {
    if (url && isLocalFile && !disableLocalStorage) {
        const filePath = `${staticPath}/${filename}`;
        const response = await (0, _getFileByPath.default)(filePath);
        file = response;
        overwriteExistingFiles = true;
    } else if (filename && url) {
        file = await (0, _getExternalFile.getExternalFile)({
            req,
            data: data
        });
        overwriteExistingFiles = true;
    }
} catch (err) {
    throw new _errors.FileUploadError(req.t);
}

Not sure if this covers all cases, but it works for us.