echocat/caddy-filter

filter rule obliterates fastcgi response body

mwpastore opened this issue · 15 comments

Using a filter rule with fastcgi causes the response body to be blanked out and the Content-Length set to zero. The same filter rule works fine elsewhere, including with proxy.

This appears to be resolved with the buffering regression fix in Caddy 0.9.5. Nope, I spoke too soon. It's still happening in some cases. I will try to narrow it down.

It drops the body of some FastCGI responses, but not others. However, I'm struggling to determine the exact difference(s) between working and non-working payloads that's triggering this behavior. Any suggestions on how to best capture FastCGI responses for debugging purposes would be greatly appreciated.

Okay, I've narrowed down the cause: If errors are present in the FastCGI response, and a filter rule directive is present in the same block as the fastcgi directive, the response body is dropped and the Content-Length is set to zero. This is colloquially known in the PHP community as a "white screen of death."

Interestingly, if the FastCGI response with errors is passed downstream (i.e. to an intermediate vhost) and the filter rule is applied there, this issue does not present itself. This would seem to indicate that errors can be separated from the rest of the FastCGI response, but this is not done (correctly) in all cases. In the short term, setting fastcgi.logging=0 in php.ini prevents this issue from occurring, but clearly it would be better for Caddy to extract and capture any FastCGI errors for its error log before passing the response body to filter rule for processing.
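To illustrate why the separation described above is possible at the protocol level: FastCGI multiplexes STDOUT (record type 6) and STDERR (record type 7) records on a single connection, so a client that routes records by type never lets log output taint the response body. Below is a minimal Go sketch of that demultiplexing, under the FastCGI record layout from the spec. This is not caddy-filter's actual code; `demux` and `record` are illustrative helpers.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Record types from the FastCGI specification.
const (
	fcgiStdout = 6
	fcgiStderr = 7
)

// demux splits a raw stream of FastCGI records into body (STDOUT)
// and log (STDERR) payloads, so stderr cannot blank out the body.
func demux(raw []byte) (stdout, stderr []byte, err error) {
	for len(raw) > 0 {
		if len(raw) < 8 {
			return nil, nil, fmt.Errorf("truncated record header")
		}
		typ := raw[1]
		contentLen := int(binary.BigEndian.Uint16(raw[4:6]))
		padLen := int(raw[6])
		total := 8 + contentLen + padLen
		if len(raw) < total {
			return nil, nil, fmt.Errorf("truncated record body")
		}
		payload := raw[8 : 8+contentLen]
		switch typ {
		case fcgiStdout:
			stdout = append(stdout, payload...)
		case fcgiStderr:
			stderr = append(stderr, payload...)
		}
		raw = raw[total:]
	}
	return stdout, stderr, nil
}

// record builds a single unpadded FastCGI record for demonstration:
// version, type, request ID (big-endian), content length, padding, reserved.
func record(typ byte, payload []byte) []byte {
	h := []byte{1, typ, 0, 1, 0, 0, 0, 0}
	binary.BigEndian.PutUint16(h[4:6], uint16(len(payload)))
	return append(h, payload...)
}

func main() {
	raw := append(
		record(fcgiStderr, []byte("PHP Notice: something")),
		record(fcgiStdout, []byte("a hello, world!\n"))...)
	body, logs, _ := demux(raw)
	fmt.Printf("body=%q logs=%q\n", body, logs)
}
```

Routing by record type like this is what lets the stderr payload be captured for the error log while the stdout payload flows on to the filter.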

Hi @mwpastore!

Could you check whether #3 also fixes this issue? Be aware that you will need to build it manually in a clean environment, or provide me an easy way to mock FastCGI in Go so that I can add this to the integration tests as well (like proxy and markdown). 😃

Unfortunately, this issue is still present after applying the fix. The best I can do is provide you with a Docker container to point Caddy at. Would that do the trick for you?

Caddyfile:

localhost:2015

tls off

filter rule {
  content_type .*
  search_pattern a
  replacement bb
}

fastcgi / 127.0.0.1:9000 php {
  root /var/www/html
}

index.php:

<?php
error_log("This error will break caddy with caddy-filter", 0);

echo "a hello, world!".PHP_EOL;

test.sh:

#!/bin/sh

docker run -d --rm \
  -p 127.0.0.1:9000:9000 \
  -v "$(pwd)":/var/www/html \
  --name evil_php \
  php:fpm-alpine php-fpm "$@"

caddy &
caddy_pid=$!

sleep 5

curl -isS http://localhost:2015

kill -s QUIT $caddy_pid
wait $caddy_pid

docker stop evil_php

Output:

$ PATH=~/Downloads/caddy_darwin_amd64_custom:$PATH ./test.sh
0e563bafb231af5548fe2842b502ce667c1b2b98e29a130db2379e4d6ecf8524
Activating privacy features... done.
http://localhost:2015
WARNING: File descriptor limit 256 is too low for production servers. At least 8192 is recommended. Fix with "ulimit -n 8192".
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Server: Caddy
X-Powered-By: PHP/7.1.1
Date: Fri, 27 Jan 2017 19:48:09 GMT
Content-Length: 0

evil_php

Output w/ workaround:

$ PATH=~/Downloads/caddy_darwin_amd64_custom:$PATH ./test.sh -d fastcgi.logging=0
a272f4f4bb22c305761508f84ebcec020736e5dc32f9c7546180cddc7a52bc17
Activating privacy features... done.
http://localhost:2015
WARNING: File descriptor limit 256 is too low for production servers. At least 8192 is recommended. Fix with "ulimit -n 8192".
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Server: Caddy
X-Powered-By: PHP/7.1.1
Date: Fri, 27 Jan 2017 19:47:55 GMT
Content-Length: 17

bb hello, world!
evil_php

Now I understand the problem fully and I am able to reproduce it. First I will try to create a mock test version of this, and then I will try to fix it.

Thanks for your support.

That's incredibly exciting news. Thanks Gregor!

Hey @mwpastore!

Could you please verify this fix? If it works, I will keep an eye on the Caddy download pages over the next few days and close this issue once I see it working there as well.

It was caused by unexpected behavior of the fastcgi module, which produces error objects for log messages sent by the FastCGI server (or, in your case, the PHP script). I added special handling for that, along with an integration test to make sure it keeps working. 😉 I hope a cleaner solution will be possible in the future.

It works! Awesome job! Thank you again.

I created an issue on the Caddy issue tracker and hope we find a better way to handle this. 😃
#1385 Transport of FastCGI log messages is unexpected

Hrrm, slight problem. Here's my expected response:

HTTP/1.1 301 Moved Permanently
Content-Type: text/html; charset=UTF-8
Location: https://www.powproductphotography.com/questions/
Server: Caddy
Status: 301 Moved Permanently
Strict-Transport-Security: max-age=31536000
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Ua-Compatible: IE=edge
X-Xss-Protection: 1; mode=block
Date: Sun, 29 Jan 2017 13:36:14 GMT
Content-Length: 0

Here's my response with this fix:

HTTP/1.1 200 OK
Accept-Ranges: bytes
Age: 0
Content-Length: 0
Content-Type: text/html; charset=UTF-8
Date: Sun, 29 Jan 2017 13:33:52 GMT
Location: https://www.powproductphotography.com/questions/
Server: Caddy
Status: 301 Moved Permanently
Strict-Transport-Security: max-age=31536000
Via: 1.1 varnish-v4
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Ua-Compatible: IE=edge
X-Varnish: 524552
X-Xss-Protection: 1; mode=block

It looks like it's taking the HTTP status code of the FastCGI response and returning it as a weird Status header instead of using it as the actual status of the response.
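For reference, RFC 3875 (the CGI spec, which FastCGI responses follow) says a `Status` header names the response status and should be consumed by the gateway, not forwarded as a literal header. A hedged Go sketch of that mapping, assuming a simple header map (`applyCGIStatus` is an illustrative name, not caddy-filter's API):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// applyCGIStatus promotes a CGI-style "Status" header (RFC 3875 §6.3.3)
// to the real HTTP status code and removes it from the header map,
// instead of leaking it downstream as a literal response header.
func applyCGIStatus(headers map[string]string) int {
	code := 200 // default when the script sets no Status header
	if s, ok := headers["Status"]; ok {
		// "301 Moved Permanently" -> 301; ignore a malformed value.
		if n, err := strconv.Atoi(strings.Fields(s)[0]); err == nil {
			code = n
		}
		delete(headers, "Status")
	}
	return code
}

func main() {
	h := map[string]string{
		"Status":   "301 Moved Permanently",
		"Location": "https://www.powproductphotography.com/questions/",
	}
	fmt.Println(applyCGIStatus(h)) // the code now drives the status line
}
```

Under this reading, the fixed handler should have emitted `HTTP/1.1 301 Moved Permanently` and no `Status` header at all.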

Thanks for your input. I will put together an integration test that demonstrates this bug as soon as possible, to make it easier to fix.

Dear @mwpastore! I created a new issue, #6, for this, because it is a separate bug, and I have already fixed it.

This issue is now fixed, but it remains open until I can verify that the Caddy download page serves the fix.