Add verification checks against $_SERVER header/env reads.
sybrew opened this issue · 11 comments
Is your feature request related to a problem?
Following the widely publicized vulnerability in WooCommerce Payments, I tried testing the vulnerable code against WPCS -- but WPCS reported nothing.
if ( ! isset( $_SERVER['HTTP_X_WCPAY_PLATFORM_CHECKOUT_USER'] ) || ! is_numeric( $_SERVER['HTTP_X_WCPAY_PLATFORM_CHECKOUT_USER'] ) ) {
	return null;
}

return (int) $_SERVER['HTTP_X_WCPAY_PLATFORM_CHECKOUT_USER'];
Describe the solution you'd like
I suggest adding a check similar to the one that already exists for $_POST: WordPress.Security.NonceVerification.Missing.
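For illustration, a hedged sketch of the asymmetry (the key names below are made up; as far as I can tell, only the $_POST read is flagged today):

if ( isset( $_POST['user_id'] ) ) {
	// Triggers WordPress.Security.NonceVerification.Missing: WPCS wants a
	// nonce verification call before this value is processed.
	$user_id = (int) $_POST['user_id'];
}

if ( isset( $_SERVER['HTTP_X_EXAMPLE_USER'] ) ) {
	// Not flagged by any current sniff, although the header is just as
	// attacker-controlled as the $_POST value above.
	$user_id = (int) $_SERVER['HTTP_X_EXAMPLE_USER'];
}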
Additional context (optional)
- https://developer.woocommerce.com/2023/03/23/critical-vulnerability-detected-in-woocommerce-payments-what-you-need-to-know/
- I have a HackerOne report (ID 1906271) involving Automattic's software on this.
The above contains too little information to see what the actual problem was code-wise, so as things are, it is impossible to determine whether your proposed solution has any merit.
Based on the limited information available, this feels off.
$_SERVER variables are always available, though they may vary depending on the server. $_POST and $_GET variables are only available via the URL or via forms. URLs and forms can be secured via a nonce. The server cannot.
If anything, I can imagine a security warning about the use of $_SERVER['HTTP_*'] variables, but I can't imagine any way that a sniff would be able to check that the use of such a variable is secured properly, so in that case we'd end up with a warning which can never be "fixed", only ignored via an inline ignore annotation after manual inspection.
Something like that is against the principle that errors/warnings thrown by sniffs should be fixable via code changes, so unless there are ways to mitigate this in the code which the sniff could reliably check for, this is not a typical candidate for a WPCS-based detection mechanism.
Like $_POST, $_GET, and $_REQUEST, indexes of $_SERVER can be populated by untrusted sources with arbitrary data. They can be used anywhere in the code, so a safe code path won't help; therefore, the data in these superglobal indexes must be handled carefully. That is why we add warnings around the first three, but they're missing for the last one. I currently cannot give concrete examples from WordPress without revealing the contents of the HackerOne report.
Not only is HTTP_* mutable, but PHP_SELF can also be manipulated arbitrarily by simply changing the request URI (explained here). According to this test, many other indexes can be overwritten via the environment, whatever that may invoke or resolve into (explained here).
The example I shared in my previous comment returns the X-Wcpay-Platform-Checkout-User HTTP header value to the determine_current_user filter. This allowed anyone visiting the website to log in as an administrator by simply setting that value to 1 (or whatever the admin ID of the site is). A mere warning of the header's arbitrary nature could have prevented the code from being implemented.
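To make the failure mode concrete, here is a simplified, hypothetical reconstruction of that pattern (not the actual WooCommerce Payments code) hooked into Core's determine_current_user filter:

add_filter(
	'determine_current_user',
	function ( $user_id ) {
		if ( ! isset( $_SERVER['HTTP_X_WCPAY_PLATFORM_CHECKOUT_USER'] ) || ! is_numeric( $_SERVER['HTTP_X_WCPAY_PLATFORM_CHECKOUT_USER'] ) ) {
			return $user_id;
		}

		// Any visitor who sends "X-Wcpay-Platform-Checkout-User: 1" is now
		// treated as user ID 1 -- usually the administrator.
		return (int) $_SERVER['HTTP_X_WCPAY_PLATFORM_CHECKOUT_USER'];
	}
);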
So, I propose adding caution around using the superglobal. As you're imagining, the warning would suggest writing a comment explaining why it is used, whether it's properly sanitized, and what measures are taken to ascertain the validity of the data. It should only be disabled via a // phpcs:ignore comment. I understand why you are not in favor of this.
Again, after my HackerOne report is patched and disclosed, I can show real-world examples of the broader implications that may make you consider making an exception to the principle. Unfortunately, sanitization functions won't suffice; only proper code paths will.
The issue is not with $_SERVER though, since it generally is not populated by the user. The issue is ONLY when accessing HTTP headers via $_SERVER['HTTP_*'] without validating that (e.g. with current_user_can + wp_verify_nonce).
In terms of sanitizing, that code is fine.
The vulnerability I mentioned in my previous reply is fixed and released to the public, so I can give another real-world example.
Again in an Automattic plugin, these changes in Jetpack remove calls to wp_is_json_request() and wp_is_xml_request(), for they are untrusted: Automattic/jetpack@0a3f935#diff-569b9a627b3b65759e6e3e44a97e6c34f3f4f1c68c4843dea4080e428005a2a5R492-R496. There's no workaround implemented, just a simple drop of functionality.
Aside: the Vary stuff in that same commit should be in Core, and Jetpack does nothing with it.
Their changelog reads:
Security: ensure blocks are always fully displayed on your site, even when using a caching plugin.
The reason is detailed in HackerOne report 1906271 (not sure if it's public, so eh):
If we look at wp_is_json_request(), we get this code:
function wp_is_json_request() {
	if ( isset( $_SERVER['HTTP_ACCEPT'] ) && wp_is_json_media_type( $_SERVER['HTTP_ACCEPT'] ) ) {
		return true;
	}

	if ( isset( $_SERVER['CONTENT_TYPE'] ) && wp_is_json_media_type( $_SERVER['CONTENT_TYPE'] ) ) {
		return true;
	}

	return false;
}
In that function, there are no checks against attacks other than wp_is_json_media_type(). If we look deeper, in wp_is_json_media_type(), we get this code:
function wp_is_json_media_type( $media_type ) {
	static $cache = array();

	if ( ! isset( $cache[ $media_type ] ) ) {
		$cache[ $media_type ] = (bool) preg_match( '/(^|\s|,)application\/([\w!#\$&-\^\.\+]+\+)?json(\+oembed)?($|\s|;|,)/i', $media_type );
	}

	return $cache[ $media_type ];
}
Practically, that performs a literal string test on an arbitrary value without any authentication for integrity. You can set HTTP_ACCEPT to application/json, and wp_is_json_request() will return true.
wp_is_xml_request() suffers from the same bug.
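In other words, gating any logic on these functions hands the decision to the client (a hypothetical gate, just to illustrate the reach):

if ( wp_is_json_request() ) {
	// Reachable by every anonymous visitor who sends "Accept: application/json",
	// so nothing in this branch may skip capability or nonce checks on the
	// assumption that it is an "internal" JSON request.
}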
The functions wp_is_json_request() and wp_is_xml_request() were used in Jetpack as-is, and other smaller plugins are also affected.
To conclude, WordPress Core is vulnerable to manipulation via $_SERVER['HTTP_*'] (see CWE-454 and CWE-473), but the Core developers are not aware of this (yet) -- though I hoped Automattic employees would have forwarded this upstream. Hence I proposed a new PHPCS test to make everyone wary that using $_SERVER['HTTP_*'] is as dangerous as trusting $_GET et al., and hopefully get this fixed upstream with the proper validation.
I consider those two functions as dangerous as is_admin(), though is_admin() received more attention regarding its weakness over the years due to its ambiguity (is_super_admin() is what devs thought it would act like).
To get back to this notion:
Unless there are ways to mitigate this in the code which the sniff could reliably check for, this is not a typical candidate for a WPCS based detection mechanism.
@kkmuffme is right; this could be resolved well with a current_user_can() test, or otherwise proof of origin (nonce, referer checks).
To summarize:
Using $_SERVER['HTTP_*'] should trigger/require a wp_verify_nonce (or similar) check, just like using any $_GET['whatever'] does.
FYI: I've been thinking about this one and for now, my conclusion is that the specs for what should be checked for are too unclear to make this actionable at this time.
If the specifications can be defined more precisely, we can revisit.
are too unclear to make this actionable at this time.
As mentioned above, any $_SERVER['HTTP_*'] and filter_input( INPUT_SERVER, 'HTTP_*' ) access should be checked.
@kkmuffme That's not the part which is unclear... the part which is unclear is what mitigation should be accepted, whether the mitigation only applies to these keys (and therefore would need to be special-cased), etc.
A good set of code samples with "good" and "bad" code patterns may help clarify things.
There are actually multiple security issues that are connected and extremely often done wrong in tons of plugins:
1. WPCS should have a rule to check for current_user_can when checking for a nonce (since this is not done in tons of plugins, making nonces only partially useful). I'm not sure what the best way to do that would be - perhaps if any POST/GET/SERVER/COOKIE value is used anywhere except within sanitizing functions (+ isset/empty/count/...), and current_user_can hasn't been checked, it should give an error. This should also apply to anything read via filter_input( INPUT_POST/GET/SERVER/COOKIE ).
2. The existing nonce rule should also apply to the SERVER/COOKIE superglobals (and filter_input), except for a hardcoded list of keys: https://gist.github.com/Pierstoval/f287d3e61252e791a943dd73874ab5ee. Technically, restricting it to HTTP_* + a few others (CONTENT_TYPE, ...) would be sufficient, but I think the "safe" approach of disallowing everything except what is specified is better.
3. 1) and 2) should also apply to insecure WP core functions, e.g. https://developer.wordpress.org/reference/functions/wp_is_json_request/. When using that, you MUST use a nonce/current_user_can check, as this function is inherently insecure.
Good: anything that has current_user_can + nonce checked when $_SERVER/$_POST/... is used.
Bad: anything that does not have current_user_can + nonce checked when $_SERVER/$_POST/... is used.
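A rough good/bad pair to make that concrete (the function and header names are invented for illustration):

// Bad: a request header alone decides a privileged code path.
if ( isset( $_SERVER['HTTP_X_MY_PLUGIN_MODE'] ) && 'admin' === $_SERVER['HTTP_X_MY_PLUGIN_MODE'] ) {
	my_plugin_render_admin_tools();
}

// Good: the same input is only acted upon after capability + nonce verification.
if (
	current_user_can( 'manage_options' )
	&& isset( $_REQUEST['_wpnonce'], $_SERVER['HTTP_X_MY_PLUGIN_MODE'] )
	&& wp_verify_nonce( sanitize_key( $_REQUEST['_wpnonce'] ), 'my_plugin_mode' )
	&& 'admin' === $_SERVER['HTTP_X_MY_PLUGIN_MODE']
) {
	my_plugin_render_admin_tools();
}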
Now you will obviously say: current_user_can doesn't make sense on the frontend or for some wp-json requests, so the only way to fix this is to phpcs:ignore. But that is the point: you can NEVER use the headers in SERVER (or anything from POST/GET) of an unauthorized user for any logic, since they can contain an arbitrary value from the client/user.
e.g. tons of developers/plugins added nonce checks, thinking that this is safe:
if ( isset( $_GET['foo'] ) && wp_verify_nonce( ... ) ) {
	return true;
}
But obviously it's not.
There is no general way to make this safe for non-logged-in users, as it depends on the circumstances, and there is no automatic way to check for that.
E.g. for the https://developer.wordpress.org/reference/functions/wp_is_json_request/ case, you'd have to additionally validate that the REQUEST_URI is a wp-json one, and then, if you cannot use current_user_can validation (because it's a public endpoint), you must not use any GET/POST/SERVER input to access data that you return to the user (as otherwise I can request arbitrary data by modifying the request).
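As a sketch of such a use-case-specific mitigation (assuming the code only needs to know "is this a REST request?" rather than trusting the Accept header; the helper name is made up):

function my_plugin_is_rest_request() {
	// Decide based on the request path, which the client cannot repurpose to
	// reach unrelated code paths, instead of the client-supplied Accept header.
	$request_uri = isset( $_SERVER['REQUEST_URI'] ) ? wp_unslash( $_SERVER['REQUEST_URI'] ) : '';

	return ( false !== strpos( $request_uri, '/' . rest_get_url_prefix() . '/' ) );
}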
@kkmuffme Correct me if I'm wrong, but by your logic above, any kind of website search would be made impossible. Along the same lines, an anonymous user submitting a contact form would also be impossible....
It's not the contact form/search itself that is unsafe - it's any logic built around the user-submitted data which is then used in a context that would normally require authentication.
In the above Jetpack security issue, the problem is that wp_is_json_request is used to infer, from a user-supplied header, that the request is not on the frontend = the request is on the backend = we need current_user_can validation.
The Jetpack code and wp_is_json_request() are exactly like doing this (which nobody would do, as it's obvious how unsafe it is):
function allow_delete_site() {
	if ( isset( $_COOKIE['is_admin'] ) && $_COOKIE['is_admin'] === 'yes' ) {
		return true;
	}

	return false;
}

if ( allow_delete_site() ) {
	// delete the whole site
}
It's not about the values, but about how/where they're used. That's exactly the reason why there is no "automatic" way for WPCS to check that it is safe for unauthorized users; we have to rely on phpcs:ignore where current_user_can cannot be used, and a second, use-case-specific validation method needs to be used (e.g. REQUEST_URI, ...) that cannot be modified by the user for that specific action (without current_user_can).
I guess this is also why e.g. check_admin_referer used the referer (a second factor) at some point - obviously it's unsafe/has the same issue, since it comes from userland, but that was the initial intention to counter this issue.
Let me give you a very obvious example with $_GET, where you could accidentally end up creating a security issue without being aware of it, because user data is used to trigger specific logic.
function is_my_plugin_settings_page() {
	if ( isset( $_GET['my-settings-page'] ) ) {
		return true;
	}

	return false;
}

function load_js() {
	if ( ! is_my_plugin_settings_page() ) {
		return;
	}

	wp_localize_script( 'foo', 'myPluginData', array( 'apiKey' => '123456', 'oauth_token' => '...' ) );
}
Suddenly anybody is able to get the apiKey/... by just setting the $_GET parameter.
In this case, wpcs will require a nonce for the $_GET, but the same issue can happen with $_SERVER, as we can see: https://developer.wordpress.org/reference/functions/wp_is_json_request/
That's the reason why using wp_is_json_request should also trigger the nonce rule + the new current_user_can rule.
Why using a nonce in the above case is not sufficient, and a current_user_can check is required, is obvious I guess? (If not, I'm happy to expand on that.) That's why a WPCS rule for current_user_can is needed.
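For completeness, a sketch of what the settings-page example above would look like with both checks in place (the capability and nonce action are illustrative):

function is_my_plugin_settings_page() {
	// The $_GET flag may still select the page, but the secrets are only
	// localized for users who are capable AND arrived via a link carrying a
	// valid nonce generated for them.
	return (
		isset( $_GET['my-settings-page'], $_GET['_wpnonce'] )
		&& current_user_can( 'manage_options' )
		&& wp_verify_nonce( sanitize_key( $_GET['_wpnonce'] ), 'my-settings-page' )
	);
}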
Mitigation is possible by extending the nonce rule to the SERVER/COOKIE superglobals (or is COOKIE already included?) and by adding a second rule for current_user_can validation - for unauthorized requests, this rule will always give an error that can only be phpcs:ignore'd, as the mitigation depends on the use case.