lirantal/is-website-vulnerable

Idea: provide the tool with an authorization token or cookie to test deep pages of a website

lirantal opened this issue · 11 comments

Is your feature request related to a problem? Please describe.
Currently, you can only test pages which you have access to as an anonymous user. What if you wanted to test a deeply nested page as a logged in user?

Describe the solution you'd like
Being able to provide a cookie or an authorization token that gets passed through to Lighthouse (need to check whether LH supports this) would allow users to test specific pages that sit behind the firewall.

Hm, I wonder whether this would even be allowed?
I mean, the enterprises I've worked at so far aren't happy to see information leaking out.
Especially to Google (German privacy obsession, I guess).

How is google related and what information will leak out?

I mean, with an auth cookie, you poke a hole in the firewall to let someone else in, don't you?
Since JS is "active", this would allow arbitrary scripts to run.

I mean, yeah, you could say you trust Google to not be evil, but…
It leaves a bad taste for me.

Information like your infrastructure. I imagine even use cases for staging environment here.

Could also be checks for logged in users. But there, you normally have some kind of personalisation. Can you even cover everything?

What if you could run the check "offline", i.e. without connecting to the Internet?
Like, could I just download a list of known vulnerable libs and feed the tool that one?

I'm thinking out loud here :-)

Happy we're having this conversation so definitely thanks for chiming in.

I mean, with an auth cookie, you poke a hole in the firewall to let someone else in, don't you?
Since JS is "active", this would allow arbitrary scripts to run.

Maybe I'm missing something, but I'm not sure how that statement is relevant to this project.
Teams who use the tool in CI/CD may well be inside the corporate firewall, and even if they aren't, how is this tool different from running any other E2E test against a system? It's not :)

I mean, yeah, you could say you trust Google to not be evil, but…
It leaves a bad taste for me.

I think there's some misunderstanding about how this tool works. It doesn't send anything to Google, nor to Snyk, nor anywhere else that I know of. It doesn't fetch vulnerability data from a remote server or anything like that. See here: https://github.com/GoogleChrome/lighthouse#are-results-sent-to-a-remote-server

Are results sent to a remote server?
Nope. Lighthouse runs locally, auditing a page using a local version of the Chrome browser installed on the machine. Report results are never processed or beaconed to a remote server.

Okay. Thanks for explaining. I had a wrong mental model of how this tool works.

No worries at all! Thank you for discussing this :)

I've been investigating this. It seems like, for now, the best option for adding this feature is using custom request headers. See the Lighthouse docs.

In terms of coding, I guess the best option is to add extra flags at the CLI level, like:

For OAuth (Bearer):
npx is-website-vulnerable https://example.com --token currentToken

For cookies:
npx is-website-vulnerable https://example.com --cookie "PHPSESSID=298zf09hf012fh2; csrftoken=u32t4o3tb3gg43; _gat=1;"
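As a rough sketch of how those two flags might translate into HTTP headers internally (the helper name `buildExtraHeaders` is hypothetical, not part of the codebase):

```javascript
// Hypothetical helper: map the proposed --token / --cookie CLI flags
// into a plain headers object that can later be handed to Lighthouse.
function buildExtraHeaders({ token, cookie } = {}) {
  const headers = {};
  if (token) headers.Authorization = `Bearer ${token}`; // OAuth bearer token
  if (cookie) headers.Cookie = cookie;                  // raw cookie string
  return headers;
}

// Example:
//   buildExtraHeaders({ token: 'currentToken' })
//   → { Authorization: 'Bearer currentToken' }
```

Keeping the flag-to-header mapping in one small function would also make it easy to add more auth schemes later.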

What do you think @lirantal ? 🤔

@UlisesGascon exactly what I had in mind. And then you'll just pass either of these values into Lighthouse itself with the config options it supports, right? If so then perfect, let's do this 👌
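To make the hand-off concrete, here is a minimal sketch of how the headers could be fed into Lighthouse's Node API, assuming Lighthouse's `extraHeaders` setting (the `lighthouseFlags` helper and the audit choice are illustrative, not the actual implementation):

```javascript
// Sketch: build the flags object Lighthouse's Node API accepts.
// `extraHeaders` is the (assumed) setting for injecting request headers.
function lighthouseFlags(port, extraHeaders) {
  return {
    port,                 // DevTools port of an already-launched Chrome
    output: 'json',
    extraHeaders,         // e.g. { Authorization: 'Bearer …', Cookie: '…' }
  };
}

// Usage (not run here; needs Chrome, e.g. via chrome-launcher):
//   const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
//   const result = await lighthouse('https://example.com',
//     lighthouseFlags(chrome.port, { Cookie: 'PHPSESSID=...' }));
```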

Pr on the way #67 😎

[hell yeah gif]

Ha I love that gif ;-)

🎉 This issue has been resolved in version 1.13.0 🎉

The release is available on:

Your semantic-release bot 📦🚀