Add a configuration option to avoid consumption of magic links by security products
MicahParks opened this issue · 2 comments
Some email security products, such as "Safe Links" in Microsoft Defender for Office 365, scan emails before they reach user inboxes and then follow all links programmatically. These scanners may or may not execute client-side JavaScript or perform other user-like actions to check that the linked web page meets an unspecified list of requirements. The diversity and ongoing evolution of security products make this a difficult problem for email magic link authentication.
This is an old issue with email magic links. See this relevant issue for more resources:
FusionAuth/fusionauth-issues#629
All current releases of this project are subject to email magic link consumption by security products; the most recent release at the time of writing is v0.1.1. As a consequence, users whose email passes through an affected security product will find their magic links consumed upon opening the email, and following the link results in a 404.
After spending some time researching this issue, I'd like to first try out a CAPTCHA implementation. CAPTCHAs can evolve alongside security products, and they seem to be the most surefire way to prevent magic links from being consumed. However, I would strongly prefer a backend-only solution.
I think Google's reCAPTCHA v3 would be the best candidate for this: it is "invisible", has a generous free tier, has low user friction, and is likely under active development. This choice comes with drawbacks. Following magic links will take longer, the implementation adds complexity, and Google would be processing magic link authentication data. It would also require that the reCAPTCHA branding be visible in the user flow, which means in the email template.
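For reference, here is a minimal sketch of what the backend side of that verification could look like, assuming a Go implementation; the `verifyRecaptcha` function, its `minScore` threshold, and the package name are illustrative placeholders, not code from any existing release. Only the call to Google's documented siteverify endpoint is taken as given.

```go
package magiclink

import (
	"encoding/json"
	"net/http"
	"net/url"
)

// siteVerifyResponse holds the fields returned by Google's reCAPTCHA
// siteverify endpoint that matter for this check.
type siteVerifyResponse struct {
	Success    bool     `json:"success"`
	Score      float64  `json:"score"`
	Action     string   `json:"action"`
	ErrorCodes []string `json:"error-codes"`
}

// verifyRecaptcha sends the client-provided token to Google and reports
// whether the interaction scored high enough to be treated as human.
func verifyRecaptcha(secret, token string, minScore float64) (bool, error) {
	resp, err := http.PostForm("https://www.google.com/recaptcha/api/siteverify", url.Values{
		"secret":   {secret},
		"response": {token},
	})
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	var body siteVerifyResponse
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return false, err
	}

	// reCAPTCHA v3 scores range from 0.0 (likely a bot) to 1.0 (likely a
	// human); the acceptable threshold is a deployment decision.
	return body.Success && body.Score >= minScore, nil
}
```

Keeping the score threshold configurable would let deployments tune it as security products evolve.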
If anyone would like to weigh in before the feature is added, please feel free to join the discussion.
I made a proof-of-concept frontend and backend that prevents robots from following a redirect. The page loads for a brief moment, then redirects. On page load, reCAPTCHA v3 is programmatically invoked in the HTML <head>, then a POST request containing the reCAPTCHA token is made to the backend. The token is verified, the actual link is returned, and the client-side JavaScript performs the redirect. I may release a full example of this after this ticket is closed; it would be useful in other scenarios where robots need to be prevented from following redirects.
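To make the described flow concrete, here is a hedged sketch of the interstitial page and the token-for-link exchange, again assuming Go. The handler names, routes, key placeholders, and the `lookupLink` helper are all hypothetical, and `verifyRecaptcha` refers to the sketch in the earlier comment; this is not the proof-of-concept code itself.

```go
package magiclink

import (
	"encoding/json"
	"html/template"
	"net/http"
)

// Placeholder keys; a real deployment would load these from configuration.
var (
	recaptchaSiteKey = "RECAPTCHA_SITE_KEY"
	recaptchaSecret  = "RECAPTCHA_SECRET"
)

// interstitialPage is a minimal version of the interstitial: the reCAPTCHA
// v3 script is invoked in the <head>, the resulting token is POSTed to the
// backend, and the returned link is followed with client-side JavaScript.
var interstitialPage = template.Must(template.New("interstitial").Parse(`<!DOCTYPE html>
<html>
<head>
  <script src="https://www.google.com/recaptcha/api.js?render={{.SiteKey}}"></script>
  <script>
    grecaptcha.ready(function () {
      grecaptcha.execute("{{.SiteKey}}", {action: "magic_link"}).then(function (token) {
        fetch("/api/link?secret=" + encodeURIComponent("{{.LinkSecret}}"), {
          method: "POST",
          headers: {"Content-Type": "application/json"},
          body: JSON.stringify({token: token}),
        }).then(function (resp) { return resp.json(); })
          .then(function (body) { window.location.replace(body.link); });
      });
    });
  </script>
</head>
<body>Redirecting...</body>
</html>`))

// handleInterstitial serves the page above instead of redirecting
// immediately, so the human check happens before the link is consumed.
func handleInterstitial(w http.ResponseWriter, r *http.Request) {
	_ = interstitialPage.Execute(w, map[string]string{
		"SiteKey":    recaptchaSiteKey,
		"LinkSecret": r.URL.Query().Get("secret"),
	})
}

// handleLink verifies the reCAPTCHA token and only then consumes the magic
// link and returns its target for the client-side redirect.
func handleLink(w http.ResponseWriter, r *http.Request) {
	var req struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	human, err := verifyRecaptcha(recaptchaSecret, req.Token, 0.5) // Sketch from the earlier comment.
	if err != nil || !human {
		http.Error(w, "verification failed", http.StatusForbidden)
		return
	}
	target, found := lookupLink(r.URL.Query().Get("secret"))
	if !found {
		http.NotFound(w, r)
		return
	}
	_ = json.NewEncoder(w).Encode(map[string]string{"link": target})
}

// lookupLink is a stand-in for the project's own storage lookup, which
// should consume the single-use link when it is returned.
func lookupLink(secret string) (target string, found bool) {
	return "", false
}
```

With this shape, a robot that does not execute JavaScript never reaches the token exchange, so the magic link stays unconsumed until a real browser loads the page.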
I still have a few items to test before publishing a pull request:
- Check to see if this works in Firefox.
- Check to see if this works in Safari.
- Use a free trial for "Safe Links" in Microsoft Defender for Office 365, if possible.