Storing OAuth token in secrets manager?
michaelstepner opened this issue · 19 comments
I'm opening this issue to inquire whether you would be open to merging a PR that implements an option to store the OAuth token in a secrets manager (e.g. AWS Secrets Manager, GitHub Secrets or Terraform Cloud Workspace Variables) instead of the local config. This would facilitate managing OAuth authorization and token creation from a local computer, while making the token seamlessly available to a remote server running email-oauth2-proxy. It would also facilitate persisting the token if the remote server needs to be destroyed and recreated.
As mentioned in a prior issue, I've been working on an infrastructure-as-code project which uses Terraform to automatically configure an AWS server running email-oauth2-proxy. I've just made a first version of my work public at: michaelstepner/email-oauth2-proxy-aws.
As you might anticipate, the biggest limitation of my automated server configuration is authenticating with the email provider to obtain OAuth tokens. Thanks to your prior work, after creating the lightweight server I'm able to authenticate manually via SSH using --no-gui and --local-server-auth. But I would love to configure a more elegant solution someday.
I wouldn't ask you to implement this yourself, of course, unless you were interested in doing so.
Implementation
I considered a command-line argument, but I'm currently envisioning a per-account configuration option like token_store below:
[your.office365.address@example.com]
permission_url = https://login.microsoftonline.com/common/oauth2/v2.0/authorize
token_url = https://login.microsoftonline.com/common/oauth2/v2.0/token
oauth2_scope = https://outlook.office365.com/IMAP.AccessAsUser.All https://outlook.office365.com/SMTP.Send offline_access
redirect_uri = http://localhost
client_id = *** your client id here ***
client_secret = *** your client secret here ***
token_store = aws_secrets_manager
If that configuration option were left unspecified, it would default to local and the behaviour would follow the status quo, storing the token in the config file. This would preserve backward compatibility, and maintain simplicity for the majority of users who are using this software as a local client in their system tray.
Additional configuration required for the secrets manager could either be via environment variables (e.g. credentials for access to the secrets manager) or the configuration file.
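To make this a little more concrete, here's a rough, untested sketch of how the proxy might pick a token backend per account. Everything except the token_store option name is a placeholder I've invented for illustration:

# Illustrative sketch only: choose a per-account token backend based on an optional
# token_store setting; names other than token_store are invented for this example.
import configparser

def get_token_backend(config: configparser.ConfigParser, account: str):
    store = config.get(account, 'token_store', fallback='local')
    if store == 'aws_secrets_manager':
        import boto3  # only imported when this backend is actually selected
        # boto3 reads credentials from the environment (AWS_ACCESS_KEY_ID, etc.)
        # or from an instance role, so no extra secrets need to live in the config file
        return boto3.client('secretsmanager')
    return None  # None means status quo: tokens stay in the local config file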
Thanks for this suggestion!
I'll readily admit that I have essentially zero prior experience using a Secrets Manager, so would rely on you or others to implement anything like this. My key concern before merging would be about overcomplicating the proxy – I'm quite keen to keep it as simple and focused as reasonably possible, so if it's a straightforward change to add this capability then I'd be open to it, but if it would end up drawing in lots more dependencies and adding lots of extra complexity then I'd be less keen.
How would you actually store the four required OAuth values? (access_token, access_token_expiry, refresh_token and token_salt). The set of valid email address characters doesn't match the set of permitted key name characters in the secrets managers I looked at, so you'd need some sort of mapping here (or an additional value in the configuration file).
I'm also interested in how you'd surface the token request to the user. I agree that being able to manage this remotely without needing to ssh into a server and authenticate is a benefit, but what would the interaction flow look like from the client's point of view? Would they have to manually generate and process the authentication request?
As it happens, I was recently thinking about how to achieve this with a plugin (not yet committed, but see the plugins branch for the basic capabilities) – essentially it could trigger an email to the account when there is an authentication request, and then intercept an email reply from the user with the updated authentication response in the body. This is a little similar to how Gmail allows security alert emails to be retrieved via IMAP even when everything else is blocked because of that security alert. Of course, this is a bit of a hack, but it makes the user side very easy, which is I think an important factor.
I'd be interested to hear your thoughts about all of this. (It's great to see your project building on this one, too :-)
Thanks for your thoughtful and thorough reply! I'll respond in-line:
My key concern before merging would be about overcomplicating the proxy – I'm quite keen to keep it as simple and focused as reasonably possible, so if it's a straightforward change to add this capability then I'd be open to it, but if it would end up drawing in lots more dependencies and adding lots of extra complexity then I'd be less keen.
I think the extra dependencies and complication would be minimal.
- Dependencies: For example, using AWS Secrets Manager would bring in a dependency on boto3.
- Code complications: I think changes would be limited to OAuth2Helper.get_oauth2_credentials. From my quick skim, this function seems solely responsible for fetching and storing the OAuth 2.0 token, so adding the option for a secrets manager would mean fetching from and storing to a different source.
How would you actually store the four required OAuth values? (access_token, access_token_expiry, refresh_token and token_salt). The set of valid email address characters doesn't match the set of permitted key name characters in the secrets managers I looked at, so you'd need some sort of mapping here (or an additional value in the configuration file).
One possible design would be to store all tokens in a valid JSON string, like:
{
  "email1@example.com": {
    "access_token": "QWERYUIOP",
    "access_token_expiry": 1234567890,
    "refresh_token": "ASDFGHJKL",
    "token_salt": "ZXCVBNM"
  },
  "email2@example.com": {
    "access_token": "QWERYUIOP",
    "access_token_expiry": 1234567890,
    "refresh_token": "ASDFGHJKL",
    "token_salt": "ZXCVBNM"
  }
}
The key for the secret could either be:
- hardcoded, or
- optionally configurable on an application-wide basis:
  - via an env variable, or
  - via a config option within [Server setup]
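For illustration, reading and writing that JSON document via boto3 could look something like this rough sketch (untested; the secret name is just a placeholder):

# Rough sketch only; 'email-oauth2-proxy/tokens' is a placeholder secret name.
import json
import boto3

SECRET_ID = 'email-oauth2-proxy/tokens'
client = boto3.client('secretsmanager')

def load_tokens():
    # returns the whole {email: {access_token, ...}, ...} mapping shown above
    return json.loads(client.get_secret_value(SecretId=SECRET_ID)['SecretString'])

def save_tokens(tokens):
    client.put_secret_value(SecretId=SECRET_ID, SecretString=json.dumps(tokens))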
I'm also interested in how you'd surface the token request to the user. I agree that being able to manage this remotely without needing to ssh into a server and authenticate is a benefit, but what would the interaction flow look like from the client's point of view? Would they have to manually generate and process the authentication request?
I was thinking that a user could install email-oauth2-proxy locally and remotely, configuring both to use the same secrets store. Then if you successfully authenticate locally, the tokens are stored in the secret store and immediately available to the remote server.
I think this idea was inspired by rclone authorize, which I've used in my work.
As it happens, I was recently thinking about how to achieve this with a plugin (not yet committed, but see the plugins branch for the basic capabilities) – essentially it could trigger an email to the account when there is an authentication request, and then intercept an email reply from the user with the updated authentication response in the body. This is a little similar to how Gmail allows security alert emails to be retrieved via IMAP even when everything else is blocked because of that security alert. Of course, this is a bit of a hack, but it makes the user side very easy, which is I think an important factor.
This is really clever – I love this idea. I think it would be complementary to a secrets manager, since the secrets manager would still facilitate persisting the tokens across multiple servers (e.g. a local and a remote server, or re-creating the remote server without re-authenticating). On the other hand, it might entirely obviate the need for a secrets manager, since re-authenticating a remote server would become much less of a hassle.
One question I have about this idea is how you'd trigger an email to the account before the user has authorized and the app has received OAuth2 tokens to access the account. I foresee a potential chicken-and-egg problem.
Thanks for clarifying about storing the token details – this is much more straightforward than I originally assumed. An obvious key here would be the proxy's package name, but it could easily be configurable as you suggest.
I think the local and remote installation idea is really neat! But I don't quite understand how this would work seamlessly in practice – you would remove the hassle of authenticating the remote server, but add a new step of reconfiguring your email client/app's server details for each authentication request (because you'd need to connect to the local installation to authenticate, then switch to the remote one that you actually want to use). Of course, this is probably slightly easier than remotely authenticating, but it's still an extra manual step. I suppose the real benefit here would be if you intend to use the remote server with many clients, and are happy to set up a single local one solely to monitor authentication state.
Re: authenticating via an email: the way this would work would be an ultra-simple IMAP/SMTP plugin that supports only the operations required to present a single email with the authentication request, and intercept a single response with the token. Essentially, from the view of your email client, the email account is taken over temporarily to make it seem like this is its only capability/purpose. After the process completes, the plugin is removed and real email communication resumes. The SMTP side is straightforward; IMAP has some additional challenges around not disrupting the local state during reauthorisation (though not insurmountable). But I think the key challenge here would actually be security: the plugin would need to accept all incoming connections as it has no way to determine which ones are valid (because it has not yet been authorised). On a locally hosted proxy this is low risk, but in the sort of setup you are hoping for, allowing external – and potentially malicious – communication to modify the proxy's configuration is a clear risk compared to other approaches. One way to mitigate this would be to have a shared key set in the configuration file that is used as some sort of verification when using this mode. It's a little hacky, but it's probably relatively safe to assume that anyone with access to the configuration file is trusted enough to connect to the proxy.
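To sketch the shared-key check I have in mind (purely illustrative – the section and option names here don't exist in the proxy):

# Purely illustrative: compare a key supplied by the connecting client against one
# stored in the configuration file, using a constant-time comparison.
import hmac

def is_trusted_connection(config, provided_key: str) -> bool:
    # section/option names are hypothetical
    expected = config.get('Server setup', 'auth_shared_key', fallback=None)
    return bool(expected) and hmac.compare_digest(expected.encode(), provided_key.encode())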
Between something like this and the POP branch, this sounds like exactly what I was looking for. GMail still hasn't implemented OAuth2 for GMail POP3 and I'm about to lose access to my personal O365 tenant email because Microsoft is deprecating basic auth.
I was hoping to deploy email-oauth2-proxy as an Azure Function/AWS Lambda, but that would require a place to store the OAuth values, and a secrets manager is what I had in mind for that. I hadn't thought about the implications, though, and reading this thread I see I would have seriously underestimated them!
All that being said, I realized I may have jumped to a conclusion - how is email-oauth2-proxy designed to be/typically deployed? Maybe it's simpler to use a laptop or a Raspberry Pi or something and have it run locally.
The proxy was initially designed for my own personal use on a desktop/laptop, because I was going to lose access to O365 through a client that I would find difficult to do without. As a result it is GUI-first, with a menu bar icon and an interactive authentication screen. For this reason (and the other challenges discussed earlier in this issue thread) it could be quite painful to use a secrets manager effectively. So, yes, if you're able to use the proxy locally, that may well be simpler.
Do you have a typo in your comment – "GMail still hasn't implemented OAuth2 for GMail POP3"? Gmail definitely does support OAuth for POP3, so perhaps you meant a specific client here? Regardless, the POP branch will almost certainly be merged once I've had the chance to complete the implementation, so you should have full support via the proxy.
@billbliss You may wish to check out my work at michaelstepner/email-oauth2-proxy-aws. I built upon @simonrob's wonderful work on making a local client and deployed it to an EC2 instance for my use case: I wanted to provide access to O365 to my favourite cloud email provider which had not implemented OAuth2.
You were hoping to use a serverless framework like Azure Functions or AWS Lambda. That would indeed be cheaper, but it struck me as very difficult to implement for SMTP since it is not an HTTP-based API. I therefore used the cheapest AWS instance available to set up a server, and the final cost for me is around US$4 per month (no guarantees as to your costs, of course).
I've fully documented the installation steps in the README of michaelstepner/email-oauth2-proxy-aws, and it's 100% functional for me. Adding a feature to store credentials in a secrets manager would streamline the installation and quarterly TLS certificate refresh. But if you're willing to tolerate a bit of manual config via SSH, I think my installation guide would get you up and running — it's certainly no more manual work than setting up a local RPi server.
Sweet! I will take a look. A quarterly SSH isn't too bad. I spaced on the "those protocols aren't HTTP" point, but that's obviously true too. Thx!
Another option here could just be to synchronise the config file between instances of the proxy. You'd get the benefits of shared credentials but the proxy changes required would be far more minor.
@simonrob I think we've productively scoped out the various options here! I'm going to tentatively close this issue for the moment: feel free to reopen if you'd prefer to leave it open, of course. My sense is that we have a clear scope of work, but no imminent plan for implementation.
I'm not sure when I would get around to implementing this – probably at a confluence of a light work week and me needing to do my quarterly TLS certificate refresh. And I don't think this is on your shortlist of next features that you would implement independently?
FYI @simonrob, I have a work-in-progress fork that implements an MVP of this feature, storing tokens in AWS Secrets Manager.
Before opening a PR to propose merging this feature upstream, I plan to:
- Test it for a while on my server to ensure it's working as expected
- Add code to seamlessly create the secret if it does not yet exist.
  - Right now it assumes the secret exists and simply reads/writes to it, which is fine in my case because I use Terraform to handle the creation of an empty secret.
There's no need for you to review my work-in-progress, but I wanted to give you and anyone else looking at this issue a heads up! (especially since I see it's referenced every month or two from other issues)
My key concern before merging would be about overcomplicating the proxy – I'm quite keen to keep it as simple and focused as reasonably possible, so if it's a straightforward change to add this capability then I'd be open to it, but if it would end up drawing in lots more dependencies and adding lots of extra complexity then I'd be less keen.
I'm hoping this implementation is simple enough that it would meet your objectives!
- It adds an optional argument, --aws-secrets, which enables the proxy to store OAuth 2.0 tokens remotely in AWS Secrets Manager rather than in the local configuration file. This will only be applied to accounts that have an aws_secret parameter configured in the local configuration file, containing the ARN or name of the secret. To use this feature, you must install the requirements in requirements-aws-secrets.txt and set up authentication credentials for your AWS account.
- It adds an optional account-level configuration parameter, aws_secret:
[your.office365.address@example.com]
permission_url = https://login.microsoftonline.com/common/oauth2/v2.0/authorize
token_url = https://login.microsoftonline.com/common/oauth2/v2.0/token
oauth2_scope = https://outlook.office365.com/IMAP.AccessAsUser.All https://outlook.office365.com/SMTP.Send offline_access
redirect_uri = http://localhost
client_id = *** your client id here ***
client_secret = *** your client secret here ***
aws_secret = *** ARN or name of secret in AWS Secrets Manager ***
- It adds new code to AppConfig._load() and AppConfig.save() to handle loading/saving OAuth tokens from/to the remote AWS Secrets Manager – around 40 lines of code in total (a simplified sketch follows this list).
- The only new dependency is boto3, and it is only loaded if the --aws-secrets argument is specified.
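Here's a simplified, illustrative sketch of the shape of the load-side hook (the save side mirrors it). Apart from the aws_secret option name and the four token fields, the names below are invented for this example rather than taken from my fork:

# Simplified, illustrative version of the load-side hook (the save side mirrors it).
import json
import configparser

def load_remote_tokens(config: configparser.ConfigParser, aws_secrets_enabled: bool):
    if not aws_secrets_enabled:
        return  # without --aws-secrets, existing behaviour is untouched
    import boto3  # imported lazily so boto3 is only required when the feature is used
    client = boto3.client('secretsmanager')
    for account in config.sections():
        secret_id = config.get(account, 'aws_secret', fallback=None)
        if not secret_id:
            continue  # accounts without aws_secret keep using the local config file
        stored = json.loads(client.get_secret_value(SecretId=secret_id)['SecretString'])
        for key in ('access_token', 'access_token_expiry', 'refresh_token', 'token_salt'):
            if key in stored:
                config.set(account, key, str(stored[key]))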
Excellent! Thanks for the work on this - great to see you found an approach with minimal impact to the proxy's core features. Let me know how you get on; happy to review fully after that with a view to merging.
Would it not be better to first have the tool store its state (the access tokens) somewhere other than the config file?
Right now, I need to make the config file writable to the tool while it runs, which is sub-optimal. Ideally, the tool should run with systemd's ProtectSystem=full or strict, which make the /etc dir read-only but allow /var to remain writable.
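For reference, the kind of systemd hardening I have in mind looks something like this (the writable path is just an example, not a location the proxy currently uses):

[Service]
# make the whole filesystem read-only for the service...
ProtectSystem=strict
# ...except for the directory holding the proxy's mutable state
ReadWritePaths=/var/lib/emailproxy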
Separating configuration and caching has been requested before, but I don't personally have the need for it or the spare time to implement it I'm afraid.
I'll happily review a PR (or maybe accept sponsorship to add this), but it will need careful integration with the various existing features. Probably the simplest approach is to modify the load/save methods of the AppConfig class.
@thiagomacieira if you decide to pursue this further, I think the code in PR #114 (which is close to being merged) provides a good template for what changes are required to store the tokens elsewhere.
In fact, PR #114 does meet your objective of storing the access tokens somewhere other than the config file...but with the dependency on AWS Secrets Manager, whereas it sounds like you'd like to store them in a separate local file.
The other open question is where to store the lastactivity state. In PR #114, that remains in the local config file, which remains writable (and written to) by the proxy. But it sounds like you'd want that separated out too.
@thiagomacieira it turns out that generalising #114 made supporting a local cache file much more straightforward, so this is now available in that PR. The exact configuration may change between now and merging, but it would be useful if you could test that feature.
Thanks, let me take a look!
The change looks good from reading. I'll test it soon too.
Confirmed, working just fine. The config file is now owned by root and is actually world-readable; the auth cache file is read-write, but only writable by the user running emailproxy.py.
I'll make a PR with the systemd service file I'm using.