distributed-auth-code-flow

An experiment to run dotnet in a serverless environment while keeping authorization records in a distributed cache

This demo uses the State of Utah OpenID Connect provider (UtahId) with the Authorization Code flow with PKCE for authentication.

Once authenticated, auth is swapped to a dotnet managed cookie, and the authentication tickets are stored in a distributed redis cache along with the data protection keys used to decrypt the auth cookie. Any dotnet process with access to the distributed cache can therefore authenticate clients, which is great for serverless or load-balanced scenarios since no auth information is stored in process memory.
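
A minimal sketch of that wiring in `Program.cs` (assumptions: the `Microsoft.AspNetCore.DataProtection.StackExchangeRedis` and `Microsoft.Extensions.Caching.StackExchangeRedis` packages are referenced, and `RedisTicketStore` is a hypothetical `ITicketStore` implementation over the distributed cache, not necessarily the demo's actual class name):

```csharp
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.DataProtection;
using StackExchange.Redis;

var builder = WebApplication.CreateBuilder(args);

var redis = ConnectionMultiplexer.Connect(
    builder.Configuration["Redis:Configuration"]);

// Keep the data protection keys in redis so every instance
// can decrypt the auth cookie, not just the one that issued it.
builder.Services.AddDataProtection()
    .PersistKeysToStackExchangeRedis(redis, "DataProtection-Keys");

// Distributed cache that holds the authentication tickets.
builder.Services.AddStackExchangeRedisCache(options =>
    options.Configuration = builder.Configuration["Redis:Configuration"]);

// The cookie only carries a key; the ticket itself lives in redis.
builder.Services.AddSingleton<ITicketStore, RedisTicketStore>(); // hypothetical
builder.Services
    .AddOptions<CookieAuthenticationOptions>(CookieAuthenticationDefaults.AuthenticationScheme)
    .Configure<ITicketStore>((options, store) => options.SessionStore = store);
```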

This demo is built to run locally (with a redis installation or container), completely in docker, or in GCP with Cloud Run and a redis Memorystore.

Getting Started

Authentication Setup

  1. Request an apadmin.utah.gov app

  2. Create user fields in the schema tab

    • for this app there is a UserRole OPTION with administrator, etc. as its OPTIONS
  3. Create a client for that app

  4. Add openid and app:{yourApp} as scopes

  5. Toggle Implied Consent on

  6. Select Authorization Code Grant type

  7. Add redirection URLs for your localhost or Cloud Run app

    • they will be in the form of https://localhost:5001/signin-oidc
  8. Click open on the app:{yourApp} and grant read access to the user fields

OpenId Connect Setup

  1. Set the environment variables for the apadmin client id and secret

    • in development you can use dotnet user secrets

      dotnet user-secrets set "Authentication:UtahId:ClientId" "your id"
      dotnet user-secrets set "Authentication:UtahId:ClientSecret" "your secret"
    • docker-compose.override.yaml

       api:
         environment:
           - Authentication__UtahId__ClientId=
           - Authentication__UtahId__ClientSecret=
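
Those values are then consumed when registering the OpenID Connect handler. A hedged sketch of what that registration might look like (the `Authentication:UtahId:Authority` config key is an assumption, not taken from the demo):

```csharp
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.IdentityModel.Protocols.OpenIdConnect;

builder.Services.AddAuthentication(options =>
    {
        options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    })
    .AddCookie()
    .AddOpenIdConnect(options =>
    {
        // UtahId endpoint (assumption: supplied via configuration).
        options.Authority = builder.Configuration["Authentication:UtahId:Authority"];
        options.ClientId = builder.Configuration["Authentication:UtahId:ClientId"];
        options.ClientSecret = builder.Configuration["Authentication:UtahId:ClientSecret"];

        // Authorization Code flow with PKCE.
        options.ResponseType = OpenIdConnectResponseType.Code;
        options.UsePkce = true;

        options.Scope.Add("openid");
        // CallbackPath defaults to /signin-oidc, which matches the
        // redirection url registered in apadmin.
    });
```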

Memorystore (Redis) Setup

  1. Set the environment variable for the redis Memorystore connection

    • in development you can use dotnet user secrets

      dotnet user-secrets set "Redis:Configuration" "localhost:6379"
    • docker-compose.override.yaml

       api:
         environment:
           - Redis__Configuration=redis
  2. Open the ports for redis

    • docker-compose.override.yaml

      redis:
        ports:
          - "6379:6379"
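
The tickets themselves can be kept in that cache by implementing `ITicketStore` over `IDistributedCache`. A minimal sketch (the class name and key prefix are illustrative, not the demo's actual code):

```csharp
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.Extensions.Caching.Distributed;

public class RedisTicketStore : ITicketStore
{
    private const string KeyPrefix = "auth-ticket-";
    private readonly IDistributedCache cache;

    public RedisTicketStore(IDistributedCache cache) => this.cache = cache;

    public async Task<string> StoreAsync(AuthenticationTicket ticket)
    {
        var key = KeyPrefix + Guid.NewGuid();
        await RenewAsync(key, ticket);
        return key; // this key is all the cookie has to carry
    }

    public Task RenewAsync(string key, AuthenticationTicket ticket) =>
        cache.SetAsync(key, TicketSerializer.Default.Serialize(ticket),
            new DistributedCacheEntryOptions
            {
                // Expire the cache entry with the ticket itself.
                AbsoluteExpiration = ticket.Properties.ExpiresUtc
            });

    public async Task<AuthenticationTicket?> RetrieveAsync(string key)
    {
        var bytes = await cache.GetAsync(key);
        return bytes is null ? null : TicketSerializer.Default.Deserialize(bytes);
    }

    public Task RemoveAsync(string key) => cache.RemoveAsync(key);
}
```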

IP Geolocation Setup

  1. Create an account with maxmind.com

  2. Generate a license

  3. Add the dotnet user secrets or set them as environment variables

    • dotnet user secrets

        dotnet user-secrets set "MaxMind:AccountId" ####
        dotnet user-secrets set "MaxMind:LicenseKey" "your license"
        dotnet user-secrets set "MaxMind:Timeout" 3600
        dotnet user-secrets set "MaxMind:Host" "geolite.info"
    • docker-compose.override.yaml

       api:
         environment:
           - MaxMind__AccountId=
           - MaxMind__LicenseKey=
           - MaxMind__Timeout=3600
           - MaxMind__Host=geolite.info
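
With those settings in place, a geolocation lookup might look like this (assuming the `MaxMind.GeoIP2` client package; the named constructor parameters come from that library and may differ from how the demo wires it up):

```csharp
using MaxMind.GeoIP2;

// Pointing Host at geolite.info selects the free GeoLite2 web
// service rather than the paid GeoIP2 endpoint.
var client = new WebServiceClient(
    accountId: int.Parse(config["MaxMind:AccountId"]),
    licenseKey: config["MaxMind:LicenseKey"],
    host: config["MaxMind:Host"]);

var response = await client.CityAsync(HttpContext.Connection.RemoteIpAddress);
Console.WriteLine($"{response.City?.Name}, {response.Country?.IsoCode}");
```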

Docker Setup

For docker to work with this flow, the dotnet developer certificate needs to be accessible to Kestrel.

  1. Create a docker volume that points to your pfx store for the dotnet sdk

    • docker-compose.override.yaml

      api:
        volumes:
          - ${HOME}/.aspnet/https:/https
  2. Generate a dev cert to use

    dotnet dev-certs https -ep ${HOME}/.aspnet/https/auth-ticket.pfx -p some-password
  3. Add the environment variables to use the certificate

    • docker-compose.override.yaml

      api:
        environment:
          - Kestrel__Certificates__Default__Password=some-password
          - Kestrel__Certificates__Default__Path=/https/auth-ticket.pfx
  4. Tell docker what ports to allow traffic on

    • docker-compose.override.yaml

      api:
        ports:
          - 5001:5001
        environment:
          - ASPNETCORE_URLS=https://+:5001

Building

  • Using VS Code

    • Run the Build task
  • Using the scripts

    ./scripts/build.sh
  • Using docker compose

    docker-compose build

Running

  • Using VS Code

    • F5
  • Using docker compose

    docker-compose up

Publishing

The publish script pushes the docker image to GCR.

  • Using the scripts

    ./scripts/publish.sh

Infrastructure

  1. Initialize terraform

    cd _infrastructure
    terraform init
  2. Stand up infrastructure

    terraform apply

Cloud Run

  1. Choose the image from GCR

  2. Capacity

    • Memory 128 MiB
    • CPU 1
    • Request Timeout 10 seconds
    • Maximum requests per container 250
  3. Autoscaling

    • Minimum 0
    • Maximum 4
  4. Connections

    • VPC Connector
      • Choose the serverless VPC connector
  5. Use the same environment variables as you would for docker, but use the real Memorystore ip and port.
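
The console steps above can also be expressed as a single `gcloud` command; a sketch (the service name, project id, and connector name are placeholders for this demo):

```shell
gcloud run deploy distributed-auth \
  --image "gcr.io/${PROJECT_ID}/distributed-auth" \
  --memory 128Mi --cpu 1 \
  --timeout 10 --concurrency 250 \
  --min-instances 0 --max-instances 4 \
  --vpc-connector "${CONNECTOR_NAME}" \
  --set-env-vars "Redis__Configuration=${MEMORYSTORE_IP}:6379"
```

The remaining secrets (UtahId client id/secret, MaxMind credentials) can be appended to `--set-env-vars` the same way.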