This project automates the removal of users from our services as part of the offboarding process.
## Useful links

## TODOs
- Update personal AWS IAM acct to use new custom role defined here. Currently it has admin.
- Push to personal GitHub repo
- Bootstrap CDK into sandbox acct (profile: TEMP-personalAwsAcct)
- Obviously this should just be infra-as-code and not include anything AC3 specific.
- Set up prettier
- Build `QueueOffboardingStack` (simple)
- Build `ProcessOffboardingStack` (more complex)
- CI/CD tests and code quality
- CI/CD deployment
- Implement Lambda PowerTools
- Write helper functions for DDB queries
- Write quick dev helper script to populate/reset local DDB with sample data
- Write tests for all functions
- Report on test coverage
- ~~Write tests for all CDK stacks?~~
- Set up sample events for local dev
- Set up `npm run` scripts for local dev (lambda, DDB, and step functions)
- Make doco great (keep in sync with Confluence doc)
- Add repo badges
- Tag all constructs. What tags will we use? (e.g. `team:cas`, `repo:github.com/{}...}`, `env:prod`, `service:offboarding-automation`, etc.)
- Make service connectors reusable? (e.g. Atlassian, Zoom, etc. step functions)
## Useful commands

- `npm run build` – compile TypeScript to JS
- `npm run watch` – watch for changes and compile
- `npm run test` – perform the Jest unit tests
- `cdk deploy` – deploy this stack to your default AWS account/region
- `cdk diff` – compare deployed stack with current state
- `cdk synth` – emits the synthesized CloudFormation template
At a high level this project consists of two decoupled processes that are implemented as two CDK stacks:

- Queuing a user for offboarding – `QueueOffboardingStack`
- Processing the offboarding task at a later date – `ProcessOffboardingStack`
## CDK stack name: `QueueOffboardingStack`
```mermaid
flowchart LR;
    service_now("<b><u>SNOW webhook</u></b>
    - email
    - offboarding_date
    - snow_id
    ");
    lambda[["
    <b><u>AWS Lambda: WebhookHandler</u></b>
    With function URL
    "]]
    ddb[("
    <b><u>AWS DDB: QueueTable</u></b>
    - email: string (PK)
    - status: enum (GSI1PK)
    - offboarding_date: date (GSI1SK)
    - snow_id: id
    - created_at: date
    - updated_at: date
    ")]
    service_now --> lambda
    lambda --> ddb
```
The table needs to support two primary access patterns:

- Get a single item by email address for updating the `offboarding_date`
- Get all items with a status of `QUEUED` for a given date / date range (or any other valid status)
To support this we'll use a global secondary index (GSI).
- PK (partition key): email address to be offboarded
  - Format: string – `anne@email.com`
  - It would be an error to ever have more than one record with the same email. Having a PK without an SK enforces this[^1]
- GSI1PK (global secondary index #1 partition key): status
  - Possible values: `QUEUED` | `IN_PROGRESS` | `ERROR` | `SUCCESS`
- GSI1SK (global secondary index #1 sort key): offboarding_date
  - Format: `YYYY-MM-DD`
  - GSI-PK / GSI-SK combinations are allowed to be non-unique. This will allow us to have different users with the same status (`QUEUED`) and offboarding dates, which is desirable.
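Putting the key design together, a `QueueTable` record might look like the following sketch. The attribute names come from the table definition above; the values (and the `QueueItem` type name) are illustrative.

```typescript
// Illustrative QueueTable item shape, based on the key design above.
// Attribute names match the table definition; the values are made up.
interface QueueItem {
  email: string;            // PK – unique per user
  status: "QUEUED" | "IN_PROGRESS" | "ERROR" | "SUCCESS"; // GSI1PK
  offboarding_date: string; // GSI1SK, formatted YYYY-MM-DD
  snow_id: string;
  created_at: string;
  updated_at: string;
}

const example: QueueItem = {
  email: "anne@email.com",
  status: "QUEUED",
  offboarding_date: "2023-03-01",
  snow_id: "SNOW-12345", // hypothetical ticket id format
  created_at: "2023-02-01T09:00:00Z",
  updated_at: "2023-02-01T09:00:00Z",
};
```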
This primary key design allows the following queries to be made efficiently:

- Individual user by email (PK = email address)
- All `QUEUED` tasks (GSI1PK = `QUEUED`)
- All `QUEUED` tasks where `offboarding_date` is today (GSI1PK = `QUEUED` & GSI1SK = `2023-03-01`)
- All `QUEUED` tasks where `offboarding_date` is this month (GSI1PK = `QUEUED` & GSI1SK begins with `2023-03`)
- All `QUEUED` tasks where `offboarding_date` is in the past (GSI1PK = `QUEUED` & GSI1SK < `2023-03-01`)
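To make one of these patterns concrete, here is a sketch of the DynamoDB query parameters for "all tasks with a given status due on or before a date". It only builds the parameter object (e.g. for a `QueryCommand` from `@aws-sdk/lib-dynamodb`); the table and index names (`QueueTable`, `GSI1`) are assumptions for the sketch, not confirmed names from the codebase.

```typescript
// Builds DynamoDB Query parameters for the GSI1 access pattern:
// all items with a given status whose offboarding_date is on or
// before `date`. Table and index names are placeholders.
type Status = "QUEUED" | "IN_PROGRESS" | "ERROR" | "SUCCESS";

function buildStatusByDateQuery(status: Status, date: string) {
  return {
    TableName: "QueueTable",
    IndexName: "GSI1",
    // "status" is a DynamoDB reserved word, so it must be aliased.
    KeyConditionExpression: "#status = :status AND offboarding_date <= :date",
    ExpressionAttributeNames: { "#status": "status" },
    ExpressionAttributeValues: {
      ":status": status,
      // YYYY-MM-DD sorts lexicographically in chronological order,
      // which is what makes the range conditions above work.
      ":date": date,
    },
  };
}
```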
## CDK stack name: `ProcessOffboardingStack`
```mermaid
flowchart LR;
    eventBridge("
    <u>EventBridge</u>
    Daily scheduled event
    ")
    stepFunc(<u>AWS Step Function</u>)
    eventBridge --> stepFunc
    stepFunc --> Zoom[[Zoom]]
    stepFunc --> Slack[[Slack]]
    stepFunc --> GitHub[[GitHub]]
    stepFunc --> Atlassian[[Atlassian]]
    stepFunc --> LucidChart[[LucidChart]]
    subgraph services ["Parallel Step Functions"]
        direction TB
        Zoom
        Slack
        GitHub
        Atlassian
        LucidChart
    end
    result{Success?}
    Zoom --> result
    Slack --> result
    GitHub --> result
    Atlassian --> result
    LucidChart --> result
    result --Yes--> success["
    <u>Report:</u>
    - update original SNOW task
    - Slack notification
    "]
    result --No--> failure[Notify CAST team<br>with failure details]
```
Note – each parallel step function will need to:

- Query for the user by supplied email address
- Deactivate the user
- Revoke any access tokens
Desirable feature: each service connector (Atlassian, Zoom, etc.) should be designed to be reusable by other projects.
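One way to achieve that reusability is a common interface that every service connector implements, plus a shared driver that runs the three steps above. The names below (`ServiceConnector`, `OffboardResult`, `offboard`) are illustrative, not part of the existing codebase.

```typescript
// Illustrative common interface for service connectors (Zoom, Slack,
// GitHub, Atlassian, LucidChart). Each connector implements the same
// three offboarding steps for its own service API.
interface OffboardResult {
  service: string;
  ok: boolean;
  detail?: string;
}

interface ServiceConnector {
  readonly service: string;
  findUser(email: string): Promise<string | undefined>; // service-side user id
  deactivateUser(userId: string): Promise<void>;
  revokeTokens(userId: string): Promise<void>;
}

// Shared driver: the same flow works for any connector, so other
// projects can reuse a connector without reimplementing the steps.
async function offboard(c: ServiceConnector, email: string): Promise<OffboardResult> {
  try {
    const userId = await c.findUser(email);
    if (!userId) return { service: c.service, ok: false, detail: "user not found" };
    await c.deactivateUser(userId);
    await c.revokeTokens(userId);
    return { service: c.service, ok: true };
  } catch (err) {
    return { service: c.service, ok: false, detail: String(err) };
  }
}
```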
This project uses dotenv to manage environment variables.

> **Note**
> You will need to create a `.env` file in the root of this project, or set the required environment variables another way. This should not be checked into version control.

See the `.env.template` file for required values and documentation.
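Conceptually, dotenv reads `KEY=value` lines from `.env` and merges them into `process.env`. A minimal sketch of that parsing step (the real library also handles quoting, multiline values, and other edge cases; `EXAMPLE_TOKEN` below is a made-up variable name, not one from `.env.template`):

```typescript
// Minimal sketch of what dotenv does: parse KEY=value lines into a
// map (which dotenv would then merge into process.env).
function parseEnv(src: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const line of src.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue; // skip blanks/comments
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue; // ignore malformed lines
    out[trimmed.slice(0, eq).trim()] = trimmed.slice(eq + 1).trim();
  }
  return out;
}
```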
This project uses Prettier to ensure a consistent code style.
[^1]: GSI-PK / GSI-SK combinations are allowed to be non-unique. However, PK / SK combinations on the main table are not.