NodeJs-MongoDB-on-Google-Kubernetes-Engine-GKE-

Highly Available Simple NodeJs App + MongoDB Deployed on Kubernetes in Google Cloud (GCP) (GKE)


Deploy a Highly Available NodeJs App Connected to MongoDB on Google Kubernetes Engine (GKE) Using Terraform & Jenkins

Architecture

In this project I will deploy a simple Node.js web application (stateless) that interacts with a highly available MongoDB replica set (stateful) spread across 3 zones, consisting of 1 primary and 2 secondaries.

Application Repo: Here

Notes:

  • Only the Management VM (private) will have access to the internet through the NAT.
  • The GKE cluster (private) will NOT have access to the internet.
  • The Management VM will be used to manage the GKE cluster and build/push images to the Artifact Registry.
  • All deployed images must be stored in Artifact Registry.
  • Terraform will create infrastructure for VPC and Jenkins VM.
  • Two Jenkins pipelines:
    • The 1st pipeline will use Terraform to create the Management VM and GKE, then trigger the 2nd pipeline.
    • The 2nd pipeline will deploy the NodeJS and MongoDB applications on GKE.

Requirements

  • Terraform is installed on your machine.

  • GCP Account with Billing Activated.

  • Service Account with Project Owner Access for Jenkins VM (Create it manually through GCP webUI), preferably named (sa-gcp-proj-tf).

    If you choose another SA name, you will have to change it in all project files using the sed command as illustrated in the Steps section.

    (Screenshots: Jenkins + TF service account and its attached roles)

  • Enable the Service Usage API in GCP so that Terraform can communicate with GCP || Service Usage API Activation Link. It can also be enabled via gcloud, see the sketch after this list.

  • Create a Project in GCP and get its ID.
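
    Both the project creation and the API enablement can also be done from the CLI; a minimal sketch, where my-gcp-project is a placeholder project ID:

    gcloud projects create my-gcp-project                                         # create the project (placeholder ID)
    gcloud services enable serviceusage.googleapis.com --project my-gcp-project   # enable the Service Usage API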


Steps

  1. Clone this repo.

    git clone https://github.com/Ziad-Tawfik/NodeJs-MongoDB-on-Google-Kubernetes-Engine-GKE-.git
  2. Open Jenkins-Infra-Terraform/dev-jenkins.tfvars and replace the following variables' values with yours using the sed command mentioned below:

    • SA account ID that you created before with owner role.
    • Project ID
    • Optionally: Jenkins VM Subnet's Region & Zone & CIDR.

    ! Note that the Jenkins VM, Management VM and GKE are all in the same VPC as per the architecture above.

    find </path/to/repo/folder> -type f -exec sed -i 's/<old-text>/<new-text>/g' {} \;
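
    For example, to swap in your own service account name and project ID, run the following from the repo root (my-jenkins-sa and my-gcp-project are placeholders; sa-gcp-proj-tf is the default SA name, and old-project-id stands for whatever project ID currently appears in the files):

    find . -type f -exec sed -i 's/sa-gcp-proj-tf/my-jenkins-sa/g' {} \;
    find . -type f -exec sed -i 's/old-project-id/my-gcp-project/g' {} \;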
  3. Open Terraform/dev.tfvars and replace the following variables' values with yours using the sed command mentioned above:

    • Project ID
    • Optionally: Artifact Repo ID or Regions & Zones of Subnets & VMs.

    ! Note that if you change the project ID, artifact repo ID, region or zone, you will have to modify the other files and replace all the old names with the new ones using the sed command as mentioned above or any other utility.

    ! Note: You can change the root password for the MongoDB admin database by modifying Kube/mongokey.yaml and setting mongodb-root-password to your password encoded in base64.
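
    For example, to produce the base64-encoded value on Linux/macOS (the password below is only a placeholder):

    echo -n 'MyN3wPassword' | base64    # -n avoids encoding a trailing newline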

  4. Push the project with your data to your repo.

  5. Open Bash shell in the cloned Jenkins-Infra-Terraform folder.

  6. Execute the commands below to let Terraform build the infrastructure.

    terraform init
    terraform apply --var-file dev-jenkins.tfvars
  7. Review the Terraform plan and enter yes if all looks good.

  8. Get the Jenkins VM's external IP from the GCP UI and access it in a web browser on port 8080.

    (Screenshot: Jenkins VM)
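
    If you prefer the CLI, the external IP can also be read with gcloud (the instance name jenkins-vm and its zone are assumptions, use whatever Terraform actually created):

    gcloud compute instances list                                    # shows EXTERNAL_IP for all VMs
    gcloud compute instances describe jenkins-vm --zone "vm-zone" \
      --format='get(networkInterfaces[0].accessConfigs[0].natIP)'    # prints just the external IP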

  9. SSH into the Jenkins VM through GCP SSH-in-Browser or from the command line as shown below, get the initial admin password from the path shown on the Jenkins unlock page, then walk through the installation process, installing the suggested plugins and creating an admin user.

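    A minimal command-line sketch (the VM name jenkins-vm and its zone are assumptions; the password file path is the standard Jenkins location and may differ in your setup):

    gcloud compute ssh jenkins-vm --zone "vm-zone"
    sudo cat /var/lib/jenkins/secrets/initialAdminPassword    # initial admin password requested by the unlock page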

  10. Create a pipeline with any name to create the rest of the infrastructure (Management VM + GKE + Artifact Repo), choose Pipeline script from SCM, add your GitHub repo, and set the script path to Infra-Jenkinsfile.


  11. Create a second pipeline named appDeployJob following the same steps as above, but with the script path set to App-Jenkinsfile.


  12. Build the first pipeline; after it finishes, it will automatically trigger the second pipeline.


  13. Authenticate Google Cloud on your machine using an account that has admin access if you want to access the management VM from your terminal.

    Skip steps 13 and 14 if you are going to log in to the management VM from the GCP webUI.

    gcloud init
  14. After Jenkins and Terraform have created the infrastructure, SSH into the management VM using the command below (with your project ID and VM zone) or from the GCP webUI.

    gcloud compute ssh --zone "vm-zone" "management-vm" --tunnel-through-iap --project "your-project-id"
  15. Check the load balancer's external IP and open it in a web browser to verify that the visit counter appears.

    kubectl get svc

    (Screenshots: load balancer external IP, counter in browser)
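
    To print just the external IP, assuming the NodeJS Service is of type LoadBalancer (the service name node-app is hypothetical, use the name shown by kubectl get svc):

    kubectl get svc node-app -o jsonpath='{.status.loadBalancer.ingress[0].ip}'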

  16. Each time you refresh the page or a new client accesses the IP, the number of visits increases.

  17. We can verify the high availability of the infrastructure by taking down any MongoDB pod and checking that the same IP still shows the previous number of visits, incremented by one.

    kubectl delete pod mongo-0

    (Screenshots: counter before deletion, after deletion, counter in browser after deletion)
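
    To watch the StatefulSet recreate the deleted pod while you do this (the pod names assume the default StatefulSet ordinals mongo-0, mongo-1, ...):

    kubectl get pods -w    # run in a second terminal: mongo-0 terminates and is recreated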

  18. To destroy the infrastructure, build the infrastructure pipeline with the destroy option checked.

    (Screenshot: Jenkins destroy build)

  19. To destroy the VPC and Jenkins VM, execute the command below in the Jenkins-Infra-Terraform directory on your local machine.

    terraform destroy --var-file dev-jenkins.tfvars

What Happens Behind-The-Scenes❓

  • Terraform will create two service accounts: one for the Management VM and one for the GKE cluster, each with the required permissions.

    (Screenshot: created service accounts)

  • Set up a Virtual Private Cloud (VPC), configure two Subnets, establish a NAT Gateway for outbound Internet access, define a Firewall Rule to enable IAP (Identity-Aware Proxy) access to the management virtual machine, and create an Artifact Registry to store Docker images.

    (Screenshots: VPC, subnets, NAT gateway, firewall rules, Artifact Registry)
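
    These resources can also be verified from the CLI, for example:

    gcloud compute networks list
    gcloud compute networks subnets list
    gcloud compute routers list           # Cloud NAT is attached to a Cloud Router
    gcloud compute firewall-rules list
    gcloud artifacts repositories list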

  • Provision a Management virtual machine, deploy a Google Kubernetes Engine (GKE) cluster with a node pool, and associate two service accounts with them.

    (Screenshots: Management VM, attached service account, GKE cluster)

  • The startup script on the Management VM will clone this repo and create all the required files under the /simple-node-app directory.

    • 🌳 File tree layout.

    (Screenshot: /simple-node-app file tree on the VM)

  • When run.sh, located in /simple-node-app on the management VM, is executed, the following actions are performed (a rough sketch of such a script follows the list):

    • Authenticate against Artifact Registry and GKE on the management VM.

    • Build Docker images: NodeJS, MongoDB, and the MongoDB sidecar (which facilitates automatic MongoDB configuration).


    • Push the created images to the Artifact Registry.


    • Apply all YAML files found under /simple-node-app/kube to the GKE cluster.

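    A rough sketch of what such a script does; the region, repository, cluster, directory and image names below are placeholders, not necessarily the ones used by run.sh:

    #!/bin/bash
    set -e
    REGION=us-central1                                              # placeholder region
    REPO=$REGION-docker.pkg.dev/my-gcp-project/my-artifact-repo     # placeholder registry path

    # authenticate Docker against Artifact Registry and kubectl against GKE
    gcloud auth configure-docker "$REGION-docker.pkg.dev" --quiet
    gcloud container clusters get-credentials my-gke-cluster --region "$REGION"

    # build and push the images
    for img in node-app mongo mongo-sidecar; do
      docker build -t "$REPO/$img:latest" "./$img"
      docker push "$REPO/$img:latest"
    done

    # apply the Kubernetes manifests
    kubectl apply -f kube/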


🧙‍♂️ Author

Zyad M. Tawfik