/microservices-demo

Sample cloud-native application with 10 microservices showcasing Kubernetes, Istio, gRPC and OpenCensus. Provided for illustration and demo purposes.


Hipster Shop: Cloud-Native Microservices Demo Application for Sysdig Anthos Event

Setup (to be done prior to the demo):

  1. Create a cluster using the ./create-cluster.sh script
  2. Deploy the Sysdig agents: cd sysdig-agents && ./sysdig-agents-GKE-install.sh && cd ..
  3. Run the Hipster app script: ./hipsterapp.sh (the full sequence is collected below)
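
Run end to end from the repo root, the setup is just the three commands above in sequence:

    ./create-cluster.sh                                           # create the GKE cluster
    cd sysdig-agents && ./sysdig-agents-GKE-install.sh && cd ..   # deploy the Sysdig agents
    ./hipsterapp.sh                                               # deploy the Hipster app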

Demo 1: CI/CD with scanning:

  1. Show the GCP console with the newly created cluster
  2. Go to the Applications tab and talk about how to deploy the Sysdig agent directly from there. Also talk about eBPF and how it is used to support deployment on COS.
  3. Mention the different ways to deploy our agent: as an application, an operator, Helm charts, or kubectl commands. We deployed this ahead of time using kubectl commands.
  4. Show the Hipster app by going to the Services tab and clicking on the load balancer IP of the frontend service.
  5. Go to the PowerPoint and explain the workflow of the CI/CD pipeline. Also explain the four bullet points on the slide that cover the next two steps below.
  6. The Jenkins pipeline is built only for the frontend microservice, so make your changes under src/frontend.
  7. Add vulnerabilities to the frontend microservice under src/frontend: expose port 22 in the Dockerfile, add an environment variable containing a password, add a file with a private key, and use an old version of Alpine (3.4). You can simply uncomment the prepared lines in the Dockerfile and comment out others as appropriate (see the sketch after this list).
  8. Run git commit -am 'vuln introduced' and then git push
  9. Show the pipeline progress in Jenkins https://54.208.144.191/jenkins/job/hipster-frontend/
  10. Show the failure report.
  11. Show the scanning in GCR under sysdig-anthos-demo (you have a dev and a prod repo) and highlight the difference in vulnerability scan results between the two.
  12. You can change the page title in src/frontend/templates/header.html: set it to Hipster Shop VERSION 2.0 under the <a href="/" class="navbar-brand d-flex align-items-center"> element. This should already be done for you.
  13. Uncomment and comment the same lines to get the Dockerfile back in shape
  14. Run git commit -am 'vuln removed' and then git push
  15. Show the Jenkins pipeline succeeding
  16. Show the Hipster app with the new Hipster Shop VERSION 2.0 title.
  17. Finish off by showing the Sysdig Scanning Policy that was in place. It's called GoogleEventDemo.
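
For reference, a minimal sketch of the kind of Dockerfile changes step 7 describes. The actual file under src/frontend ships with prepared comment blocks to toggle; the password value and sketch filename here are illustrative, not the real file:

    # Sketch only -- the real src/frontend Dockerfile toggles these via comments
    cat > Dockerfile.vuln-sketch <<'EOF'
    # Old Alpine base with known CVEs
    FROM alpine:3.4
    # Plaintext password in an environment variable (value is hypothetical)
    ENV ADMIN_PASSWORD=hunter2
    # Private key baked into the image
    COPY throwAway.pem /app/key/
    # Exposing SSH -- flagged by the scanning policy
    EXPOSE 22
    EOF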

Demo 2: Performance:

  1. Show the Topology map at the bottom of the K8s Golden Signals for Hipster dashboard and point out that things are going well.
  2. Run the command: kubectl delete deployment checkoutservice (the full command sequence for this demo is collected after this list)
  3. Show that the app is broken by going to the Hipster app and trying to purchase something. This also generates a 500 error for the capture file; sometimes the load generator doesn't generate it in time.
  4. To see the performance degradation in Response Time and Error Rate, check the K8s Golden Signals for Hipster dashboard and switch the time scale between 10 seconds, 1 minute, and 10 minutes.
  5. This can be skipped depending on time: check the capture file, go to HTTP Errors, and drill in. Show the connection problem to the given IP and port, then run kubectl get svc to show that the frontend service can't talk to the checkout service. The error looks like: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 10.35.246.154:5050: i/o timeout" failed to complete the order
  6. Run the command: kubectl apply -f release/kubernetes-manifests.yaml to bring things back to normal.
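
The break/inspect/restore commands for this demo, collected in one place:

    # Break the app by removing the checkout service
    kubectl delete deployment checkoutservice

    # Confirm the frontend can no longer reach the checkout service
    kubectl get svc

    # Bring everything back to normal
    kubectl apply -f release/kubernetes-manifests.yaml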

Demo 3: Runtime Security based on the MITRE ATT&CK matrix:

  1. Privilege Escalation: Launch Privileged Container -- use kubectl apply -f privilegedContainer.yaml, which will trigger the policy (a sketch of the full trigger sequence follows this list)
  2. Execution: Run a terminal shell in a container -- use kubectl exec -it nginx-privileged bash
  3. Discovery: Launch Suspicious Network Tool in Container -- first create a bash history file (it's usually not there), then run an nmap scan -- use touch ~/.bash_history && nmap 10.35.244.69 -Pn -p 50051
  4. Credential Access: Search Private Keys or Passwords -- use grep -ri -e "BEGIN RSA PRIVATE" /app
  5. Exfiltration: Interpreted procs outbound network activity -- in the nmap container, run cp /app/key/throwAway.pem my_file.txt && python /app/connect.py; this will trigger the policy
  6. Defense Evasion: Delete Bash History -- run shred -f ~/.bash_history
  7. Check the capture file generated from the Exfiltration event. (Sysdig Inspect sometimes shows "Unable to load data"; that's okay, you can still follow the steps below.)
     a. Click Sysdig Secure Notifications, Executed Commands, New SSH Connections, and Accessed Files; zoom in around New SSH Connections and talk about how it happened right before the trigger.
     b. Drill into New SSH Connections and show the outbound connection to a public IP.
     c. Go back and drill into Accessed Files.
     d. Search using Find Text for the throwAway file and drill into the I/O stream.
     e. Show how that file contains a private key.
     f. Go back to Accessed Files and search for connect.py.
     g. Drill into the I/O stream.
     h. Talk about the scp operation that exfiltrated the private key to a hacker's machine.
  8. Talk about how we could have looked for private key files during the scanning process. Also mention the kill-container action we could have taken at the very beginning.
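
A sketch of the full trigger sequence as shell commands. The inline manifest is a hypothetical stand-in for privilegedContainer.yaml (the repo's actual file may differ), but a privileged pod like this is what trips the Privilege Escalation policy:

    # Privilege Escalation: launch a privileged container
    # (hypothetical stand-in for privilegedContainer.yaml)
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-privileged
    spec:
      containers:
      - name: nginx
        image: nginx
        securityContext:
          privileged: true
    EOF

    # Execution: open a shell in the privileged container
    kubectl exec -it nginx-privileged -- bash

    # Launch the network-tools container used for the remaining steps
    kubectl run -i --tty nmap --image=samgabrail/networktools -- bash

    # Inside the nmap container: Discovery, Credential Access,
    # Exfiltration, and Defense Evasion triggers, in order
    touch ~/.bash_history && nmap 10.35.244.69 -Pn -p 50051
    grep -ri -e "BEGIN RSA PRIVATE" /app
    cp /app/key/throwAway.pem my_file.txt && python /app/connect.py
    shred -f ~/.bash_history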

Destroy the cluster using the ./destroy-cluster.sh script.

Additional Notes:

  • You can launch the network-tools container with kubectl run -i --tty nmap --image=samgabrail/networktools -- bash
  • Three files in this folder -- ScanningRule.json, CustomFalcoRules.yaml, and MITRE_SysdigSecure_Policies.json -- need to be deployed to Sysdig Secure via the sdc-cli or the API (a hedged API sketch follows these notes).
  • The samgabrail/networktools container used in this demo is on Docker Hub at https://hub.docker.com/r/samgabrail/networktools, and its GitHub repo is samgabrail/falco at https://github.com/samgabrail/falco
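
A hedged sketch of pushing one of those files over the API. The endpoint path is an assumption; check the Sysdig Secure API documentation (or use the sdc-cli) for the exact route your account supports:

    # Assumption: a policies endpoint at /api/v2/policies -- verify against
    # the Sysdig Secure API docs before relying on this path.
    curl -X POST "https://secure.sysdig.com/api/v2/policies" \
      -H "Authorization: Bearer $SYSDIG_SECURE_API_TOKEN" \
      -H "Content-Type: application/json" \
      -d @MITRE_SysdigSecure_Policies.json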

The notes below are the default notes for the Hipster app, taken from its GitHub repo at https://github.com/GoogleCloudPlatform/microservices-demo

Project Notes:

This project contains a 10-tier microservices application. The application is a web-based e-commerce app called “Hipster Shop” where users can browse items, add them to the cart, and purchase them.

Google uses this application to demonstrate use of technologies like Kubernetes/GKE, Istio, Stackdriver, gRPC and OpenCensus. This application works on any Kubernetes cluster (such as a local one), as well as Google Kubernetes Engine. It’s easy to deploy with little to no configuration.

If you’re using this demo, please ★Star this repository to show your interest!

👓Note to Googlers: Please fill out the form at go/microservices-demo if you are using this application.

Screenshots

[Screenshots: store home page and checkout screen]

Service Architecture

Hipster Shop is composed of many microservices written in different languages that talk to each other over gRPC.

[Architecture diagram of the microservices]

Find the Protocol Buffers descriptions in the ./pb directory.

Service               | Language      | Description
frontend              | Go            | Exposes an HTTP server to serve the website. Does not require signup/login and generates session IDs for all users automatically.
cartservice           | C#            | Stores the items in the user's shopping cart in Redis and retrieves them.
productcatalogservice | Go            | Provides the list of products from a JSON file and the ability to search products and get individual products.
currencyservice       | Node.js       | Converts one money amount to another currency. Uses real values fetched from the European Central Bank. It's the highest-QPS service.
paymentservice        | Node.js       | Charges the given credit card info (mock) with the given amount and returns a transaction ID.
shippingservice       | Go            | Gives shipping cost estimates based on the shopping cart. Ships items to the given address (mock).
emailservice          | Python        | Sends users an order confirmation email (mock).
checkoutservice       | Go            | Retrieves the user cart, prepares the order, and orchestrates the payment, shipping, and email notification.
recommendationservice | Python        | Recommends other products based on what's in the cart.
adservice             | Java          | Provides text ads based on given context words.
loadgenerator         | Python/Locust | Continuously sends requests imitating realistic user shopping flows to the frontend.

Features

  • Kubernetes/GKE: The app is designed to run on Kubernetes (both locally on "Docker for Desktop" and on the cloud with GKE).
  • gRPC: Microservices use a high volume of gRPC calls to communicate with each other.
  • Istio: The application works on an Istio service mesh.
  • OpenCensus Tracing: Most services are instrumented using OpenCensus trace interceptors for gRPC/HTTP.
  • Stackdriver APM: Many services are instrumented with Profiling, Tracing, and Debugging. In addition, using Istio enables features like Request/Response Metrics and Context Graph out of the box. When the app runs outside of Google Cloud, this code path remains inactive.
  • Skaffold: The application is deployed to Kubernetes with a single command using Skaffold.
  • Synthetic Load Generation: The demo comes with a background job that creates realistic usage patterns on the website using the Locust load generator.

Installation

We offer three installation methods:

  1. Running locally with “Docker for Desktop” (~20 minutes): you will build and deploy the microservices images to a single-node Kubernetes cluster running on your development machine.

  2. Running on Google Kubernetes Engine (GKE) (~30 minutes): you will build, upload, and deploy the container images to a Kubernetes cluster on Google Cloud.

  3. Using pre-built container images (~10 minutes; you still need to follow one of the options above up until the skaffold run command): you will use publicly available pre-built container images instead of building them yourself (which takes a long time).

Option 1: Running locally with “Docker for Desktop”

💡 Recommended if you're planning to develop the application or giving it a try on your local cluster.

  1. Install tools to run a Kubernetes cluster locally:

    • kubectl (can be installed via gcloud components install kubectl)
    • Docker for Desktop (Mac/Windows): it provides built-in Kubernetes support.
    • skaffold (ensure version ≥v0.20)
  2. Launch “Docker for Desktop”. Go to Preferences:

    • choose “Enable Kubernetes”,
    • set CPUs to at least 3, and Memory to at least 6.0 GiB
    • on the "Disk" tab, set at least 32 GB disk space
  3. Run kubectl get nodes to verify you're connected to “Kubernetes on Docker”.

  4. Run skaffold run (the first run will be slow; it can take ~20 minutes). This will build and deploy the application. If you need to rebuild the images automatically as you refactor the code, run the skaffold dev command.

  5. Run kubectl get pods to verify the Pods are ready and running. The application frontend should be available at http://localhost:80 on your machine (see the quick check below).
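
A quick start-to-finish check for this option, using only the commands above:

    kubectl get nodes    # confirm kubectl is talking to "Kubernetes on Docker"
    skaffold run         # build and deploy; the first run can take ~20 minutes
    kubectl get pods     # wait until all pods are Ready
    # then browse to http://localhost:80 to see the frontend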

Option 2: Running on Google Kubernetes Engine (GKE)

💡 Recommended if you're using Google Cloud Platform and want to try it on a realistic cluster.

  1. Install tools specified in the previous section (Docker, kubectl, skaffold)

  2. Create a Google Kubernetes Engine cluster and make sure kubectl is pointing to the cluster.

    gcloud services enable container.googleapis.com
    gcloud container clusters create demo --enable-autoupgrade \
        --enable-autoscaling --min-nodes=3 --max-nodes=10 --num-nodes=5 --zone=us-central1-a
    kubectl get nodes
    
  3. Enable Google Container Registry (GCR) on your GCP project and configure the docker CLI to authenticate to GCR:

    gcloud services enable containerregistry.googleapis.com
    gcloud auth configure-docker -q
  4. In the root of this repository, run skaffold run --default-repo=gcr.io/[PROJECT_ID], where [PROJECT_ID] is your GCP project ID.

    This command:

    • builds the container images
    • pushes them to GCR
    • applies the manifests in ./kubernetes-manifests, deploying the application to Kubernetes.

    Troubleshooting: If you get a "No space left on device" error on Google Cloud Shell, you can build the images on Google Cloud Build: enable the Cloud Build API, then run skaffold run -p gcb --default-repo=gcr.io/[PROJECT_ID] instead.

  5. Find the IP address of your application, then visit the application on your browser to confirm installation.

    kubectl get service frontend-external
    

    Troubleshooting: A Kubernetes bug (to be fixed in 1.12) combined with a Skaffold bug causes the load balancer not to work even after getting an IP address. If you are seeing this, run kubectl get service frontend-external -o=yaml | kubectl apply -f- to trigger a load balancer reconfiguration.

Option 3: Using Pre-Built Container Images

💡 Recommended if you want to deploy the app faster in fewer steps to an existing cluster.

NOTE: If you need to create a Kubernetes cluster locally or in the cloud, follow "Option 1" or "Option 2" until you reach the skaffold run step.

This option uses pre-built public container images, deployed by applying the release manifest directly to an existing cluster.

Prerequisite: a running Kubernetes cluster (either local or in the cloud).

  1. Clone this repository, and go to the repository directory

  2. Run kubectl apply -f ./release/kubernetes-manifests.yaml to deploy the app.

  3. Run kubectl get pods to verify the pods are in a Ready state.

  4. Find the IP address of your application, then visit the application on your browser to confirm installation.

    kubectl get service/frontend-external

(Optional) Deploying on an Istio-installed GKE cluster

Note: if you followed the GKE deployment steps above, run skaffold delete first to delete what's deployed.

  1. Create a GKE cluster (described in "Option 2").

  2. Use the Istio on GKE add-on to install Istio on your existing GKE cluster.

    gcloud beta container clusters update demo \
        --zone=us-central1-a \
        --update-addons=Istio=ENABLED \
        --istio-config=auth=MTLS_PERMISSIVE

    NOTE: If you need to enable MTLS_STRICT mode, you will need to update several manifest files:

    • kubernetes-manifests/frontend.yaml: delete "livenessProbe" and "readinessProbe" fields.
    • kubernetes-manifests/loadgenerator.yaml: delete "initContainers" field.
  3. (Optional) Enable Stackdriver Tracing/Logging with Istio Stackdriver Adapter by following this guide.

  4. Enable automatic sidecar injection by labeling the default namespace:

    kubectl label namespace default istio-injection=enabled
  5. Apply the manifests in ./istio-manifests directory. (This is required only once.)

    kubectl apply -f ./istio-manifests
  6. Deploy the application with skaffold run --default-repo=gcr.io/[PROJECT_ID].

  7. Run kubectl get pods to verify the pods are healthy and ready.

  8. Find the IP address of your Istio gateway Ingress or Service, and visit the application.

    INGRESS_HOST="$(kubectl -n istio-system get service istio-ingressgateway \
       -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
    echo "$INGRESS_HOST"
    curl -v "http://$INGRESS_HOST"

Cleanup

If you've deployed the application with the skaffold run command, you can run skaffold delete to clean up the deployed resources.

If you've deployed the application with kubectl apply -f [...], you can run kubectl delete -f [...] with the same argument to clean up the deployed resources.
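
Concretely, matching how the app was deployed:

    skaffold delete                                         # if deployed with skaffold run
    kubectl delete -f ./release/kubernetes-manifests.yaml   # if deployed with kubectl apply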

This is not an official Google project.