Comparison Demo: AWS and OpenShift

Introduction

This repo contains all the scripts and code (as submodules) needed to create a demo that supports this walkthrough, which compares and contrasts what it takes to deploy a Tomcat-based website in AWS Elastic Beanstalk vs. deploying the same application in Kubernetes/OpenShift. It is targeted at AWS developers/users who are looking to understand Kubernetes through the lens of AWS's (non-container based) offerings. Concepts covered include:

AWS                       OpenShift
Code Pipeline             OpenShift Pipelines (Tekton)
CodeCommit, CodeBuild     Gogs, Source2Image and Templates (Tomcat8)
RDS                       Operators (MySql, Pipelines)
Elastic Beanstalk         OpenShift Developer Perspective

Tip
There is a .devcontainer in this repo, so if you have Visual Studio Code with remote extensions enabled, you can open the repo folder in a container that has all the tools necessary to run this demo already installed.

The original presentation (Red Hat internal access only, for now) is here

The heart of the presentation is using the following diagrams to map AWS components of a website to OpenShift/Kubernetes ones:

(Diagrams: AWS (Elastic Beanstalk) Architecture with accompanying handout; OpenShift Architecture with accompanying handout.)

This demo is still a work in progress. See here for a list of items that are as yet unfinished.

Warning

Make sure you run the following command before executing any of the commands listed in this demo.

source scripts/shell-setup.sh

OpenShift Setup

MySQL Setup

First install the operator.

Operator Install

Helm

See information on the operator here (the operator itself can be found here). Even better instructions are available here from "banzaicloud".

Then install an agent in the project where you want to install the cluster.

OpenShift Operator

There is no official OpenShift operator for MySQL. Instead, this is the Oracle MySQL Helm-based operator repackaged so that it can be governed by the Operator Lifecycle Manager. See the sql-operator branch for more info.

Warning
FIXME: This step should not be necessary. This is a problem with how the operator was created

Create the mysql-operator project. This is where the instance of the mysql operator container will live. This should really be in openshift-operators, but it seems the operator doesn’t install the operator CRD instance by default

oc new-project mysql-operator
  1. Install the custom catalog source

    Note
    This assumes the operator csv application is already set up in quay.io.
    oc apply -f $DEMO_HOME/kube/operators/mysql-olm-operator/operator-source.yaml
    1. The operator should then be available in the OperatorHub listing under the "custom" provider.

      Note
      If you don’t see it there, take a look at the logs of the marketplace pod of the openshift-marketplace project
  2. Create an instance of the operator

    operator instance

  3. Once created, it should install all the necessary CRDs for setting up a cluster
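Once the CRDs are installed, a quick sanity check from the CLI might look like this (the grep filter is just a loose match; exact CRD names vary by operator version):

oc get crd | grep -i mysql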

The rest of the instructions for creating a MySQL instance once the operator is installed are in the Walkthrough.

OpenShift Pipelines (Tekton)

Pipeline setup is inspired by this post here

From the instructions here

Pipelines Operator Install

  1. Install the subscription (in openshift-operators)

    oc apply -f $DEMO_HOME/kube/tekton/tekton-subscription.yaml
  2. Optionally install tekton dashboard (for visualization) as per here

    oc apply -f $DEMO_HOME/kube/tekton/openshift-tekton-dashboard-release.yaml
    oc wait --for=condition=Available deployment/tekton-dashboard -n openshift-pipelines
  3. Then you can open the dashboard by visiting this URL. It will authenticate using OpenShift OAuth

    echo "https://$(oc get route tekton-dashboard -o jsonpath='{.spec.host}' -n openshift-pipelines)/"
  4. When the operator has finished installing, it will install a pipeline service account, with sufficient permissions to run builds, in all projects. There is also a centralized openshift-pipelines project that holds the pipeline supporting pods (see the quick check after the note below).

    Note
    See also tips and tricks from the pipelines tutorial
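For instance, a quick check that the service account exists in a given project (petclinic-dev is just an example project name) might be:

# the OpenShift Pipelines operator should have created this automatically
oc get serviceaccount pipeline -n petclinic-dev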

AWS Setup

The AWS setup at this point is quite manual for all aspects except Elastic Beanstalk. See also the Still To Come section for more info.

If you are running this demo in a container, you will notice that your local .aws directory is mounted into the container. This is so that the aws CLI installed in the devcontainer can use your host credentials.

Elastic Beanstalk Setup

From the helloworld-pipeline example here

Run the following script and answer the prompts as per the link above

Note
When using eb deploy it appears you must first commit (but not push) into the (local) git repo referenced by the .git of the folder you ran eb init in. The branch it deploys from is recorded in .elasticbeanstalk/config.yml.
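For example, a typical deploy cycle might look something like this (PetClinic-dev is just an example environment name):

# commit locally (no push needed); eb deploy picks up the branch named in .elasticbeanstalk/config.yml
git add -A
git commit -m "update petclinic configuration"
eb deploy PetClinic-dev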

Connecting to existing EB Environment with CLI

Note
See information on this page here.
Warning
You need to have the eb cli installed for this to work. The devcontainer for the compatibility project does have these scripts installed so you should be able to run them from there.
  1. Change to the directory with the git repo that is deployed to eb

  2. run eb init --profile $AWS_PROFILE

  3. Answer the prompts selecting the appropriate region that you want to use

Creating a new EB application

Note
Each environment (Dev and Staging) can take 15-20 minutes to create
  1. Navigate to the following directory under the compatibility demo:

    cd $DEMO_HOME/spring-framework-petclinic/
  2. Follow the instructions for Connecting to existing EB Environment with CLI, but select a new application. Ensure that you choose the proper region. It shouldn't matter what type of application you create.

    Next, run the following script to create a dev environment (use the -n flag if you’d like an application name other than PetClinic)

    FIXME: There may be some parts of the configuration file $DEMO_HOME/aws/elasticbeanstalk/petclinic-dev.cfg.yml that are still specific to my installation (e.g. bastion ssh key) that still need to be parameterized in the script

    $DEMO_HOME/workspaces/comparison-demo/scripts/aws-create-beanstalk.sh

CodeCommit Setup

First, be sure to set up your access following the instructions here. Roughly these steps include:

  1. Add CodeCommitPowerUser policy to your IAM user

  2. Generate a CodeCommit keypair (see Step 3 of this document for more details on creating a keypair)

  3. Create a CodeCommit repo

  4. Copy the SSH key

While this setup is not yet automated, you can use the AWS CLI with (end-user edited) configuration files as in this directory.
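If you want to script those steps, the rough AWS CLI equivalents look like the following (the IAM user, key file, and repository name are placeholders):

# attach the CodeCommit policy to your IAM user
aws iam attach-user-policy \
    --user-name <IAM_USER> \
    --policy-arn arn:aws:iam::aws:policy/AWSCodeCommitPowerUser

# upload the public half of your CodeCommit keypair
aws iam upload-ssh-public-key \
    --user-name <IAM_USER> \
    --ssh-public-key-body file://~/.ssh/codecommit_rsa.pub

# create the repository
aws codecommit create-repository --repository-name spring-framework-petclinic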

CodeBuild Setup

To support the AWS Code Pipeline you need to set up a couple of different builds. Many of these will be governed by buildspec*.yml files in the spring-framework-petclinic submodule. At the time of writing, these are:

  • buildspec-sonarqube.yml

  • buildspec-unittest.yml

  • buildspec.yml

While this is not yet automated, you can use the AWS CLI with (end-user edited) configuration files as in this directory.
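For example, once you have edited one of those JSON configuration files, creating a build project from the CLI is a one-liner (the file name here is hypothetical):

# create a CodeBuild project from an edited JSON definition
aws codebuild create-project --cli-input-json file://codebuild-unittest.json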

AWS Pipeline Setup

Before you set up the pipeline, you first need to set up a CodeCommit repo as well as all the target Elastic Beanstalk environments above. Many of the instructions here are inspired by the helloworld-pipeline example here

  1. Create the code commit repo

  2. Create and name the pipeline

    • be sure to create a service role

Then select your source (GitHub, in this case).
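The same pipeline can also be created from the CLI if you have a JSON definition of its stages (the file name below is hypothetical):

# create the pipeline from a JSON definition describing the source, build, and deploy stages
aws codepipeline create-pipeline --cli-input-json file://petclinic-pipeline.json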

Demo Troubleshooting

MySQL

You can test access to a MySQL database in an OpenShift cluster using the Adminer image.

  1. First, set up port forwarding to the service in question (assuming a petclinic based service as shown in the walkthrough)

    oc port-forward svc/petclinic-mysql 3306:3306
  2. Then, in another shell, run the Adminer image and have it port forward to 8080. NOTE: this assumes you are running on a Mac using Docker for Mac, which is where docker.for.mac.localhost comes from

    docker run -p 8080:8080 -e ADMINER_DEFAULT_SERVER=docker.for.mac.localhost adminer:latest
  3. From the Adminer web page, log in as root (using whatever secret was used in the setup of the cluster). You can then run arbitrary commands. Here are the commands you can run to grant a user pc access to a newly created petclinic database (from here)

    CREATE USER 'pc'@'%' IDENTIFIED BY 'petclinic';
    CREATE DATABASE petclinic;
    GRANT ALL PRIVILEGES ON petclinic.* TO 'pc'@'%';
    1. Or instead, you can run the SQL commands from the local command line

      oc run mysql-client --image=mysql:5.7 --restart=Never --rm=true --attach=true --wait=true \
          -- mysql -h petclinic-mysql -uroot -ppetclinic -e "CREATE USER 'pc'@'%' IDENTIFIED BY 'petclinic'; \
            CREATE DATABASE petclinic; \
            GRANT ALL PRIVILEGES ON petclinic.* TO 'pc'@'%';"
Cluster fails to appear

Check the logs of the mysql-operator pod in the mysql-operator project

oc logs deployment/mysql-operator -n mysql-operator -f

One known issue is that you can't make your cluster name too long:

E0317 07:08:48.666574       1 controller.go:291] error syncing 'test-petclinic4/mysql-cluster-with-3-replicas': validating Cluster: metadata.name: Invalid value: "mysql-cluster-with-3-replicas": longer than maximum supported length 28 (see: https://bugs.mysql.com/bug.php?id=90601)
Configuration Error on Pod

Check to make sure you’ve actually created the secret configured for the cluster
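For example (test-petclinic4 is the project from the error message above; check that the secret named in the Cluster resource appears in the list):

oc get secrets -n test-petclinic4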

Troubleshooting (Local) Tomcat Server

If the tomcat extension fails to run, you can attempt the following:

  1. Remove any old tomcat files

    rm -f /opt/webserver/webse*
  2. Attempt to re-add Tomcat to /opt/webserver per the instructions above

  3. If that still doesn't work, rebuild the container.

  4. If all else fails, you can run the tomcat server locally.
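Running it locally is roughly (assuming Tomcat was unpacked into /opt/webserver as above):

# run Tomcat in the foreground so logs stream to the terminal
/opt/webserver/bin/catalina.sh run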

CI/CD Project Install

Note
The script referenced is run during the Walkthrough. This just gives a little more info in case it's needed for troubleshooting.

A good example of how to get this running with Spring Boot is here; this demo is heavily based on it. This is also a good article about a number of Tekton features used in this demo (such as workspaces), including some others that aren't yet being used (e.g. conditions).

Run the following script to set up the entire cicd project. It will create a project called <PROJECT_PREFIX>-cicd (where <PROJECT_PREFIX> is the value passed to --project-prefix in the command below) if it doesn't exist already, and install all the artifacts into it.

$DEMO_HOME/scripts/create-cicd.sh install --project-prefix petclinic --user <USER> --password <PASSWORD>

The <USER> and <PASSWORD> that are passed in are the username and password needed to create a pull secret for registry.redhat.io, which is needed for the s2i images. The script will associate this secret with the pipeline service account. NOTE: you can use a Red Hat registry service account name and password instead of your own login and password

Warning
This must be run after the corresponding development environment (e.g. petclinic-dev) has been created, or the script will fail. This is because the cicd pipeline needs to update the permissions of the pipeline service account so that it can "see into and change" (i.e. edit) the dev project
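To verify that the pull secret was linked, you can inspect the pipeline service account in the cicd project (the project name assumes the default petclinic prefix):

# the registry.redhat.io pull secret should appear under "Image pull secrets"
oc describe sa pipeline -n petclinic-cicd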

OpenShift Nexus Installation

The $DEMO_HOME/scripts/create-cicd.sh will create a Nexus instance within the petclinic-cicd project and will configure the repo accordingly so that the application can be built appropriately. Should something go wrong, this section outlines steps that the script should have undertaken so that you can troubleshoot.

nexus maven public

The original petclinic app uses some repos outside of maven central. Namely:

Here’s how you would manually configure these in Nexus:

  1. Connect to the nexus instance (see route)

    echo "http://$(oc get route nexus -n petclinic-cicd -o jsonpath='{.spec.host}')/"
  2. Log into the nexus instance (standard nexus setup has admin, admin123)

  3. Go to Repositories and Create Repository for each of the repos needed

    nexus repositories

    1. Here’s example configuration for each of the above

      (Example configuration screenshots: Spring repo, Red Hat repo)

  4. Add the two repositories to the maven-public group as per the screenshot

    FIXME: This is necessary until every build gets a semantic version number update

  5. Update the maven-releases repo to allow updates like below:

    nexus repo allow redeploy

OpenShift Pipeline (Git) Triggers

Tekton provides EventListeners, TriggerTemplates, and TriggerBindings that allow a git repo to hit a webhook and trigger a build. See also here. To get basic triggers going for both Gogs and GitHub, run the following:

Note
For an example of triggers working with Tekton, see files in the template directory of this repo
Note
You may also want to consider this tekton dashboard functionality
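A minimal sketch of applying the trigger manifests (this assumes the YAML in the triggers directory listed below applies cleanly into the cicd project):

oc apply -R -f $DEMO_HOME/kube/tekton/triggers -n petclinic-cicd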

YAML resources for the pipeline created for this demo can be found in these locations:

  1. Resources: $DEMO_HOME/kube/tekton/resources

  2. Triggers: $DEMO_HOME/kube/tekton/triggers

Triggered Pipeline Fails to Run

If the trigger doesn’t appear to fire, then check the logs of the pod that is running that represents the webhook. The probably is likely in the PipelineRun template.

Viewing (Extended) OpenShift Pipeline (Tekton) Logs

You can see limited logs in the Tekton UI, but if you want the full logs, you can access these from the command line using the tkn command

# Get the list of pipelineruns in the current project
tkn pipelinerun list

# Output the full logs of the named pipeline run (where petclinic-deploy-dev-run-j7ktj is a pipeline run name)
tkn pipelinerun logs petclinic-deploy-dev-run-j7ktj

To output the logs of a currently running pipelinerun (pr) and follow them, use:

tkn pr logs -L -f

Troubleshooting ElasticBeanstalk

ssh Connection to Instances

You can connect to an instance to debug and troubleshoot it by running this command (where PetClinic-dev is the name of the environment that you’ve created)

Note
You must have whatever key you referenced in the petclinic config available to the environment (see .devcontainer, it’s assumed the key is available in the host’s .ssh directory)
eb ssh PetClinic-dev

You can find the tomcat logs in the following directory:

cd /usr/share/tomcat8/logs
tail -f catalina.out

You can restart the webserver with

cd /usr/share/tomcat8
# clear out the old logs
sudo rm logs/catalina.out
sudo service tomcat8 restart

Saving an ElasticBeanstalk configuration with the CLI

As long as you have linked an EB application to the CLI, you can download a configuration using the EB CLI eb config command, as shown in the following example, where PetClinic3 is the name of your saved configuration.

$ eb config get PetClinic3

Configuration saved at: /workspaces/comparison-demo/spring-framework-petclinic/.elasticbeanstalk/saved_configs/PetClinic3.cfg.yml

To get a list of configurations, run

eb config list

Updating an ElasticBeanstalk configuration with the CLI

Once you have downloaded a configuration you can edit that configuration locally and then update it.

  1. First open your configuration file that you downloaded previously (e.g. .elasticbeanstalk/saved_configs/PetClinic3.cfg.yml)

    • for example, change the description of the configuration

  2. Run eb config put .elasticbeanstalk/saved_configs/PetClinic3.cfg.yml

  3. To apply the configuration, open the console (eb console or eb console -debug when in a container to get the command to print out the EB console URL)

Note
For more information on the create command see here

Troubleshooting RDS Databases

Access databases by using SSH to an EC2 instance that has access to the RDS instance you want to reach, forwarding port 3306 locally through to the database server. mwh-bastion.pem is the SSH key you used to set up the EB environment

 ssh -i "mwh-bastion.pem" ec2-user@ec2-13-210-144-99.ap-southeast-2.compute.amazonaws.com -L 3306:aa3s38alasq0v1.cjpxzung9qid.ap-southeast-2.rds.amazonaws.com:3306
Note

It may also be possible to use eb ssh to accomplish this using the eb cli

eb ssh -c "-L 3306:aa5pwvn1s0ap5r.cjpxzung9qid.ap-southeast-2.rds.amazonaws.com:3306"

except that, when running in a container, it appears that binding the port doesn't work properly

Then use the Adminer pod to debug

docker run -p 8080:8080 -e ADMINER_DEFAULT_SERVER=docker.for.mac.localhost adminer:latest

Appendix

Still To Come

  1. Programmatic creation of AWS Components (currently only Elastic Beanstalk supported). See .json files here

  2. Update OpenShift Pipeline UnitTest to use surefire:test goal for unit test (and allow viewing of report)

  3. OpenShift pipeline to update version number of every build

  4. Add a TaskRun that would support cleaning the maven build and/or deleting all build and package artifacts in the maven workspace

  5. Update the OLM MySQL Operator to have a custom icon and reference relevant CRDs (like Cluster)