
UnifiedPush Operator

Overview

The UnifiedPush Operator for Kubernetes provides an easy way to install and manage an AeroGear UnifiedPush Server on OpenShift.

Prerequisites

Install Go

Ensure the $GOPATH environment variable is set

Install the dep package manager

Install Operator-SDK

Install kubectl

Getting Started

Cloning the repository

The following command creates a local directory and clones this project into your $GOPATH:

$ git clone git@github.com:aerogear/unifiedpush-operator.git $GOPATH/src/github.com/aerogear/unifiedpush-operator

Minishift installation and setup

Install Minishift, then set it up for the operator by running the following commands.

# create a new profile to test the operator
$ minishift profile set unifiedpush-operator

# enable the admin-user add-on
$ minishift addon enable admin-user

# add insecure registry to download the images from docker
$ minishift config set insecure-registry 172.30.0.0/16

# start the instance
$ minishift start
ℹ️
These steps are not required on OpenShift 4 or later, since OLM and the Operators come installed by default.

Installing

Use the following command to install the UnifiedPush Operator and Service in your OpenShift cluster:

$ make install
This installs an example configuration for your Push Server. To learn how to configure it, see UnifiedPushServer Options.
ℹ️
To install, you need to be logged in as a user with cluster privileges, such as the system:admin user, e.g. by running oc login -u system:admin.

Creating PushApplication

  • Create a PushApplication CR as this example.

    The app name and description need to be specified in the PushApplication CR as follows.

    apiVersion: push.aerogear.org/v1alpha1
    kind: PushApplication
    metadata:
      name: example-pushapplication
    spec:
      description: 'An example push application to demonstrate the
        unifiedpush-operator'
  • Run the following command to create the PushApplication in the Service

    $ make example-pushapplication/apply
    ℹ️
    You can delete it by running make example-pushapplication/delete

Creating an AndroidVariant for your App

After creating the PushApplication above, you can get the pushApplicationId from its status; you will need it to create Variants:

kubectl get pushApplication example-pushapplication -n unifiedpush-apps -o jsonpath='{.status.pushApplicationId}'
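The retrieved id can then be substituted into a variant manifest before applying it. A minimal sketch, assuming a template fragment that uses a {{PUSH_APP_ID}} placeholder (the placeholder name and file path are illustrative, not part of this project):

```shell
# Assume the id was captured from the PushApplication status, e.g.:
#   PUSH_APP_ID=$(kubectl get pushApplication example-pushapplication \
#     -n unifiedpush-apps -o jsonpath='{.status.pushApplicationId}')
PUSH_APP_ID="0d0dcc1f-aaaa-bbbb-cccc-000000000000"  # placeholder value

# Write a tiny template fragment and substitute the id into it.
printf 'pushApplicationId: {{PUSH_APP_ID}}\n' > /tmp/variant-fragment.yaml
sed -i "s/{{PUSH_APP_ID}}/${PUSH_APP_ID}/" /tmp/variant-fragment.yaml
cat /tmp/variant-fragment.yaml
```

The resulting fragment could then be merged into a full variant CR and applied with kubectl apply.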

Here are all of the configurable fields in an AndroidVariant:

AndroidVariant fields

Field Name         Description
pushApplicationId  ID of the PushApplication that this variant corresponds to
description        Human-friendly description for the variant
senderId           The "Google Project Number" from the API Console
serverKey          The key from the Firebase Console of a project which has been enabled for FCM

  • Apply an AndroidVariant CR based on the example as follows:

    kubectl apply -n unifiedpush-apps -f ./deploy/crds/examples/push_v1alpha1_androidvariant_cr.yaml
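For reference, an AndroidVariant CR following the fields above might look roughly like this (a sketch modeled on the PushApplication example; the senderId and serverKey values are placeholders, and the shipped example file may differ):

```yaml
apiVersion: push.aerogear.org/v1alpha1
kind: AndroidVariant
metadata:
  name: example-androidvariant
spec:
  pushApplicationId: <id-from-the-pushapplication-status>
  description: 'An example Android variant'
  senderId: '123456789012'
  serverKey: '<fcm-server-key>'
```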

Creating an IOSVariant for your App

After creating the PushApplication above, you can get the pushApplicationId from its status; you will need it to create Variants:

kubectl get PushApplication example-pushapplication -n unifiedpush-apps -o jsonpath='{.status.pushApplicationId}'

Here are all of the configurable fields in an IOSVariant:

IOSVariant fields

Field Name         Description
pushApplicationId  ID of the PushApplication that this variant corresponds to
description        Human-friendly description for the variant
certificate        The base64-encoded APNs certificate that is needed to establish a connection to Apple’s APNs Push Servers
passphrase         The APNs passphrase that is needed to establish a connection to Apple’s APNs Push Servers
production         If true, a connection to the production APNs server is used; if false, a connection to the Sandbox/Development APNs server is used.
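Since the certificate field must be base64 encoded, you can prepare it from an exported APNs .p12 file along these lines (a sketch; the file name is an assumption, and on macOS base64 takes -i instead of -w0):

```shell
# Create a stand-in for the exported APNs certificate
# (replace with your real .p12 file).
printf 'dummy-cert-bytes' > /tmp/apns-cert.p12

# Encode it as a single base64 line, suitable for the
# certificate field of the IOSVariant CR.
base64 -w0 /tmp/apns-cert.p12
```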

  • Apply an IOSVariant CR based on the example as follows:

    kubectl apply -n unifiedpush-apps -f ./deploy/crds/examples/push_v1alpha1_iosvariant_cr.yaml

Uninstalling

Use the following command to delete all configuration applied by make install for this project.

$ make cluster/clean
ℹ️
To uninstall, you need to be logged in as a user with cluster privileges, such as the system:admin user, e.g. by running oc login -u system:admin.

Configuration

UnifiedPushServer Options

This is the main installation resource kind. Creation of a valid UnifiedPushServer CR will result in a functional AeroGear UnifiedPushServer deployed to your namespace.

ℹ️

This operator currently supports only one UnifiedPushServer CR.

Here are all of the configurable fields in a UnifiedPushServer:

UnifiedPushServer fields

Field Name  Description
backups     A list of backup entries that CronJobs will be created from. See ./deploy/crds/push_v1alpha1_unifiedpushserver_cr_with_backup.yaml for an annotated example. Note that a ServiceAccount called "backupjob" must already exist before the operator will create any backup CronJobs. See https://github.com/integr8ly/backup-container-image/tree/master/templates/openshift/rbac for an example.

The most basic UnifiedPushServer CR doesn’t specify anything in the Spec section, so the example in ./deploy/crds/push_v1alpha1_unifiedpushserver_cr.yaml is a good template:

push_v1alpha1_unifiedpushserver_cr.yaml
apiVersion: push.aerogear.org/v1alpha1
kind: UnifiedPushServer
metadata:
  name: example-unifiedpushserver

To create this, you can run:

kubectl apply -n unifiedpush -f ./deploy/crds/push_v1alpha1_unifiedpushserver_cr.yaml

To see the created instance then, you can run:

kubectl get ups example-unifiedpushserver -n unifiedpush -o yaml

Image Streams

The operator uses three image streams, and which image streams to use is configurable with environment variables.

The UnifiedPush Server and OAuth proxy image streams are created in the same namespace by the operator. For Postgres, however, the image stream in the openshift namespace is used.

The following table shows the available environment variable names, along with their default values:

Environment Variables

Name                                    Default                                                   Purpose
UPS_IMAGE_STREAM_NAME                   ups-imagestream                                           Name of the Unified Push image stream that will be created by the operator.
UPS_IMAGE_STREAM_TAG                    latest                                                    Tag of the Unified Push image stream that will be created by the operator.
UPS_IMAGE_STREAM_INITIAL_IMAGE          docker.io/aerogear/unifiedpush-wildfly-plain:2.2.1.Final  Initial image for the Unified Push image stream that will be created by the operator.
OAUTH_PROXY_IMAGE_STREAM_NAME           ups-oauth-proxy-imagestream                               Name of the OAuth proxy image stream that will be created by the operator.
OAUTH_PROXY_IMAGE_STREAM_TAG            latest                                                    Tag of the OAuth proxy image stream that will be created by the operator.
OAUTH_PROXY_IMAGE_STREAM_INITIAL_IMAGE  docker.io/openshift/oauth-proxy:v1.1.0                    Initial image for the OAuth proxy image stream that will be created by the operator.
POSTGRES_IMAGE_STREAM_NAMESPACE         openshift                                                 Namespace to look for the Postgres image stream.
POSTGRES_IMAGE_STREAM_NAME              postgresql                                                Name of the Postgres image stream to look for.
POSTGRES_IMAGE_STREAM_TAG               10                                                        Tag of the Postgres image stream.

🔥
Re-deploying this operator with customized images will cause all instances owned by the operator to be updated.
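These variables are set on the operator's own container. As a sketch, overriding a default in deploy/operator.yaml might look like the following (the container name and Deployment structure are assumptions based on a typical operator manifest):

```yaml
spec:
  template:
    spec:
      containers:
        - name: unifiedpush-operator
          env:
            - name: UPS_IMAGE_STREAM_TAG
              value: "2.2.1"
            - name: POSTGRES_IMAGE_STREAM_NAMESPACE
              value: "openshift"
```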

Container Names

If you would like to modify the container names, you can use the following environment variables.

Environment Variables

Name                        Default
UPS_CONTAINER_NAME          ups
OAUTH_PROXY_CONTAINER_NAME  ups-oauth-proxy
POSTGRES_CONTAINER_NAME     postgresql

Backups

The BACKUP_IMAGE environment variable configures what image to use for backing up the custom resources created by this operator. Default value is quay.io/integreatly/backup-container:1.0.8.

Monitoring Service (Metrics)

The application-monitoring stack provisioned by the application-monitoring-operator on Integr8ly can be used to gather metrics from this operator and the UnifiedPush Server. These metrics can be used by Integr8ly’s application monitoring to generate Prometheus metrics, AlertManager alerts and a Grafana dashboard.

It is required that the integr8ly/Grafana and Prometheus operators are installed. For further detail see integr8ly/application-monitoring-operator.

The following command enables the monitoring service in the operator namespace:

make monitoring/install
The namespaces are set up manually in the ServiceMonitor, Prometheus Rules, Operator Service, and Grafana Dashboard files. You should update them if the operator is not installed in the default namespace. The following is an example from the Prometheus Rules:
  expr: |
          (1-absent(kube_pod_status_ready{condition="true", namespace="mobile-security-service"})) or sum(kube_pod_status_ready{condition="true", namespace="mobile-security-service"}) != 3

ℹ️
The command make monitoring/uninstall will uninstall the Monitor Service.
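The namespace replacement described above can be scripted. A minimal sketch that rewrites the hard-coded namespace in a rule expression (the file path is illustrative; in practice you would run this over the monitoring manifest files):

```shell
NEW_NS="my-namespace"

# Stand-in for a Prometheus rule file containing the hard-coded namespace.
printf '%s\n' 'kube_pod_status_ready{condition="true", namespace="mobile-security-service"}' > /tmp/rule.yaml

# Replace every occurrence of the old namespace with the new one.
sed -i "s/namespace=\"mobile-security-service\"/namespace=\"${NEW_NS}\"/g" /tmp/rule.yaml
cat /tmp/rule.yaml
```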

Development

Running the operator

  1. Prepare the operator project:

make cluster/prepare
  2. Run the operator (locally, not in OpenShift):

make code/run
  3. Create a UPS instance (in another terminal):

kubectl apply -f deploy/crds/push_v1alpha1_unifiedpushserver_cr.yaml -n unifiedpush
  4. Watch the status of your UPS instance provisioning (optional):

watch -n1 "kubectl get po -n unifiedpush && echo '' && kubectl get ups -o yaml -n unifiedpush"
  5. If you want to work with resources that require your local operator instance to talk to the UPS instance in the cluster, you’ll need to make a corresponding domain name resolvable locally. Something like the following should work: it adds an entry to /etc/hosts for the example Service that’s created, then forwards the port from the relevant Pod in the cluster to the local machine. Run this in a separate terminal, and press ctrl+c to clean it up when finished:

# su/sudo is needed to be able to:
# - modify /etc/hosts
# - bind to port :80
KUBECONFIG=$HOME/.kube/config su -c "echo '127.0.0.1   example-unifiedpushserver-unifiedpush' >> /etc/hosts && kubectl port-forward $(kubectl get po -l service=ups -o name) 80:8080 && sed -i -e 's/^127.0.0.1   example-unifiedpushserver-unifiedpush$//g' -e '/^[[:space:]]*$/d' /etc/hosts"
  6. When finished, clean up:

make cluster/clean

Testing

Run unit tests

make test/unit

Run e2e tests

  1. Export env vars used in commands below

export NAMESPACE="<name-of-your-openshift-project-used-for-testing>"
export IMAGE="quay.io/<your-account-name>/unifiedpush-operator"
  2. Log in to the OpenShift cluster as a user with the cluster-admin role

oc login <url> --token <token>
  3. Prepare a new OpenShift project for testing

make NAMESPACE=$NAMESPACE cluster/prepare
  4. Modify the operator image name in the manifest file

yq w -i deploy/operator.yaml spec.template.spec.containers[0].image $IMAGE

Note: If you do not have yq installed, simply edit the image name in deploy/operator.yaml.

  5. Build and push the operator container image to your Docker Hub/Quay image repository, e.g.

operator-sdk build $IMAGE --enable-tests && docker push $IMAGE
  6. Run the test

operator-sdk test cluster $IMAGE --namespace $NAMESPACE --service-account unifiedpush-operator

Publishing images

Images are automatically built and pushed to our image repository by Jenkins in the following cases:

  • For every change merged to master, a new image with the master tag is published.

  • For every merged change that has a git tag, a new image with the <operator-version> and latest tags is published.

Tagging a Release

Follow these steps:

  1. Create a new version tag following semver, for example 0.1.0

  2. Bump the version in the version.go file.

  3. Update the CHANGELOG.md with the new release.

  4. Update any tag references in all SOP files (e.g. https://github.com/aerogear/unifiedpush-operator/blob/0.1.0/SOP/SOP-operator.adoc)

  5. Create a git tag with the version value, for example:

    $ git tag -a 0.1.0 -m "version 0.1.0"
  6. Push the new tag to the upstream repository; this will trigger an automated release by Jenkins, for example:

    $ git push upstream 0.1.0
    ℹ️
    The image with the tag will be created and pushed to the unifiedpush-operator image hosting repository by Jenkins.

Architecture

This operator is cluster-scoped. For further information see the Operator Scope section in the Operator Framework documentation. Also, check its roles in the deploy directory.

ℹ️
The operator, application and database will be installed in the namespace which will be created by this project.

CI/CD

CircleCI

  • Coveralls

  • Unit Tests

ℹ️
See the config.yml.

Jenkins

  • Integration Tests

  • Build of images

ℹ️
See the Jenkinsfile.

Makefile command reference

Application

Command                              Description
make install                         Creates the {namespace} namespace, application CRDs, cluster role and service account.
make cluster/clean                   Deletes everything created by make cluster/prepare.
make monitoring/install              Installs the Monitoring Service in order to provide metrics.
make monitoring/uninstall            Uninstalls the Monitoring Service, i.e. removes all configuration applied by make monitoring/install.
make example-pushapplication/apply   Applies the example PushApplication CR.
make example-pushapplication/delete  Deletes the example PushApplication CR.
make cluster/prepare                 Applies everything except operator.yaml.

Local Development

make code/run                        Runs the operator locally for development purposes.
make code/gen                        Sets up the environment for debugging purposes.
make code/vet                        Examines source code and reports suspicious constructs using vet.
make code/fix                        Formats code using gofmt.

Jenkins

make test/compile                    Compiles the image to be used in the e2e tests.
make code/compile                    Compiles the image to be used by Jenkins.

Tests / CI

make test/integration-cover          Runs the integration tests with coverage reporting for Coveralls.
make test/unit                       Runs unit tests.
make code/build/linux                Builds the image with the parameters required for CircleCI.

ℹ️
The Makefile defines the tasks you should use to work with this project.

Supportability

This operator was developed using the Kubernetes and OpenShift APIs.

Currently this project requires v1.Route to expose the service and the OAuth proxy for authentication, which makes it unsupported on plain Kubernetes. It also uses ImageStream, which is specific to the OpenShift API. As such, this project is not currently compatible with vanilla Kubernetes; however, in the future we aim to make it work there as well.

Security Response

If you’ve found a security issue that you’d like to disclose confidentially please contact the Red Hat Product Security team.

The UnifiedPush Operator is licensed under the Apache License, Version 2.0 License, and is subject to the AeroGear Export Policy.

Contributing

All contributions are hugely appreciated. Please see our Contributing Guide for guidelines on how to open issues and pull requests. Please check out our Code of Conduct too.

Questions

There are a number of ways you can get in touch with us; please see the AeroGear community.