This is the home of Dynatrace OneAgent Operator, which supports the rollout and lifecycle of Dynatrace OneAgent in Kubernetes and OpenShift clusters. Rolling out Dynatrace OneAgent via DaemonSet on a cluster is straightforward, but maintaining its lifecycle places a burden on the operational team. Dynatrace OneAgent Operator closes this gap by automating the repetitive steps involved in keeping Dynatrace OneAgent at its latest desired version.
Dynatrace OneAgent Operator is based on Operator SDK and uses its framework for interacting with Kubernetes and OpenShift environments.
It watches the OneAgent custom resources and constantly monitors their desired state. The rollout of Dynatrace OneAgent is initially managed by a DaemonSet; from there on, Dynatrace OneAgent Operator controls the lifecycle, keeps track of new versions, and triggers updates when required.
Depending on the version of the Dynatrace OneAgent Operator, the following platforms are supported:
Dynatrace OneAgent Operator version | Kubernetes | OpenShift Container Platform |
---|---|---|
master | 1.11+ | 3.11+ |
v0.6.0 | 1.11+ | 3.11+ |
v0.5.4 | 1.11+ | 3.11+ |
v0.4.2 | 1.11+ | 3.11+ |
v0.3.1 | 1.11-1.15 | 3.11+ |
v0.2.1 | 1.9-1.15 | 3.9+ |
The help topic How do I deploy Dynatrace OneAgent as a Docker container? lists compatible image and OneAgent versions in its requirements section.
The Dynatrace OneAgent Operator acts in its own namespace dynatrace. It holds the operator deployment and all dependent objects such as permissions, custom resources, and the corresponding DaemonSets. Create the necessary objects and observe the operator's logs:
$ kubectl create namespace dynatrace
$ LATEST_RELEASE=$(curl -s https://api.github.com/repos/dynatrace/dynatrace-oneagent-operator/releases/latest | grep tag_name | cut -d '"' -f 4)
$ kubectl apply -f https://github.com/Dynatrace/dynatrace-oneagent-operator/releases/download/$LATEST_RELEASE/kubernetes.yaml
$ kubectl -n dynatrace logs -f deployment/dynatrace-oneagent-operator
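Optionally, verify that the operator pod reaches the Running state before continuing:
$ kubectl -n dynatrace get pods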
For OpenShift, start by adding a new project as follows:
$ oc adm new-project --node-selector="" dynatrace
If you are installing the Operator on an OpenShift Container Platform 3.11 environment, you need to provide image pull secrets in order to use the certified OneAgent Operator and OneAgent images from Red Hat Container Catalog (RHCC). The Service Accounts in the openshift.yaml manifest already link to the secrets created below. Skip this step if you are using OCP 4.x.
# For OCP 3.11
$ oc -n dynatrace create secret docker-registry redhat-connect --docker-server=registry.connect.redhat.com --docker-username=REDHAT_CONNECT_USERNAME --docker-password=REDHAT_CONNECT_PASSWORD --docker-email=unused
$ oc -n dynatrace create secret docker-registry redhat-connect-sso --docker-server=sso.redhat.com --docker-username=REDHAT_CONNECT_USERNAME --docker-password=REDHAT_CONNECT_PASSWORD --docker-email=unused
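As a quick sanity check, you can confirm that both pull secrets were created:
$ oc -n dynatrace get secret redhat-connect redhat-connect-sso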
Finally, for both OCP 4.x and 3.11, apply the openshift.yaml manifest to deploy the Operator:
$ LATEST_RELEASE=$(curl -s https://api.github.com/repos/dynatrace/dynatrace-oneagent-operator/releases/latest | grep tag_name | cut -d '"' -f 4)
$ oc apply -f https://github.com/Dynatrace/dynatrace-oneagent-operator/releases/download/$LATEST_RELEASE/openshift.yaml
$ oc -n dynatrace logs -f deployment/dynatrace-oneagent-operator
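Optionally, wait for the operator rollout to finish before creating the custom resource:
$ oc -n dynatrace rollout status deployment/dynatrace-oneagent-operator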
The rollout of Dynatrace OneAgent is governed by a custom resource of type OneAgent:
apiVersion: dynatrace.com/v1alpha1
kind: OneAgent
metadata:
  # a descriptive name for this object.
  # all created child objects will be based on it.
  name: oneagent
  namespace: dynatrace
spec:
  # dynatrace api url including `/api` path at the end
  apiUrl: https://ENVIRONMENTID.live.dynatrace.com/api
  # disable certificate validation checks for installer download and API communication
  skipCertCheck: false
  # name of secret holding `apiToken` and `paasToken`
  # if unset, name of custom resource is used
  tokens: ""
  # node selector to control the selection of nodes (optional)
  nodeSelector: {}
  # https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ (optional)
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
    operator: Exists
  # oneagent installer image (optional)
  # certified image from Red Hat Container Catalog for use on OpenShift: registry.connect.redhat.com/dynatrace/oneagent
  # defaults to docker.io/dynatrace/oneagent
  image: ""
  # arguments to oneagent installer (optional)
  # https://www.dynatrace.com/support/help/shortlink/oneagent-docker#limitations
  args:
  - APP_LOG_CONTENT_ACCESS=1
  # environment variables for oneagent (optional)
  env: []
  # resource settings for oneagent pods (optional)
  # consumption of oneagent heavily depends on the workload to monitor
  # please adjust values accordingly
  #resources:
  #  requests:
  #    cpu: 100m
  #    memory: 512Mi
  #  limits:
  #    cpu: 300m
  #    memory: 1.5Gi
  # priority class to assign to oneagent pods (optional)
  # https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
  #priorityClassName: PRIORITYCLASS
  # disables automatic restarts of oneagent pods in case a new version is available
  #disableAgentUpdate: false
  # when enabled, and if Istio is installed on the Kubernetes environment, then the Operator will create the corresponding
  # VirtualService and ServiceEntries objects to allow access to the Dynatrace cluster from the agent.
  #enableIstio: false
  # DNS Policy for OneAgent pods (optional). Empty for default (ClusterFirst), more at
  # https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
  #dnsPolicy: ""
  # Labels are customer defined labels for oneagent pods to structure workloads as desired
  #labels:
  #  custom: label
  # Name of the service account for the OneAgent (optional)
  #serviceAccountName: "dynatrace-oneagent"
Save the snippet to a file or use ./deploy/cr.yaml from this repository and adjust its values accordingly. A secret holding tokens for authenticating to the Dynatrace cluster needs to be created upfront. Create access tokens of type Dynatrace API and Platform as a Service and use their values in the commands below, respectively. For assistance, please refer to Create user-generated access tokens.
For OpenShift, you can change the image from the default available on Quay.io to the one certified on RHCC by setting .spec.image to registry.connect.redhat.com/dynatrace/oneagent in the custom resource.
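For example, the relevant excerpt of the custom resource would then look like this (all other fields as shown above):
spec:
  apiUrl: https://ENVIRONMENTID.live.dynatrace.com/api
  image: registry.connect.redhat.com/dynatrace/oneagent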
Note: .spec.tokens denotes the name of the secret holding the access tokens. If it is not specified, OneAgent Operator searches for a secret with the same name as the OneAgent custom resource (.metadata.name).
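For example, assuming you set tokens: dynatrace-tokens in the custom resource (the name is only an illustration), the secret would be created as:
$ kubectl -n dynatrace create secret generic dynatrace-tokens --from-literal="apiToken=DYNATRACE_API_TOKEN" --from-literal="paasToken=PLATFORM_AS_A_SERVICE_TOKEN"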
$ kubectl -n dynatrace create secret generic oneagent --from-literal="apiToken=DYNATRACE_API_TOKEN" --from-literal="paasToken=PLATFORM_AS_A_SERVICE_TOKEN"
$ kubectl apply -f cr.yaml
$ oc -n dynatrace create secret generic oneagent --from-literal="apiToken=DYNATRACE_API_TOKEN" --from-literal="paasToken=PLATFORM_AS_A_SERVICE_TOKEN"
$ oc apply -f cr.yaml
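After a short time, the operator creates a DaemonSet based on the name of the custom resource and OneAgent pods start on the selected nodes. A quick way to check the rollout (shown with kubectl; the same subcommands work with oc):
$ kubectl -n dynatrace get oneagent
$ kubectl -n dynatrace get daemonset
$ kubectl -n dynatrace get pods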
Remove the OneAgent custom resources and clean up all remaining OneAgent Operator-specific objects:
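The commands below reuse the $LATEST_RELEASE variable from the installation steps; if it is no longer set in your shell, it can be derived again in the same way:
$ LATEST_RELEASE=$(curl -s https://api.github.com/repos/dynatrace/dynatrace-oneagent-operator/releases/latest | grep tag_name | cut -d '"' -f 4)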
$ kubectl delete -n dynatrace oneagent --all
$ kubectl delete -f https://github.com/Dynatrace/dynatrace-oneagent-operator/releases/download/$LATEST_RELEASE/kubernetes.yaml
$ oc delete -n dynatrace oneagent --all
$ oc delete -f https://github.com/Dynatrace/dynatrace-oneagent-operator/releases/download/$LATEST_RELEASE/openshift.yaml
The enableIstio feature requires a restart of the operator if Istio is installed after the operator has been deployed.
Background: this happens because the cache maintained by controller-runtime's Kubernetes client is not dynamic. The issue is reported in kubernetes-sigs/controller-runtime#321 and a fix is currently a work in progress in kubernetes-sigs/controller-runtime#554.
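One way to trigger such a restart (rollout restart requires kubectl 1.15 or newer; the same subcommand works with oc):
$ kubectl -n dynatrace rollout restart deployment/dynatrace-oneagent-operator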
See HACKING for details on how to get started enhancing Dynatrace OneAgent Operator.
See CONTRIBUTING for details on submitting changes.
Dynatrace OneAgent Operator is under Apache 2.0 license. See LICENSE for details.