# New Relic integration for Kubernetes
New Relic Integration for Kubernetes instruments the container orchestration layer by reporting metrics from Kubernetes objects. It gives you visibility into Kubernetes namespaces, deployments, replica sets, nodes, pods, and containers. Metrics are collected from several sources:
- The kube-state-metrics service provides information about the state of Kubernetes objects such as namespaces, replica sets, deployments, and pods (when they are not in a running state).
- The `/stats/summary` kubelet endpoint gives information about network, errors, memory, and CPU usage.
- The `/pods` kubelet endpoint provides information about the state of running pods and containers.
- The `/metrics/cadvisor` cAdvisor endpoint provides missing data that is not included in the other sources.
- Node labels are retrieved from the Kubernetes API server.
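To get a feel for the raw data these sources expose, you can query them by hand through the API server proxy. This is a minimal sketch, assuming `kubectl` is configured for your cluster and that kube-state-metrics is deployed as a `kube-state-metrics` service in `kube-system` (service and port names vary by install):

```bash
# Proxy the Kubernetes API to localhost
kubectl proxy --port=8001 &

# Kubelet summary endpoint for a node (replace <node-name>)
curl http://localhost:8001/api/v1/nodes/<node-name>/proxy/stats/summary

# kube-state-metrics via the service proxy (service/port names are assumptions)
curl http://localhost:8001/api/v1/namespaces/kube-system/services/kube-state-metrics:http-metrics/proxy/metrics
```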
Check the documentation to find out how to install and configure the integration, learn which metrics are captured, and see how to view them.
## Table of contents
- Table of contents
- Installation
- Usage
- Running the integration against a static data set
- In cluster development
- Running OpenShift locally using CodeReady Containers
- Support
- Contributing
- License
## Installation
First, check compatibility and requirements, and then follow the installation steps. For troubleshooting help, see Not seeing data or Error messages.
## Usage
Check how to find and use your data and the description of all captured data.
## Running the integration against a static data set
See `cmd/kubernetes-static/readme.md` for more details.
## In cluster development
### Prerequisites
The in-cluster development process uses the Minikube and Skaffold tools.
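A quick way to confirm the prerequisites are available, assuming the standard CLI names:

```bash
# Verify the tools used by the in-cluster development flow are installed
minikube version
skaffold version
kubectl version --client
```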
### Configuration
- Copy the daemonset file `deploy/newrelic-infra.yaml` to `deploy/local.yaml`.
- Edit the file and set `newrelic/infrastructure-k8s-dev` as the container image:
```yaml
containers:
  - name: newrelic-infra
    image: newrelic/infrastructure-k8s-dev
    resources:
```
- Edit the file and specify `CLUSTER_NAME` and `NRIA_LICENSE_KEY` in the `env` section:
```yaml
env:
  - name: "CLUSTER_NAME"
    value: "<YOUR_CLUSTER_NAME>"
  - name: "NRIA_LICENSE_KEY"
    value: "<YOUR_LICENSE_KEY>"
```
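On the command line, the copy step and a quick sanity check of your edits might look like this (a sketch; the edits themselves are manual):

```bash
# Copy the daemonset manifest, then edit the copy as described above
cp deploy/newrelic-infra.yaml deploy/local.yaml

# After editing, confirm the image and env values took effect
grep -E -A1 'image:|CLUSTER_NAME|NRIA_LICENSE_KEY' deploy/local.yaml
```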
### Run
Run `make deploy-dev`. This will compile the integration binary with compatibility for the container OS architecture, build a temporary Docker image, and deploy it to your Minikube cluster.
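Roughly, `make deploy-dev` automates steps like the following; this is an illustrative sketch, not the actual Makefile target:

```bash
make compile-dev                                    # build the Linux binary
docker build . -t newrelic/infrastructure-k8s-dev   # build a temporary image
kubectl apply -f deploy/local.yaml                  # deploy to Minikube
```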
Then you can view your data or run the integration standalone. To do so, follow these steps:
- Run the following command to retrieve the name of a pod where the Infrastructure agent and the Kubernetes integration are installed:

  ```bash
  NR_POD_NAME=$(kubectl get pods -l name=newrelic-infra -o jsonpath='{.items[0].metadata.name}')
  ```

- Enter the pod:

  ```bash
  kubectl exec -it $NR_POD_NAME -- /bin/bash
  ```

- Execute the Kubernetes integration:

  ```bash
  /var/db/newrelic-infra/newrelic-integrations/bin/nri-kubernetes -pretty
  ```
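The steps above can also be collapsed into a single `kubectl exec` call, using the same pod label and binary path:

```bash
NR_POD_NAME=$(kubectl get pods -l name=newrelic-infra -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $NR_POD_NAME -- /var/db/newrelic-infra/newrelic-integrations/bin/nri-kubernetes -pretty
```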
### Tests
To run unit tests, use:

```bash
make test
```

To run e2e tests locally, use:

```bash
CLUSTER_NAME=<your-cluster-name> NR_LICENSE_KEY=<your-license-key> make e2e
```

This make target executes `go run e2e/cmd/e2e.go`. You can run that command with the `--help` flag to see all the available options.
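For example, to list the options of the e2e runner directly:

```bash
go run e2e/cmd/e2e.go --help
```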
## Running OpenShift locally using CodeReady Containers
For running and testing locally with OpenShift 4.x and above, CodeReady Containers can be used; instructions are provided below. For running and testing locally with OpenShift 3.x and earlier, Minishift can be used.
### Using CodeReady Containers
- Log in to the Red Hat Customer Portal with your Red Hat account.
- Follow the instructions here to download and install CRC.
- When you get to the `crc start` command, if you encounter errors related to timeouts when attempting to check DNS resolution from within the guest VM, stop the VM (`crc stop`) and then restart it with `crc start -n 8.8.8.8`.
- Make sure to follow the steps for accessing the `oc` command via the CLI, including running the `crc oc-env` command and using the `oc login ...` command to log in to the cluster (see the sketch below).
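A sketch of what that session can look like once CRC is installed (the `developer`/`developer` credentials are the CRC defaults used later in this guide):

```bash
crc start -n 8.8.8.8   # use a public DNS server if in-VM resolution times out
eval $(crc oc-env)     # put the oc binary on your PATH
oc login -u developer -p developer https://api.crc.testing:6443
```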
### Accessing and exposing the internal OpenShift image registry
The local CRC development flow depends on the OpenShift image registry being exposed outside the cluster and being accessible to a valid OpenShift user. To achieve this, perform the following steps.
- Follow these steps to add the `registry-viewer` and `registry-editor` roles to the `developer` user.
- Follow these steps to expose the registry outside the cluster using the default route (a command-level sketch follows).
### CRC configuration
Configuration is generally the same as above, with the following differences.
- The local configuration file used by `skaffold` is `local-openshift.yaml`.
- In addition to setting `CLUSTER_NAME` and `NRIA_LICENSE_KEY`, you will need to uncomment the `*_ENDPOINT_URL` variables. The defaults set at the time of this writing (2/19/2020) are correct for the default CRC environment.
- Etcd in OpenShift requires mTLS, so you have to follow our documentation here in order to set up client certificate auth. The only difference is how you obtain the client cert/key and cacert. The default CRC setup does not provide the private key of the root CA, so you can't use your own cert/key pair, since you can't sign the CSR. However, CRC does provide a pre-generated cert/key pair that "peers" can use. Here is how you can get this info:
  - Use `scp -i ~/.crc/machines/crc/id_rsa core@$(crc ip):PATH_TO_FILE` to copy the following files to your local machine (concrete commands follow this list):
    - The peer/client cert: `/etc/kubernetes/static-pod-resources/etcd-member/system:etcd-metric:etcd-0.crc.testing.crt`
    - The peer/client private key: `/etc/kubernetes/static-pod-resources/etcd-member/system:etcd-metric:etcd-0.crc.testing.key`
    - The root CA cert: `/etc/kubernetes/static-pod-resources/etcd-member/metric-ca.crt`
  - Rename `system:etcd-metric:etcd-0.crc.testing.crt` to `cert`.
  - Rename `system:etcd-metric:etcd-0.crc.testing.key` to `key`.
  - Rename `metric-ca.crt` to `cacert`.
  - Carry on with the steps in our documentation.
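Concretely, the copy-and-rename steps collapse into three `scp` commands (same key, host, and file paths as above):

```bash
scp -i ~/.crc/machines/crc/id_rsa "core@$(crc ip):/etc/kubernetes/static-pod-resources/etcd-member/system:etcd-metric:etcd-0.crc.testing.crt" cert
scp -i ~/.crc/machines/crc/id_rsa "core@$(crc ip):/etc/kubernetes/static-pod-resources/etcd-member/system:etcd-metric:etcd-0.crc.testing.key" key
scp -i ~/.crc/machines/crc/id_rsa "core@$(crc ip):/etc/kubernetes/static-pod-resources/etcd-member/metric-ca.crt" cacert
```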
### Skaffold deployment
To deploy the integration to CRC via `skaffold`, run `skaffold run -p openshift`.
### Manual deployment
The `skaffold` deployment doesn't always work reliably. If you need to deploy manually, perform the following steps.
Perform the following steps once per terminal session.
```bash
oc login -u kubeadmin -p PASSWORD_HERE https://api.crc.testing:6443
OCHOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
oc login -u developer -p developer https://api.crc.testing:6443
docker login -u developer -p $(oc whoami -t) $OCHOST
oc login -u kubeadmin -p PASSWORD_HERE https://api.crc.testing:6443
```
Perform the following steps each time you want to deploy.
```bash
make compile-dev
docker build . -t infrastructure-k8s-dev
docker tag infrastructure-k8s-dev default-route-openshift-image-registry.apps-crc.testing/default/infrastructure-k8s-dev
docker push default-route-openshift-image-registry.apps-crc.testing/default/infrastructure-k8s-dev
oc apply -f deploy/local-openshift.yaml
```
### Tips
- If at any point you need to log in to the guest VM, use the following command:

  ```bash
  ssh -i ~/.crc/machines/crc/id_rsa core@$(crc ip)
  ```
- During testing it seemed that occasionally the cluster would stop reporting data for no reason (especially after the machine wakes up from sleep mode). If this happens, use the Microsoft solution and just restart the cluster (see below).
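Restarting the cluster is just a stop followed by a start, using the same `crc` commands mentioned earlier:

```bash
crc stop
crc start
```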
## Support
New Relic hosts and moderates an online forum where customers can interact with New Relic employees as well as other customers to get help and share best practices. Like all official New Relic open source projects, there's a related Community topic in the New Relic Explorers Hub. You can find this project's topic/threads here:
https://discuss.newrelic.com/t/new-relic-kubernetes-open-source-integration/109093
## Contributing
We encourage your contributions to improve the New Relic Integration for Kubernetes! Keep in mind that when you submit your pull request, you'll need to sign the CLA via the click-through using CLA-Assistant. You only have to sign the CLA once per project. If you have any questions, or need to execute our corporate CLA (required if your contribution is on behalf of a company), please drop us an email at opensource@newrelic.com.
## License
New Relic Integration for Kubernetes is licensed under the Apache 2.0 License.