- Prometheus, deployed into the cluster as a StatefulSet with 2 replicas that use Persistent Volumes. In addition, a preconfigured set of Prometheus Alerts, Rules, and Jobs will be stored as a ConfigMap.
- Alertmanager, installed as a StatefulSet with 2 replicas.
- Grafana, installed as a StatefulSet with one replica. In addition, a preconfigured set of Dashboards generated by kubernetes-mixin will be stored as a ConfigMap.
- kube-state-metrics, installed as a Deployment with one replica.
- node-exporter, installed as a DaemonSet.
Before you begin, you'll need the following tools installed in your local development environment:
- The `kubectl` command-line interface installed on your local machine and configured to connect to your cluster. You can read more about installing and configuring `kubectl` in its official documentation. A quick connectivity check follows this list.
- The `git` version control system installed on your local machine.
- The Coreutils `base64` tool installed on your local machine. If you're using a Linux machine, this will most likely already be installed. If you're using OS X, you can use `openssl base64`, which comes installed by default.
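Before moving on, it can be worth confirming that `kubectl` can actually reach your cluster. A quick sanity check (not part of the deployment steps themselves) is:

```bash
# Print the endpoint of the cluster kubectl is configured to talk to
kubectl cluster-info

# Confirm the worker nodes are visible and Ready
kubectl get nodes
```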
To start, clone this repo on your local machine:
git clone https://github.com/frdeng/kube-monitoring.git
Next, move into the cloned repository:
cd kube-monitoring
Set the `APP_INSTANCE_NAME` and `NAMESPACE` environment variables, which will be used to configure a unique name for the stack's components and the Namespace into which the stack will be deployed:
export APP_INSTANCE_NAME=frank-cluster-monitoring
export NAMESPACE=monitoring
Use the `base64` command to base64-encode a secure Grafana password of your choosing:
export GRAFANA_GENERATED_PASSWORD="$(echo -n 'your_grafana_password' | base64)"
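If you're on OS X and relying on `openssl base64` instead of the Coreutils tool, the equivalent command should be:

```bash
# openssl base64 reads from stdin and prints the encoded value,
# just like coreutils base64
export GRAFANA_GENERATED_PASSWORD="$(echo -n 'your_grafana_password' | openssl base64)"
```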
You must have a StorageClass created for dynamic Persistent Volume provisioning; if you don't have one, you can use the nfs-provisioner. Fetch the StorageClass name:
export STORAGE_CLASS=$(kubectl get storageclasses -o jsonpath="{.items[0].metadata.name}")
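If you're not sure whether a StorageClass exists, you can list them first; the command above simply grabs the name of the first one returned:

```bash
kubectl get storageclasses
```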
If you'd like to deploy the stack into a Namespace other than `default`, run the following command to create a new Namespace:
kubectl create namespace "$NAMESPACE"
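Optionally, you can make this Namespace the default for your current kubectl context, so you can omit `--namespace` from later commands. This is a convenience, not required by the steps below:

```bash
kubectl config set-context --current --namespace="$NAMESPACE"
```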
Now, use `awk` and `envsubst` to fill in the `APP_INSTANCE_NAME`, `NAMESPACE`, `STORAGE_CLASS`, and `GRAFANA_GENERATED_PASSWORD` variables in the repo's manifest files. After substituting in the variable values, the files will be combined into a master manifest file called `${APP_INSTANCE_NAME}_manifest.yaml`:
awk 'FNR==1 {print "---"}{print}' manifest/* \
| envsubst '$APP_INSTANCE_NAME $NAMESPACE $STORAGE_CLASS \
$GRAFANA_GENERATED_PASSWORD' > "${APP_INSTANCE_NAME}_manifest.yaml"
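Before applying the manifest, a quick sanity check (not part of the original steps) is to confirm that none of the four placeholders survived substitution; `grep` should print nothing:

```bash
# Any hit here means a variable wasn't exported before running envsubst
grep -nE '\$\{?(APP_INSTANCE_NAME|NAMESPACE|STORAGE_CLASS|GRAFANA_GENERATED_PASSWORD)' \
  "${APP_INSTANCE_NAME}_manifest.yaml"
```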
Now, use `kubectl apply -f` to apply the manifest and create the stack in the Namespace you configured:
kubectl apply -f "${APP_INSTANCE_NAME}_manifest.yaml" --namespace "$NAMESPACE"
You can use `kubectl get all -n $NAMESPACE` to monitor deployment status.
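For example, to watch the Pods come up until everything reports `Running` and `Ready` (press `Ctrl-C` to stop watching):

```bash
kubectl get pods --namespace "$NAMESPACE" --watch
```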
Once the stack is up and running, you can access Prometheus by patching the Prometheus ClusterIP Service to a LoadBalancer or NodePort type, or by forwarding a local port.
If your Kubernetes cluster supports LoadBalancer Services, you can create one for Prometheus. Use `kubectl patch` to update the existing Prometheus Service in place:
kubectl patch svc "$APP_INSTANCE_NAME-prometheus" \
--namespace "$NAMESPACE" \
-p '{"spec": {"type": "LoadBalancer"}}'
Once the Load Balancer has been created and assigned an external IP address, you can fetch this external IP using the following commands:
SERVICE_IP=$(kubectl get svc $APP_INSTANCE_NAME-prometheus \
--namespace $NAMESPACE \
--output jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "http://${SERVICE_IP}/"
If your Kubernetes cluster doesn't support LoadBalancer Services, you can create a NodePort Service for Prometheus instead. Use `kubectl patch` to update the existing Prometheus Service in place:
kubectl patch svc "$APP_INSTANCE_NAME-prometheus" \
--namespace "$NAMESPACE" \
-p '{"spec": {"type": "NodePort"}}'
Once the NodePort Service has been created, you can fetch the assigned port using the following commands:
NODE_PORT=$(kubectl get svc $APP_INSTANCE_NAME-prometheus \
--namespace $NAMESPACE \
--output jsonpath='{.spec.ports[0].nodePort}')
echo "$NODE_PORT"
You can now access the Prometheus UI at `http://<external IP of a worker node>:$NODE_PORT/graph`.
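To find a worker node's external IP, you can inspect the node list; the `jsonpath` variant below assumes your nodes publish an `ExternalIP` address, which isn't true of every cluster:

```bash
# Shows each node's addresses in the EXTERNAL-IP column
kubectl get nodes --output wide

# Or pull the first node's ExternalIP directly
kubectl get nodes \
  --output jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}'
```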
Once the stack is up and running, you can access Grafana by patching the Grafana ClusterIP Service to a LoadBalancer or NodePort type, or by forwarding a local port.
If your Kubernetes cluster supports LoadBalancer Services, you can create one for Grafana. Use `kubectl patch` to update the existing Grafana Service in place:
kubectl patch svc "$APP_INSTANCE_NAME-grafana" \
--namespace "$NAMESPACE" \
-p '{"spec": {"type": "LoadBalancer"}}'
Once the Load Balancer has been created and assigned an external IP address, you can fetch this external IP using the following commands:
SERVICE_IP=$(kubectl get svc $APP_INSTANCE_NAME-grafana \
--namespace $NAMESPACE \
--output jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "http://${SERVICE_IP}/"
If your Kubernetes cluster doesn't support LoadBalancer Services, you can create a NodePort Service for Grafana instead. Use `kubectl patch` to update the existing Grafana Service in place:
kubectl patch svc "$APP_INSTANCE_NAME-grafana" \
--namespace "$NAMESPACE" \
-p '{"spec": {"type": "NodePort"}}'
Once the NodePort Service has been created, you can fetch the assigned port using the following commands:
NODE_PORT=$(kubectl get svc $APP_INSTANCE_NAME-grafana \
--namespace $NAMESPACE \
--output jsonpath='{.spec.ports[0].nodePort}')
echo "$NODE_PORT"
You can now access the Grafana UI at `http://<external IP of a worker node>:$NODE_PORT/`.
If you don't want to expose the Grafana Service externally, you can also forward local port 3000 into the cluster using `kubectl port-forward`. To learn more about forwarding ports into a Kubernetes cluster, consult Use Port Forwarding to Access Applications in a Cluster.
kubectl port-forward --namespace ${NAMESPACE} ${APP_INSTANCE_NAME}-grafana-0 3000
You can now access the Grafana UI locally at `http://localhost:3000/`.
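Before opening a browser, you can confirm Grafana is answering on the forwarded port by hitting its health endpoint. Run this in a second terminal while `kubectl port-forward` is still running:

```bash
# Grafana's health endpoint returns a small JSON document when the server is up
curl -s http://localhost:3000/api/health
```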
At this point, you should be able to access the Grafana UI. To log in, use the default username `admin` (if you haven't modified the `admin-user` parameter) and the password you configured above.
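Keep in mind that you log in with the plaintext password you chose, not the base64-encoded value stored in the Secret. If you've lost track of it, you can decode the value from your current shell session:

```bash
# Coreutils base64; on OS X you can pipe through `openssl base64 -d` instead
echo "$GRAFANA_GENERATED_PASSWORD" | base64 --decode
```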