This Helm chart deploys functionality that automatically saves core dumps from any public cloud Kubernetes service provider or Red Hat OpenShift Kubernetes Service to an S3 compatible storage service.
- The Helm CLI to run the chart
- An S3 compatible object storage solution such as IBM Cloud Object Storage
- A CRI-O compatible container runtime on the Kubernetes hosts. If your service provider uses something else, we will gladly accept patches to support it.
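You can confirm which container runtime your nodes are using with a standard kubectl query (see the CONTAINER-RUNTIME column in the output):

kubectl get nodes -o wide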
git clone https://github.com/IBM/core-dump-handler
cd core-dump-handler/charts
helm install core-dump-handler . --create-namespace --namespace observe \
--set daemonset.s3AccessKey=XXX --set daemonset.s3Secret=XXX \
--set daemonset.s3BucketName=XXX --set daemonset.s3Region=XXX
Where the `--set` options are the configuration for your S3 compatible provider. Details for IBM Cloud are available in the IBM Cloud Object Storage documentation.
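Equivalently, the same settings can be supplied with a values file instead of individual `--set` flags. The keys below mirror the flags above; the file name is arbitrary:

daemonset:
  s3AccessKey: XXX
  s3Secret: XXX
  s3BucketName: XXX
  s3Region: XXX

helm install core-dump-handler . --create-namespace --namespace observe -f my-values.yaml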
As the agent runs in privileged mode, the following command is needed on OpenShift. `-z` is the service account name and `-n` is the namespace.
oc adm policy add-scc-to-user privileged -z core-dump-admin -n observe
Some OpenShift services run on RHEL7. If that's the case, add the following option to the helm command or update the values.yaml. This will be apparent if you see errors relating to glibc in the output.log in the host directory core folder, which can be accessed from the agent pod at /core-dump-handler/core.
--set daemonset.vendor=rhel7
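The equivalent values.yaml entry, assuming the same key as the flag above, is:

daemonset:
  vendor: rhel7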
- Create a container
$ kubectl run -i -t busybox --image=busybox --restart=Never
- Login to the container
$ kubectl exec -it busybox -- /bin/sh
- Generate a core dump by sending SIGSEGV to the terminal process.
# kill -11 $$
- View the core dump zip file in the configured Cloud Object Store service instance.
- Troubleshoot by looking at the core-dump-composer logs in the observe namespace, for example as shown below.
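The exact pod names depend on your installation, so list the pods first and then read the logs of the handler pod on the relevant node:

$ kubectl get pods -n observe
$ kubectl logs -n observe <pod-name>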
This is a matrix of confirmed test targets. Please PR environments that are also known to work.

| Provider | Product | Version | Validated? | Working? | Notes |
|---|---|---|---|---|---|
| IBM | IKS | 1.19 | Yes | Yes | |
| IBM | ROKS | 4.6 | Yes | Yes | Must enable privileged policy. See the OpenShift section. |
| Microsoft | AKS | 1.19 | Yes | Yes | |
| Microsoft | ARO | ? | No | Unknown | |
| AWS | EKS | 1.21 | Yes | No | No crictl client in the default AMI means that the metadata won't be captured. |
| AWS | ROSA | ? | No | Unknown | |
| Google | GKE | 1.19 | Yes | Possible | Default HostPath fails; a local PV needs to be defined. |
Core Dumps are a critical part of observability.
As systems become more distributed core dumps offer teams a non-invasive approach to understanding why programs are malfunctioning in any environment they are deployed to.
Core Dumps are useful in a wide range of scenarios, but they are especially relevant in the following cases:
- The process exits without a useful stack trace
- The process runs out of memory
- An application doesn’t behave as expected
The traditional problems with core dumps are:
- Overhead of managing the dumps
- Dump analysis required specific tooling that wasn't readily available on the developer's machine.
- Managing access to the dumps, as they can contain sensitive information.
This chart aims to tackle the problems surrounding core dumps by leveraging common platforms (K8s, ROKS and Object Storage) in a cloud environment to do the heavy lifting.
The chart deploys two processes:
- The agent manages the updating of the `/proc/sys/kernel/*` configuration, deploys the composer service and uploads the core dump zip files created by the composer to an object storage instance.
- The composer handles the processing of a core dump, creating runtime, container coredump and image JSON documents from crictl and inserting them into a single zip file. The zip file is stored on the local file system of the node for the agent to upload.
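For background, the agent relies on the kernel's pipe-style core pattern: when `/proc/sys/kernel/core_pattern` begins with a `|`, the kernel pipes each crashing process's core dump into the named helper binary along with metadata specifiers. The line below is only a minimal sketch of that mechanism; the actual path and specifiers this chart writes may differ.

|/path/to/composer %e %p %t %s

Here %e, %p, %t and %s are the standard kernel specifiers for the executable name, PID, timestamp and signal number.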
When you install the IBM Cloud Core Dump Handler Helm chart, the following Kubernetes resources are deployed into your Kubernetes cluster:
- Namespace: A specific namespace is created to install the components into - defaults to ibm-observe
- Handler Daemonset: The daemonset deploys a pod on every worker node in your cluster. The daemonset contains configuration to enable the elevated process to define the core pattern to place the core dump into object storage as well as gather pod information if available.
- Privileged Policy: The daemonset configures the host node, so privileges are required.
- Service Account: Standard service account to run the daemonset
- Volume Claims: For copying the composer to the host and enabling access to the generated core dumps
- Cluster Role: Created with an event resource and create verb and associated with the service account
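The chart creates the role and its binding for you. Purely to illustrate the scope involved, roughly equivalent permissions could be granted imperatively as follows (the object names here are hypothetical; the service account and namespace match the OpenShift example above):

kubectl create clusterrole core-dump-events --verb=create --resource=events
kubectl create clusterrolebinding core-dump-events --clusterrole=core-dump-events --serviceaccount=observe:core-dump-admin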
To install the Helm chart in your cluster, you must have the Administrator platform role.
This chart deploys a privileged Kubernetes daemonset with the following implications:
- The automatic creation of a privileged container per Kubernetes node, capable of reading core files and querying crictl for pod info.
- The daemonset uses the hostpath feature to interact with the underlying Linux OS.
- The composer binary is deployed and run on the host server.
- Core dumps can contain sensitive runtime data and the storage bucket access must be managed accordingly.
- Object storage keys are stored as environment variables in the daemonset.
The IBM Cloud Core Dump Handler requires the following resources on each worker node to run successfully:
- CPU: 0.2 vCPU
- Memory: 128MB
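To see what the daemonset pods actually consume on a running cluster (assuming the Kubernetes metrics server is installed), you can use:

$ kubectl top pods -n observe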
- Delete the chart. Don't worry, this won't impact the data stored in object storage.
$ helm delete coredump-handler --namespace observe
- Ensure the persistent volume for `host-name` is deleted before continuing
$ kubectl get pv -n observe
- Install the chart using the same bucket name as per the first install but tell the chart not to create it.
$ helm install coredump-handler . --namespace observe
helm delete coredump-handler -n observe
The services are written in Rust using rustup.
- Build the image
docker build -t YOUR_TAG_NAME .
- Push the image to your container registry
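For example, assuming the tag used in the build step above:

docker push YOUR_TAG_NAME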
- Update the container in the `values.yaml` file to use it.
image:
repository: YOUR_TAG_NAME
or run the helm install command with the option `--set image.repository=YOUR_TAG_NAME`.
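Putting it together, a full install with a custom image might look like the following. This is a sketch based on the install command earlier in this document; substitute your own image tag and S3 values:

helm install core-dump-handler . --create-namespace --namespace observe \
--set image.repository=YOUR_TAG_NAME \
--set daemonset.s3AccessKey=XXX --set daemonset.s3Secret=XXX \
--set daemonset.s3BucketName=XXX --set daemonset.s3Region=XXX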