Kubectl-debug
Overview
kubectl-debug is an out-of-tree solution for troubleshooting running pods. It allows you to run a new container in a running pod for debugging purposes (see examples). The new container joins the pid, network, user and ipc namespaces of the target container, so you can use arbitrary troubleshooting tools without pre-installing them in your production container image.
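For example, because the debug container shares the target container's namespaces, the tools shipped in the debug image can inspect the target directly. A typical session might look like the following sketch (illustrative only; output depends on your pod and the debug image):
kubectl debug POD_NAME
# inside the debug shell, the target container's processes, sockets and
# interfaces are visible because the namespaces are shared:
ps aux                   # processes of the target container (shared pid namespace)
ss -lntp                 # listening sockets of the target pod (shared network namespace)
tcpdump -i eth0 -c 20    # capture traffic on the pod's interface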
- screenshots
- quick start
- build from source
- port-forward and agentless
- configuration
- roadmap
- authorization
- contribute
Screenshots
Quick Start
Install the kubectl debug plugin
Homebrew:
brew install aylei/tap/kubectl-debug
Download the binary:
export PLUGIN_VERSION=0.1.1
# linux x86_64
curl -Lo kubectl-debug.tar.gz https://github.com/aylei/kubectl-debug/releases/download/v${PLUGIN_VERSION}/kubectl-debug_${PLUGIN_VERSION}_linux_amd64.tar.gz
# macos
curl -Lo kubectl-debug.tar.gz https://github.com/aylei/kubectl-debug/releases/download/v${PLUGIN_VERSION}/kubectl-debug_${PLUGIN_VERSION}_darwin_amd64.tar.gz
tar -zxvf kubectl-debug.tar.gz kubectl-debug
sudo mv kubectl-debug /usr/local/bin/
For Windows users, download the latest archive from the release page, decompress it, and add the binary to your PATH.
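To verify the installation, you can list the plugins discovered by kubectl (requires kubectl 1.12 or higher):
kubectl plugin list
# the output should include something like:
# /usr/local/bin/kubectl-debug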
(Optional) Install the debug agent DaemonSet
kubectl-debug requires an agent pod to communicate with the container runtime. In agentless mode, the agent pod is created when a debug session starts and cleaned up when the session ends.
While convenient, creating a pod before every debug session can be time-consuming. You can install the debug-agent DaemonSet in advance to skip this step:
kubectl apply -f https://raw.githubusercontent.com/aylei/kubectl-debug/master/scripts/agent_daemonset.yml
# or using helm
helm install -n=debug-agent ./contrib/helm/kubectl-debug
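You can check that the agent pods are up before debugging. The namespace and DaemonSet name below are assumptions based on the defaults used elsewhere in this document; adjust them to match how you installed the DaemonSet:
# assuming the DaemonSet was installed as 'debug-agent' in the 'kube-system' namespace
kubectl -n kube-system get daemonset debug-agent
kubectl -n kube-system get pods -o wide | grep debug-agent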
Debug instructions
Try it out!
# kubectl 1.12.0 or higher
kubectl debug -h
# you can omit --agentless to reduce start time if you have installed the debug agent daemonset
# we will omit this flag in the following commands
kubectl debug POD_NAME --agentless
# if your pod is stuck in `CrashLoopBackOff` state and cannot be connected to,
# you can fork a new pod and diagnose the problem in the forked pod
kubectl debug POD_NAME --fork
# if the node ip is not directly accessible, try port-forward mode
kubectl debug POD_NAME --port-forward --daemonset-ns=kube-system --daemonset-name=debug-agent
# old versions of kubectl cannot discover plugins; you may execute the binary directly
kubectl-debug POD_NAME
- You can configure the default arguments to simplify usage; refer to Configuration
- Refer to Examples for practical debugging examples
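As noted in Configuration, the default debug image and entrypoint can also be overridden per invocation. The flag name below is an assumption, not confirmed by this document; run kubectl debug -h to check the exact flags your version supports:
# NOTE: flag name is an assumption -- confirm with `kubectl debug -h`
kubectl debug POD_NAME --image=busybox:latest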
Build from source
Clone this repo and:
# make will build plugin binary and debug-agent image
make
# install plugin
mv kubectl-debug /usr/local/bin
# build plugin only
make plugin
# build agent only
make agent-docker
Port-forward mode and agentless mode
- port-forward mode: By default, kubectl-debug connects directly to the target host. When kubectl-debug cannot reach targetHost:agentPort, you can enable port-forward mode. In port-forward mode, the local machine listens on localhost:agentPort and forwards data to/from targetPod:agentPort.
- agentless mode: By default, debug-agent needs to be pre-deployed on each node of the cluster, which consumes cluster resources all the time even though debugging a pod is a low-frequency operation. To avoid this waste of cluster resources, agentless mode was added in #31. In agentless mode, kubectl-debug first starts a debug-agent pod on the host where the target pod is located, then debug-agent starts the debug container. After the user exits, kubectl-debug deletes the debug container and finally deletes the debug-agent pod.
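The two modes are independent and can be combined. For example, if the node IPs are unreachable from your workstation and no DaemonSet is installed, you can use both flags shown in the Quick Start above:
kubectl debug POD_NAME --agentless --port-forward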
Configuration
kubectl-debug uses nicolaka/netshoot as the default image for the debug container, and bash as the default entrypoint.
You can override the default image and entrypoint with CLI flags, or even better, with the config file ~/.kube/debug-config:
# debug agent listening port(outside container)
# default to 10027
agentPort: 10027
# whether using agentless mode
# default to false
agentless: true
# namespace of debug-agent pod, used in agentless mode
# default to 'default'
agentPodNamespace: default
# prefix of debug-agent pod, used in agentless mode
# default to 'debug-agent-pod'
agentPodNamePrefix: debug-agent-pod
# image of debug-agent pod, used in agentless mode
# default to 'aylei/debug-agent:latest'
agentImage: aylei/debug-agent:latest
# daemonset name of the debug-agent, used in port-forward
# default to 'debug-agent'
debugAgentDaemonset: debug-agent
# daemonset namespace of the debug-agent, used in port-forward
# default to 'default'
debugAgentNamespace: kube-system
# whether using port-forward when connecting debug-agent
# default false
portForward: true
# image of the debug container
# default as shown
image: nicolaka/netshoot:latest
# start command of the debug container
# default ['bash']
command:
- '/bin/bash'
- '-l'
If the debug-agent is not accessible via the host port, it is recommended to set portForward: true to use port-forward mode.
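For such clusters, a minimal ~/.kube/debug-config could contain just the port-forward related keys shown above (values assume the DaemonSet from the Quick Start, installed in kube-system):
portForward: true
debugAgentDaemonset: debug-agent
debugAgentNamespace: kube-system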
PS: kubectl-debug always overrides the entrypoint of the container. This is by design, to avoid users running an unwanted service by mistake (of course, you can always do this explicitly).
Authorization
Currently, kubectl-debug reuses the permissions of the pods/exec subresource for authorization, which means it has the same privilege requirements as the kubectl exec command.
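In practice this means a user needs RBAC permission to create on the pods/exec subresource, just like for kubectl exec. A minimal Role granting this might look like the following sketch (the name and namespace are placeholders):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-debugger      # placeholder name
  namespace: default      # adjust to your namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]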
Roadmap
kubectl-debug is meant to be just a troubleshooting helper, and it will be replaced by the native kubectl debug command once this proposal is implemented and merged in a future Kubernetes release. But for now, there is still some work to do to improve kubectl-debug.
- Security: currently, kubectl-debug performs authorization on the client side; this should be moved to the server side (debug-agent)
- More unit tests
- More real-world debugging examples
- e2e tests
If you are interested in any of the above features, please file an issue to avoid potential duplication.
Contribute
Feel free to open issues and pull requests. Any feedback is highly appreciated!
Acknowledgement
This project would not be here without the effort of our contributors, thanks!