kubernetes/minikube

restart: waiting for k8s-app=kube-proxy: timed out waiting for the condition

serverok opened this issue · 41 comments

When I run minikube start, I get this error:

boby@sok-01:~$ minikube start
😄  minikube v0.35.0 on linux (amd64)
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄  Restarting existing virtualbox VM for "minikube" ...
⌛  Waiting for SSH access ...
📶  "minikube" IP address is 192.168.99.100
🐳  Configuring Docker as the container runtime ...
✨  Preparing Kubernetes environment ...
🚜  Pulling images required by Kubernetes v1.13.4 ...
🔄  Relaunching Kubernetes v1.13.4 using kubeadm ...
⌛  Waiting for pods: apiserver proxy💣  Error restarting cluster: wait: waiting for k8s-app=kube-proxy: timed out waiting for the condition

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new
boby@sok-01:~$

I am using Ubuntu 18.04

Attached minikube logs

minikube-logs.txt

I have the same issue here... the only difference is that I am running Ubuntu 18.10.

nonrootuser $ minikube start
😄  minikube v0.35.0 on linux (amd64)
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🏃  Re-using the currently running virtualbox VM for "minikube" ...
⌛  Waiting for SSH access ...
📶  "minikube" IP address is 192.168.99.100
🐳  Configuring Docker as the container runtime ...
✨  Preparing Kubernetes environment ...
🚜  Pulling images required by Kubernetes v1.13.4 ...
🔄  Relaunching Kubernetes v1.13.4 using kubeadm ...
⌛  Waiting for pods: apiserver proxy💣  Error restarting cluster: wait: waiting for k8s-app=kube-proxy: timed out waiting for the condition

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new

minikube-logs.txt

I encountered the same problem after moving to v0.35.0 to fix another problem.

😄 minikube v0.35.0 on darwin (amd64)
👍 minikube will upgrade the local cluster from Kubernetes 1.13.3 to 1.13.4
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🏃 Re-using the currently running virtualbox VM for "minikube" ...
⌛ Waiting for SSH access ...
📶 "minikube" IP address is 192.168.99.100
🐳 Configuring Docker as the container runtime ...
✨ Preparing Kubernetes environment ...
💾 Downloading kubeadm v1.13.4
💾 Downloading kubelet v1.13.4
🚜 Pulling images required by Kubernetes v1.13.4 ...
🔄 Relaunching Kubernetes v1.13.4 using kubeadm ...
⌛ Waiting for pods: apiserver

💣  Error restarting cluster: wait: waiting for component=kube-apiserver: timed out waiting for the condition

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new

I am on macOS v10.14.2

I ran minikube delete and started it again, and it worked. It did sit at "Waiting for pods" for a few minutes, but it got past it (unlike before, when I left it running for a long time without any progress). HTH. I am also running 0.35.0.

Having the same problem; just installed 0.35.0 and can't get it to work.
I tried minikube delete and start again, but no luck.

😄 minikube v0.35.0 on darwin (amd64)
🔥 Creating virtualbox VM (CPUs=4, Memory=8192MB, Disk=20000MB) ...
📶 "minikube" IP address is 192.168.99.102
🐳 Configuring Docker as the container runtime ...
✨ Preparing Kubernetes environment ...
🚜 Pulling images required by Kubernetes v1.13.4 ...
🚀 Launching Kubernetes v1.13.4 using kubeadm ...
⌛ Waiting for pods: apiserver proxy💣 Error starting cluster: wait: waiting for k8s-app=kube-proxy: timed out waiting for the condition

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new

@serverok - I've seen this too when resuming a previously set-up VM, but haven't been able to replicate it reliably. Do you mind attaching the output of the following command?

minikube ssh 'docker logs $(docker ps -a -f name=k8s_kube-proxy --format={{.ID}})'

I suspect this can be resolved by running minikube delete, but it's almost certainly going to come back at some random point in the future.

For other folks who are also running into this in a way that does not say "Error restarting", I suggest opening a new bug report as there are likely to be multiple causes. Feel free to reference #3843 though.

Thanks!

Update: Fixed command-line.

I have the same issue when running minikube with VirtualBox and the following start command:

I run the minikube command as my user (not root).
minikube start --memory=8192 --cpus=4 --kubernetes-version=v1.13.4 --extra-config=controller-manager.cluster-signing-cert-file="/var/lib/localkube/certs/ca.crt" --extra-config=controller-manager.cluster-signing-key-file="/var/lib/localkube/certs/ca.key"

I run Ubuntu 18.10 inside VMware Workstation.
Intel VT-x/EPT and IOMMU are enabled on the VM and in the BIOS of the bare-metal host.

The host has 32G of RAM; I allocated 16G of RAM and 4 vCPUs to the Ubuntu VM, and I set up minikube to start with 8G of RAM and 4 CPUs.

@ocontant - If you don't mind, please open a new issue as it may have different root causes. Many of those command-line options seem very strange, as localkube is no longer used.

I had a bug in my example command: please use:

minikube ssh 'docker logs $(docker ps -a -f name=k8s_kube-proxy --format={{.ID}})'

Thanks!

The commands are from the Istio documentation on how to install Istio in minikube. Perhaps their documentation is a bit outdated. I will try without those parameters and see.

@tstromberg Seems like removing the extra parameters fixed the issue. My instance of minikube started correctly.

Do you have a way to inform Istio upstream that their documentation is outdated and should be updated? Reference: https://istio.io/docs/setup/kubernetes/platform-setup/minikube/

@tstromberg

There is no such container; here is the result:

boby@sok-01:~$ minikube start
😄  minikube v0.35.0 on linux (amd64)
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄  Restarting existing virtualbox VM for "minikube" ...
⌛  Waiting for SSH access ...
📶  "minikube" IP address is 192.168.99.100
🐳  Configuring Docker as the container runtime ...
✨  Preparing Kubernetes environment ...
🚜  Pulling images required by Kubernetes v1.13.4 ...
🔄  Relaunching Kubernetes v1.13.4 using kubeadm ...
⌛  Waiting for pods: apiserver proxy💣  Error restarting cluster: wait: waiting for k8s-app=kube-proxy: timed out waiting for the condition

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new
boby@sok-01:~$ minikube ssh 'docker logs $(docker ps -a -f name=k8s_kube-proxy --format={{.ID}})'
"docker logs" requires exactly 1 argument.
See 'docker logs --help'.

Usage:  docker logs [OPTIONS] CONTAINER

Fetch the logs of a container
ssh: Process exited with status 1
boby@sok-01:~$ 

Attached are the results of docker ps and docker ps -a inside "minikube ssh".

docker-ps.txt
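As a side note, the '"docker logs" requires exactly 1 argument' failure above happens because the $(docker ps ...) substitution came back empty: there was no k8s_kube-proxy container at all. A guarded version of the diagnostic (a sketch to run inside minikube ssh, using the same name filter suggested above) avoids the confusing error:

```shell
# Only call `docker logs` when a kube-proxy container actually exists;
# otherwise say so instead of passing an empty argument to docker.
cid="$(docker ps -a -f name=k8s_kube-proxy --format '{{.ID}}' 2>/dev/null | head -n1)"
if [ -n "$cid" ]; then
  docker logs "$cid"
else
  echo "no k8s_kube-proxy container found"
fi
```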

The same problem with minikube on Windows Hyper-V

C:\Program Files\Docker>minikube start --vm-driver hyperv --hyperv-virtual-switch "MinukubeNet"
o minikube v0.35.0 on windows (amd64)
i Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
: Re-using the currently running hyperv VM for "minikube" ...
: Waiting for SSH access ...

  • "minikube" IP address is 10.6.172.121
  • Configuring Docker as the container runtime ...
  • Preparing Kubernetes environment ...
  • Pulling images required by Kubernetes v1.13.4 ...
    : Relaunching Kubernetes v1.13.4 using kubeadm ...
    : Waiting for pods: apiserver proxy! Error restarting cluster: wait: waiting for k8s-app=kube-proxy: timed out waiting for the condition
  • Sorry that minikube crashed. If this was unexpected, we would love to hear from you:

I have the same problem
minikube start
😄 minikube v0.35.0 on darwin (amd64)
🔥 Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶 "minikube" IP address is 192.168.99.105
🐳 Configuring Docker as the container runtime ...
✨ Preparing Kubernetes environment ...
🚜 Pulling images required by Kubernetes v1.13.4 ...
🚀 Launching Kubernetes v1.13.4 using kubeadm ...
⌛ Waiting for pods: apiserver💣 Error starting cluster: wait: waiting for component=kube-apiserver: timed out waiting for the condition

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new

minikube ssh 'docker logs $(docker ps -a -f name=k8s_kube-proxy --format={{.ID}})'
W0314 16:41:42.848332 1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
I0314 16:41:42.864211 1 server_others.go:148] Using iptables Proxier.
W0314 16:41:42.864315 1 proxier.go:319] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0314 16:41:42.864514 1 server_others.go:178] Tearing down inactive rules.
I0314 16:41:42.988593 1 server.go:483] Version: v1.13.4
I0314 16:41:43.000734 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0314 16:41:43.000933 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0314 16:41:43.002120 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0314 16:41:43.006628 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0314 16:41:43.006688 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0314 16:41:43.007273 1 config.go:102] Starting endpoints config controller
I0314 16:41:43.007294 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0314 16:41:43.007310 1 config.go:202] Starting service config controller
I0314 16:41:43.007313 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0314 16:41:43.107532 1 controller_utils.go:1034] Caches are synced for endpoints config controller
I0314 16:41:43.107603 1 controller_utils.go:1034] Caches are synced for service config controller

dmag1 commented

I do get the same issue with minikube 0.35.0 on Windows 10.

I noted that the storage-provisioner status would flip from Error to CrashLoopBackOff at the same time.

kubectl get pods -n kube-system

NAME                               READY   STATUS             RESTARTS   AGE
etcd-minikube                      1/1     Running            6          9h
kube-addon-manager-minikube        1/1     Running            5          9h
kube-apiserver-minikube            1/1     Running            8          9h
kube-controller-manager-minikube   1/1     Running            7          8h
kube-scheduler-minikube            1/1     Running            8          9h
storage-provisioner                0/1     CrashLoopBackOff   100        9h

minikubeout.txt

I have the same problem: minikube doesn't create kube-proxy.

$ docker ps -a | grep proxy
$ 

It seems to be caused by #3774

https://github.com/kubernetes/minikube/pull/3774/files#diff-fb1a98aa9faed8065953a5fbb1a92e8fR287

At this point, kube-proxy is not created by kubeadm:

$ sudo kubeadm init phase control-plane all --config /var/lib/kubeadm.yaml
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"

The "init" command executes the following phases:

preflight                  Run master pre-flight checks
kubelet-start              Writes kubelet settings and (re)starts the kubelet
certs                      Certificate generation
  /ca                        Generates the self-signed Kubernetes CA to provision identities for other Kubernetes components
  /apiserver                 Generates the certificate for serving the Kubernetes API
  /apiserver-kubelet-client  Generates the Client certificate for the API server to connect to kubelet
  /etcd-ca                   Generates the self-signed CA to provision identities for etcd
  /etcd-server               Generates the certificate for serving etcd
  /etcd-peer                 Generates the credentials for etcd nodes to communicate with each other
  /etcd-healthcheck-client   Generates the client certificate for liveness probes to healtcheck etcd
  /apiserver-etcd-client     Generates the client apiserver uses to access etcd
  /front-proxy-ca            Generates the self-signed CA to provision identities for front proxy
  /front-proxy-client        Generates the client for the front proxy
  /sa                        Generates a private key for signing service account tokens along with its public key
kubeconfig                 Generates all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
  /admin                     Generates a kubeconfig file for the admin to use and for kubeadm itself
  /kubelet                   Generates a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes
  /controller-manager        Generates a kubeconfig file for the controller manager to use
  /scheduler                 Generates a kubeconfig file for the scheduler to use
control-plane              Generates all static Pod manifest files necessary to establish the control plane
  /apiserver                 Generates the kube-apiserver static Pod manifest
  /controller-manager        Generates the kube-controller-manager static Pod manifest
  /scheduler                 Generates the kube-scheduler static Pod manifest
etcd                       Generates static Pod manifest file for local etcd.
  /local                     Generates the static Pod manifest file for a local, single-node local etcd instance.
upload-config              Uploads the kubeadm and kubelet configuration to a ConfigMap
  /kubeadm                   Uploads the kubeadm ClusterConfiguration to a ConfigMap
  /kubelet                   Uploads the kubelet component config to a ConfigMap
mark-control-plane         Mark a node as a control-plane
bootstrap-token            Generates bootstrap tokens used to join a node to a cluster
addon                      Installs required addons for passing Conformance tests
  /coredns                   Installs the CoreDNS addon to a Kubernetes cluster
  /kube-proxy                Installs the kube-proxy addon to a Kubernetes cluster

Or it may be because the kube-proxy addon phase was moved in kubeadm.
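Given the phase list above, one possible workaround (an untested sketch, assuming it is run inside minikube ssh, where /var/lib/kubeadm.yaml is the config file used earlier in this thread) would be to re-run just the addon phase so kubeadm recreates kube-proxy:

```shell
# Re-run only kubeadm's addon phase; per the phase list above this installs
# the CoreDNS and kube-proxy addons without touching the control plane.
# Guarded so it only runs where kubeadm is actually available.
if command -v kubeadm >/dev/null 2>&1; then
  sudo kubeadm init phase addon all --config /var/lib/kubeadm.yaml
  status=$?
else
  echo "kubeadm not found; run this inside 'minikube ssh'"
  status=0
fi
```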

I have the same problem on Mac after a minikube upgrade. Any idea what might be causing it?

bash-5.0$ minikube start
😄 minikube v0.35.0 on darwin (amd64)
🔥 Creating virtualbox VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
📶 "minikube" IP address is 192.168.99.100
🐳 Configuring Docker as the container runtime ...
✨ Preparing Kubernetes environment ...
🚜 Pulling images required by Kubernetes v1.13.4 ...
🚀 Launching Kubernetes v1.13.4 using kubeadm ...
⌛ Waiting for pods: apiserver💣 Error starting cluster: wait: waiting for component=kube-apiserver: timed out waiting for the condition

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new

bash-5.0$ minikube delete
🔥 Deleting "minikube" from virtualbox ...
💔 The "minikube" cluster has been deleted.

bash-5.0$ minikube start
😄 minikube v0.35.0 on darwin (amd64)
🔥 Creating virtualbox VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
📶 "minikube" IP address is 192.168.99.102
🐳 Configuring Docker as the container runtime ...
✨ Preparing Kubernetes environment ...
🚜 Pulling images required by Kubernetes v1.13.4 ...
🚀 Launching Kubernetes v1.13.4 using kubeadm ...
⌛ Waiting for pods: apiserver💣 Error starting cluster: wait: waiting for component=kube-apiserver: timed out waiting for the condition

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new

I had the same issue but minikube delete solved it.

same issue here on Windows 10 with Hyper V running 0.35.0

same issue running on win10 / kubernetes v1.13.4 / minikube v0.35.0 / hyperv (default switch)

I've hit the same issue after my first attempt failed (because I needed to set Docker proxy settings). Second and subsequent attempts would fail.

I added the minikube ip address to my NO_PROXY environment variable, and after a delete it would work.

Minikube 0.35.0, Ubuntu 18.04, amd64, virtualbox (set to 16gb/50000mb).
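For anyone else behind a proxy, the workaround above amounts to something like the following sketch (192.168.99.100 is the VM IP reported in the logs above; in general, substitute the output of minikube ip):

```shell
# Append the minikube VM IP to NO_PROXY so requests to the cluster's API
# server bypass the proxy. Preserves any existing NO_PROXY entries.
MINIKUBE_IP="192.168.99.100"   # or: MINIKUBE_IP="$(minikube ip)"
export NO_PROXY="${NO_PROXY:+${NO_PROXY},}${MINIKUBE_IP}"
echo "NO_PROXY=${NO_PROXY}"
```

After exporting, run minikube delete and minikube start again in the same shell.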

Thanks. The NO_PROXY env var works like a charm. Even a machine restart works (if the VPN is disconnected) after minikube delete, as that resets the network.

This is also the case with the latest version; details are in #3936.

The solution is running minikube delete.

I am facing the same issue.

:   Waiting for pods: apiserver proxy
!   Error restarting cluster: wait: waiting for k8s-app=kube-proxy: timed out waiting for the condition

Neither minikube delete nor setting NO_PROXY works for me.

I deleted all the VirtualBox VMs, restarted minikube, and it worked.

minikube delete and minikube start worked.
minikube delete
🔥 Deleting "minikube" from virtualbox ...
💔 The "minikube" cluster has been deleted.

minikube start
😄 minikube v1.0.0 on darwin (amd64)
🤹 Downloading Kubernetes v1.14.0 images in the background ...
🔥 Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶 "minikube" IP address is 192.168.99.101
🐳 Configuring Docker as the container runtime ...
🐳 Version of container runtime is 18.06.2-ce
⌛ Waiting for image downloads to complete ...
✨ Preparing Kubernetes environment ...
🚜 Pulling images required by Kubernetes v1.14.0 ...
🚀 Launching Kubernetes v1.14.0 using kubeadm ...
⌛ Waiting for pods: apiserver proxy etcd scheduler controller dns
🔑 Configuring cluster permissions ...
🤔 Verifying component health .....
💗 kubectl is now configured to use "minikube"
🏄 Done! Thank you for using minikube!

@serverok please close this issue if this works for you.

Got it to work after minikube delete and upgrading to v1.0.0.

Confirming minikube delete && minikube start worked on my Mac:

system_profiler SPSoftwareDataType
Software:

    System Software Overview:

      System Version: macOS 10.13.6 (17G6030)
      Kernel Version: Darwin 17.7.0
      Boot Volume: Macintosh HD
      Boot Mode: Normal
      Computer Name: xxx
      User Name: Michael Chirico (michael.chirico)
      Secure Virtual Memory: Enabled
      System Integrity Protection: Enabled
      Time since boot: 1:00

Thanks!

Please use #3850 for

This is what helped me:

MacBook-Air-Esbol:My esbolmoldrahmetov$ minikube start
😄 minikube v0.35.0 on darwin (amd64)
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🏃 Re-using the currently running virtualbox VM for "minikube" ...
⌛ Waiting for SSH access ...
📶 "minikube" IP address is 192.168.99.100
🐳 Configuring Docker as the container runtime ...
✨ Preparing Kubernetes environment ...
🚜 Pulling images required by Kubernetes v1.13.4 ...
🔄 Relaunching Kubernetes v1.13.4 using kubeadm ...
⌛ Waiting for pods: apiserver proxy ^C
MacBook-Air-Esbol:My esbolmoldrahmetov$ minikube start
😄 minikube v0.35.0 on darwin (amd64)
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🏃 Re-using the currently running virtualbox VM for "minikube" ...
⌛ Waiting for SSH access ...
📶 "minikube" IP address is 192.168.99.100
🐳 Configuring Docker as the container runtime ...
✨ Preparing Kubernetes environment ...
🚜 Pulling images required by Kubernetes v1.13.4 ...
🔄 Relaunching Kubernetes v1.13.4 using kubeadm ...
⌛ Waiting for pods: apiserver proxy💣 Error restarting cluster: wait: waiting for k8s-app=kube-proxy: timed out waiting for the condition

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new

MacBook-Air-Esbol:My esbolmoldrahmetov$ minikube delete
🔥 Deleting "minikube" from virtualbox ...
💔 The "minikube" cluster has been deleted.

sudo -i
minikube version
minikube version: v0.35.0

MacBook-Air-Esbol:~ root# rm -rf /.minikube/
MacBook-Air-Esbol:~ root# exit
logout

MacBook-Air-Esbol:My esbolmoldrahmetov$ minikube start
😄 minikube v0.35.0 on darwin (amd64)
🔥 Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶 "minikube" IP address is 192.168.99.100
🐳 Configuring Docker as the container runtime ...
✨ Preparing Kubernetes environment ...
🚜 Pulling images required by Kubernetes v1.13.4 ...
🚀 Launching Kubernetes v1.13.4 using kubeadm ...
⌛ Waiting for pods: apiserver proxy etcd scheduler controller addon-manager dns
🔑 Configuring cluster permissions ...
🤔 Verifying component health .....
💗 kubectl is now configured to use "minikube"
🏄 Done! Thank you for using minikube!

C:\WINDOWS\system32>minikube start --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch"
o minikube v1.0.0 on windows (amd64)
$ Downloading Kubernetes v1.14.0 images in the background ...
i Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
: Re-using the currently running hyperv VM for "minikube" ...
: Waiting for SSH access ...

  • "minikube" IP address is 192.168.1.35
  • Configuring Docker as the container runtime ...
  • Version of container runtime is 18.06.2-ce
    : Waiting for image downloads to complete ...
  • Preparing Kubernetes environment ...
  • Pulling images required by Kubernetes v1.14.0 ...
    : Relaunching Kubernetes v1.14.0 using kubeadm ...
    : Waiting for pods: apiserver proxy
    ! Error restarting cluster: wait: waiting for k8s-app=kube-proxy: timed out waiting for the condition
  • Sorry that minikube crashed. If this was unexpected, we would love to hear from you:

I stopped the Hyper-V Manager Service and deleted the minikube

C:\WINDOWS\system32>minikube start --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch"
o minikube v1.0.0 on windows (amd64)
$ Downloading Kubernetes v1.14.0 images in the background ...

Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...

  • "minikube" IP address is 192.168.1.36
  • Configuring Docker as the container runtime ...
  • Version of container runtime is 18.06.2-ce
    : Waiting for image downloads to complete ...
  • Preparing Kubernetes environment ...
  • Pulling images required by Kubernetes v1.14.0 ...
  • Launching Kubernetes v1.14.0 using kubeadm ...
    : Waiting for pods: apiserver proxy etcd scheduler controller dns
  • Configuring cluster permissions ...
  • Verifying component health .....
  • kubectl is now configured to use "minikube"
    = Done! Thank you for using minikube!

FYI, I couldn't get it to work with either 'none' or 'virtualbox', but after installing KVM and using that, it worked for me.

Ubuntu 18.10 - minikube v1.0.0

So! I finally ran into this bug myself in a repeatable fashion. The good news is that this is solvable! I can verify that #4014 fixes this by running within the VM:

kubeadm init phase addon all

Even on older Kubernetes releases. I'll make an effort to get this bug resolved this week in case that PR isn't merged. In the meantime, almost everyone should be able to work around this bug by running:

minikube delete

The fix was released in minikube v1.1.0. If you run into this, please upgrade and let me know if it fixes it!

Hi, I'm running minikube v1.1.0 and ran into the problem again:

Found network options:

  • NO_PROXY=192.168.99.100
  • Configuring environment for Kubernetes v1.14.2 on Docker 18.09.6
    • env NO_PROXY=192.168.99.100
  • Downloading kubeadm v1.14.2
  • Downloading kubelet v1.14.2
  • Relaunching Kubernetes v1.14.2 using kubeadm ...

X Error restarting cluster: waiting for apiserver: timed out waiting for the condition

@sanemoginr That appears to be something different. Please open a new issue, and be sure to include the output of minikube logs. Thanks!