techiescamp/vagrant-kubeadm-kubernetes

Erroring with master: error: resource name may not be empty

hethkar opened this issue · 3 comments

Below is the output at the step where it errors and exits:

```
master: serviceaccount/admin-user created
master: + cat
master: + kubectl apply -f -
master: secret/admin-user created
master: + cat
master: + kubectl apply -f -
master: clusterrolebinding.rbac.authorization.k8s.io/admin-user created
master: ++ kubectl -n kubernetes-dashboard get sa/admin-user -o 'jsonpath={.secrets[0].name}'
master: + kubectl -n kubernetes-dashboard get secret '' -o 'go-template={{.data.token | base64decode}}'
master: error: resource name may not be empty
```
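The trace shows where this comes from: `kubectl -n kubernetes-dashboard get sa/admin-user -o 'jsonpath={.secrets[0].name}'` returns an empty string, so the next command runs `kubectl get secret ''` and fails. On Kubernetes 1.24+ a ServiceAccount no longer gets an auto-created token Secret, which would explain the empty lookup. A minimal sketch of a guard for that part of the script (bash; `kubectl create token` requires kubectl/Kubernetes 1.24+):

```bash
# Look up the auto-created token Secret; on 1.24+ this is typically empty.
secret_name=$(kubectl -n kubernetes-dashboard get sa/admin-user \
  -o 'jsonpath={.secrets[0].name}' 2>/dev/null || true)

if [ -z "$secret_name" ]; then
  # No auto-created Secret: request a short-lived token instead.
  kubectl -n kubernetes-dashboard create token admin-user
else
  # Pre-1.24 behaviour: decode the token from the auto-created Secret.
  kubectl -n kubernetes-dashboard get secret "$secret_name" \
    -o 'go-template={{.data.token | base64decode}}'
fi
```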

@bibinwilson @inkkim, thoughts?

Debug logs:

```
master: serviceaccount/admin-user created
DEBUG ssh: stderr: + cat
+ kubectl apply -f -
INFO interface: detail: + cat
INFO interface: detail: master: + cat
master: + cat
INFO interface: detail: + kubectl apply -f -
INFO interface: detail: master: + kubectl apply -f -
master: + kubectl apply -f -
DEBUG ssh: Sending SSH keep-alive...
INFO interface: detail: secret/admin-user created
INFO interface: detail: master: secret/admin-user created
master: secret/admin-user created
DEBUG ssh: stderr: + cat
INFO interface: detail: + cat
INFO interface: detail: master: + cat
master: + cat
DEBUG ssh: stderr: + kubectl apply -f -
INFO interface: detail: + kubectl apply -f -
INFO interface: detail: master: + kubectl apply -f -
master: + kubectl apply -f -
INFO interface: detail: clusterrolebinding.rbac.authorization.k8s.io/admin-user created
INFO interface: detail: master: clusterrolebinding.rbac.authorization.k8s.io/admin-user created
master: clusterrolebinding.rbac.authorization.k8s.io/admin-user created
DEBUG ssh: stderr: ++ kubectl -n kubernetes-dashboard get sa/admin-user -o 'jsonpath={.secrets[0].name}'
INFO interface: detail: ++ kubectl -n kubernetes-dashboard get sa/admin-user -o 'jsonpath={.secrets[0].name}'
INFO interface: detail: master: ++ kubectl -n kubernetes-dashboard get sa/admin-user -o 'jsonpath={.secrets[0].name}'
master: ++ kubectl -n kubernetes-dashboard get sa/admin-user -o 'jsonpath={.secrets[0].name}'
DEBUG ssh: stderr: + kubectl -n kubernetes-dashboard get secret '' -o 'go-template={{.data.token | base64decode}}'
INFO interface: detail: + kubectl -n kubernetes-dashboard get secret '' -o 'go-template={{.data.token | base64decode}}'
INFO interface: detail: master: + kubectl -n kubernetes-dashboard get secret '' -o 'go-template={{.data.token | base64decode}}'
master: + kubectl -n kubernetes-dashboard get secret '' -o 'go-template={{.data.token | base64decode}}'
DEBUG ssh: stderr: error: resource name may not be empty
INFO interface: detail: error: resource name may not be empty
INFO interface: detail: master: error: resource name may not be empty
master: error: resource name may not be empty
DEBUG ssh: Exit status: 1
ERROR warden: Error occurred: The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
```
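If the script needs a long-lived token rather than the ephemeral one from `kubectl create token`, another option on 1.24+ is to create the token Secret explicitly and read it back by name; a sketch, using an arbitrary Secret name `admin-user-token`:

```bash
# Create a service-account token Secret by hand; the token controller
# populates .data.token shortly after the Secret is created.
kubectl -n kubernetes-dashboard apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: admin-user-token
  annotations:
    kubernetes.io/service-account.name: admin-user
type: kubernetes.io/service-account-token
EOF

# Read the token back by the known name instead of via .secrets[0].name.
kubectl -n kubernetes-dashboard get secret admin-user-token \
  -o 'go-template={{.data.token | base64decode}}'
```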

Hi @hethkar,

I'm trying to fix this issue. Please help verify this PR.

Thanks.

@hethkar I've moved the dashboard installation out of the script. The IP allocated for the dashboard service pod has some issues with the Calico range.
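Until that lands, the dashboard can be installed by hand after the cluster is up; a sketch, assuming the upstream recommended manifest (version pinned here only as an example) and the `admin-user` ServiceAccount from the script:

```bash
# Install the dashboard from the upstream manifest, then fetch a
# login token for the admin-user ServiceAccount (kubectl 1.24+).
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
kubectl -n kubernetes-dashboard create token admin-user
```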

Fixed