kubernetes-sigs/cluster-api-provider-nested

DNS resolution not working

yaron2 opened this issue · 5 comments

What steps did you take and what happened:

Trying to connect to pods using Kubernetes Services results in connection refused errors.
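A minimal way to reproduce this from inside the tenant cluster (a sketch; the deployment and Service names here are only illustrative, not taken from the original setup):

  # against the tenant cluster's kubeconfig
  kubectl create deployment echo --image=nginx
  kubectl expose deployment echo --port=80
  # try to reach the Service by its DNS name from a throwaway pod
  kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
    curl -sv --max-time 5 http://echo.default.svc.cluster.local
  # expected: the nginx welcome page; observed: the connection fails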

What did you expect to happen:

Kubernetes Services should be reachable through their cluster DNS names.

Anything else you would like to add:

Installed the master-branch version of VirtualCluster (VC) on an AKS cluster.

When trying to access Service DNS names, the addresses are unreachable.
Here's an example error from a kubectl apply -f command that triggers a registered validating webhook:

error when creating "nginx.yaml": Internal error occurred: failed calling webhook "vvalidator.kb.io": Post https://k8s-validation-webhooks-service.default.svc:443/validate-app-microsoft-com-v1alpha1-app?timeout=10s: context deadline exceeded
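One way to narrow this down (a sketch; the Service name and namespace are taken from the error above) is to confirm the webhook Service has endpoints and that its name resolves from inside the tenant cluster:

  # does the Service exist and have endpoints?
  kubectl get svc,endpoints k8s-validation-webhooks-service -n default
  # does its DNS name resolve from a pod in the tenant cluster?
  kubectl run dns-test --rm -it --restart=Never --image=busybox -- \
    nslookup k8s-validation-webhooks-service.default.svc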

Containers running in a tenant namespace also cannot reach DNS-based Services; they can, however, access Pod IPs directly.
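For example (a sketch with placeholder names; <tenant-namespace> and <pod-ip> stand for an actual tenant namespace and the IP of a running pod in it), the two behaviours can be compared from a debug pod:

  kubectl run net-debug --rm -it --restart=Never --image=busybox -n <tenant-namespace> -- sh
  # inside the debug pod:
  #   wget -qO- -T 5 http://<pod-ip>                  # direct Pod IP: reachable
  #   nslookup kubernetes.default.svc.cluster.local   # Service DNS: fails in this setup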

DNS works fine in the super cluster environment.
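As a sanity check (a sketch; super-cluster.kubeconfig is a placeholder for the super cluster's kubeconfig), the same lookup can be run directly against the super cluster, where it succeeds:

  kubectl --kubeconfig=super-cluster.kubeconfig run dns-test --rm -it \
    --restart=Never --image=busybox -- nslookup kubernetes.default.svc.cluster.local
  # resolves as expected, which suggests CoreDNS in the super cluster itself is healthy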

Environment:

  • cluster-api-provider-nested version: master
  • Minikube/KIND version: AKS
  • Kubernetes version (use kubectl version): 1.20.7
  • OS (e.g. from /etc/os-release): Ubuntu

/kind bug

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-triage-robot: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.