
Facing issue while creating Pods with infra network

arnabnandy1706 opened this issue · 14 comments

Hello,

When I create pods with the infra network type in Kubernetes, I get the error below in the logs.

Please help!

Warning  FailedCreatePodSandBox  1s (x3 over 3s)    kubelet, k8s-minion-2.ucsbang6.com  (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8ebcbc869429fd9f27406e3ad0511ed04f1d30ec95567b697677baa43aaec893" network for pod "http-deployment-7fdd7664c5-k2n2v": NetworkPlugin cni failed to set up pod "http-deployment-7fdd7664c5-k2n2v_default" network: Contiv:Error creating EP; Err: ovs operation failed. Error(s): [syntax error(Parsing ovsdb operation 1 of 3 failed: Type mismatch for member 'uuid-name'.)] [github.com/contiv/netplugin/drivers/ovsd.(*OvsdbDriver).performOvsdbOps ovsdbDriver.go 208]

Thanks in advance.

Arnab

Any help on this?

If you can explain your use case, especially why you're using the infra network, I may be able to help.

There are actually several ways to do this, depending on the type of networking you're using.
The best way is to expose the pod(s) as a NodePort service. This works with all networking modes. Note that you will need to create the network as a data network (not infra) and, additionally, create an EndpointGroup under that network. You can either create a default network and default EndpointGroup, or you can add labels to the pod to specify the network and the EndpointGroup. See Examples 1 and 2 here: https://github.com/contiv/netplugin/tree/master/mgmtfn/k8splugin
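A minimal sketch of that setup with netctl (the tenant, encapsulation, subnet, and names are assumptions; adjust to your environment):

# create a data (non-infra) network under the default tenant
netctl net create -t default -e vxlan -s 10.1.1.0/24 -g 10.1.1.1 default-net
# create an EndpointGroup under that network
netctl group create -t default default-net default-epg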

Hello Joji,

Thanks for the reply.

As you mentioned, I followed the steps below.

  1. Created default Network: default-net
  2. Created default EndpointGroup: default-epg
  3. Used the Network and EPG in the deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
    io.contiv.tenant: default
    io.contiv.network: default-net
    io.contiv.net-group: default-epg
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
  4. Created a NodePort Service:
apiVersion: v1
kind: Service
metadata:
  name: svc-nginx
  labels:
    app: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30003
    protocol: TCP
  selector:
    app: nginx
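For reference, this is how I apply and check the two manifests (the file names are my own):

kubectl apply -f nginx-deployment.yaml
kubectl apply -f svc-nginx.yaml
kubectl get svc svc-nginx   # should show 80:30003/TCP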

After this, I am able to access the nginx pod, but only from the same node I am curling from.
e.g.,
If I curl from the master node:
curl http://master:30003
I am able to access it.

But when I access the same URL from a different node/server, it is not reachable.
NOTE: DNS resolution is working for the nodes.
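In case it helps, this is roughly how I am testing from another node (a sketch; the hostname is from my setup):

# this is the call that fails from nodes other than the target
curl -v http://master:30003
# check whether the NodePort is held open on this node (kube-proxy usually reserves it)
ss -tlnp | grep 30003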

Where am I going wrong?

Thanks,
Arnab

Any update on this, please?

I think you need a couple more things for this to work:

  1. Set fwd-mode to routing (use netctl global ...); see the sketch after this list.
  2. Create an infra network with the special name "contivh1".
    e.g. netctl net create -n infra -s 132.1.1.0/24 -g 132.1.1.1 contivh1
    This will make all vxlan networks accessible locally from the host.
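A minimal sketch of both steps together (subnet taken from the example above; fwd-mode generally has to be set before the networks are created):

# step 1: switch the global forwarding mode to routing
netctl global set --fwd-mode routing
# step 2: create the host-access infra network; the name contivh1 is special
netctl net create -n infra -s 132.1.1.0/24 -g 132.1.1.1 contivh1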

Hello Joji,

Thanks for the reply.

The contivh1 network was already created during the installation, but the pod is still inaccessible from a different host.

Thanks,
Arnab

Below are my global values for Contiv:

{
  "Config": {
    "key": "global",
    "arpMode": "proxy",
    "fwdMode": "routing",
    "name": "global",
    "networkInfraType": "default",
    "pvtSubnet": "172.19.0.0/16",
    "vlans": "1-4094",
    "vxlans": "1-10000"
  },
  "Oper": {
    "clusterMode": "kubernetes",
    "numNetworks": 2,
    "vxlansInUse": "1-2"
  }
}
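Since fwdMode is already routing here, the remaining thing I checked is that the infra network really exists (a sketch; exact output depends on the netctl version):

netctl net ls                 # contivh1 should be listed with network type infra
ip addr                       # the host should have an address from the contivh1 subnet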

I just followed the steps you mentioned.

I created a pod using the network, but unfortunately it still cannot be accessed from the host.

Please help!

What is the error you get when you try to access? Post that information and that might give clues.
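For example, the verbose curl output plus the netplugin logs from the node you're curling from would help (a sketch; the pod name placeholder is illustrative, and a typical install runs netplugin as a DaemonSet in kube-system):

curl -v http://master:30003
kubectl -n kube-system get pods -o wide | grep netplugin
kubectl -n kube-system logs <netplugin-pod-on-that-node>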

Any update on this issue, please?

Something could be broken here, or you might have a routing config issue in your setup. I currently don't have a setup to try to figure out what's going on.
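If someone else hits this, a rough sketch of what I would compare across the nodes (interface and bridge names vary by install):

ip route                      # look for routes toward the pod and contivh1 subnets
ovs-vsctl show                # Contiv programs OVS; check the expected bridges/ports exist
iptables-save | grep 30003    # confirm the NodePort rules were actually programmed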