Multicast discovery issue in Kubernetes
jrz1977 opened this issue · 3 comments
In Kubernetes and other cloud environments, multicast is disabled and usually cannot be enabled. IPFIX template discovery will fail in this scenario. Is there an alternative method, or a plan to implement some kind of unicast discovery protocol?
In k8s you can use different load-balancing algorithms for a Service, for example with the kube-router CNI:

annotations:
  kube-router.io/service.scheduler: "sh"

All traffic from one device/source will then always go to the same pod, and you do not need template discovery at all.
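For context, a complete Service manifest using that annotation might look like the sketch below (the service name, labels, and selector are illustrative; 4739 is the IANA-assigned IPFIX port):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ipfix-collector          # illustrative name
  annotations:
    # IPVS source-hashing scheduler: each exporter IP always lands on the same pod
    kube-router.io/service.scheduler: "sh"
spec:
  selector:
    app: ipfix-collector         # illustrative selector
  ports:
    - name: ipfix
      protocol: UDP
      port: 4739                 # IANA-assigned IPFIX port
      targetPort: 4739
```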
This could be a workaround, but it is not ideal. Pods are ephemeral by nature, and when a pod is recreated, the new pod will not be able to process flow records until the device sends its templates again.
Sticky traffic is also not ideal in enterprise networks; there could be thousands of WiFi routers exporting IPFIX, for example.
Yes, after pod re-creation it takes some time (2-3 minutes) to receive the templates again; that is not a big problem. And if you have multiple devices (routers, etc.) you can still balance traffic between pods: with source hashing you only pin the traffic of each router to a particular pod. Sticky traffic is a must for consistent hashing.
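For clusters that cannot use kube-router, a similar stickiness can be approximated with the Service's built-in sessionAffinity field, which standard kube-proxy supports (a sketch, not specific to this project; ClientIP affinity pins each exporter's source IP to one pod until the timeout expires):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ipfix-collector          # illustrative name
spec:
  selector:
    app: ipfix-collector
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800      # Kubernetes default; affinity expires after this idle time
  ports:
    - name: ipfix
      protocol: UDP
      port: 4739
      targetPort: 4739
```

The same caveat from above applies: affinity only helps while the backing pod stays alive; a recreated pod still has to wait for the exporter's next template refresh.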