Canal policy for hostnetwork=true pods?
maikotz opened this issue · 10 comments
Not sure if this is the right place to post this, but I'm facing a problem with Kubernetes 1.6 when I try to access pods from the nginx-ingress-controller pod, which is running with hostNetwork=true.
Is there a way to still restrict traffic to a pod / endpoint but allow the host network pods to connect?
I tried using namespace selectors, but AFAIK hostNetwork=true means the pod uses the network namespace of the host, so this doesn't selectively restrict anything, it just makes the target inaccessible.
Does anyone have ideas / better approaches on this?
The only solution I see is maybe another nginx reverse proxy with manually added rules.
- Calico version: 1.2.1
- Orchestrator version (e.g. kubernetes, mesos, rkt): kubernetes 1.6
- Operating System and version: CoreOS
- Link to your project (optional):
You could create policy that allows traffic from your hosts to the pods you want your ingress controller to be able to connect to. You could do this either by allowing the CIDR that your hosts are in (though that might let in other host traffic that you want to block), or by creating a host endpoint for each host (you could do just the hosts running nginx-ingress-controllers) and assigning a label to those host endpoints that can then be used in your policy to allow traffic from them.
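For example, a rough sketch of the CIDR-based approach with the calicoctl v1 policy resource might look like the following (the CIDR, label, and port are placeholders based on this thread, so adjust them to your cluster):

apiVersion: v1
kind: policy
metadata:
  name: allow-hosts-to-kibana
spec:
  order: 100
  # Apply to the pods you want the ingress controller to reach (assumed label from this thread)
  selector: netname == 'elasticsearch-kibana'
  ingress:
  # Allow the host CIDR in (placeholder CIDR; this may also admit other host traffic)
  - action: allow
    protocol: tcp
    source:
      net: 10.10.23.0/24
    destination:
      ports:
      - 5601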
Thanks for your reply @tmjd
I tried labeling the existing endpoints for the pods and using a namespace/podSelector, but this didn't work either.
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: elasticsearch-kibana-policy
spec:
  podSelector:
    matchLabels:
      netname: elasticsearch-kibana
  ingress:
  - from:
    - podSelector:
        matchLabels:
          netname: elasticsearch-kibana
    ports:
    - port: 5601
      protocol: TCP
  ## enable ingress from loadbalancer
  - from:
    - namespaceSelector:
        matchLabels:
          role: loadbalancer
    - podSelector:
        matchLabels:
          name: nginx-ingress-lb
    ports:
    - port: 5601
      protocol: TCP
The nginx pods have the label name: nginx-ingress-lb.
Still, the connection is timing out from the nginx controllers.
I also tried creating the Endpoints in the namespace and using the NetworkPolicy without the namespaceSelector, but without success:
apiVersion: v1
kind: Endpoints
metadata:
  labels:
    name: nginx-ingress-lb
  name: nginx-loadbalancer
subsets:
- addresses:
  - ip: 10.10.23.10
    nodeName: 10.10.23.10
    targetRef:
      kind: Pod
      name: nginx-ingress-lb-7j5pk
      namespace: nginx-ingress
  - ip: 10.10.23.100
    nodeName: 10.10.23.100
    targetRef:
      kind: Pod
      name: nginx-ingress-lb-3dh83
      namespace: nginx-ingress
  - ip: 10.10.23.86
    nodeName: 10.10.23.86
    targetRef:
      kind: Pod
      name: nginx-ingress-lb-g2441
      namespace: nginx-ingress
  ports:
  - name: https
    port: 443
    protocol: TCP
  - name: http
    port: 80
    protocol: TCP
Any idea why this wouldn't work?
Sorry I should have provided some links to help.
See http://docs.projectcalico.org/v2.3/reference/calicoctl/resources/hostendpoint and http://docs.projectcalico.org/v2.3/getting-started/bare-metal/bare-metal#creating-host-endpoint-objects.
Keep in mind that the 2nd link is setting up host endpoints to protect the host which is not necessary for what you are trying to achieve.
Here is an example that I was using in a local test cluster. One important thing to note is that the IP addresses in expectedIPs are the 'tunl0' addresses on my hosts; this is necessary because I have IP-in-IP enabled on my IP pool.
Example Calico Host Endpoint
- apiVersion: v1
  kind: hostEndpoint
  metadata:
    name: hostIPs
    node: nonexistant
    labels:
      environment: production
      hosts: k8snodes
      calico/k8s_ns: policy-demo
  spec:
    expectedIPs: ["192.168.151.128", "192.168.154.192"]
Example Kubernetes Network Policy
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: access-nginx
  namespace: policy-demo
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels:
          hosts: k8snodes
Hopefully this helps.
I'm going to close this issue since there has been no response for a while. If this issue is still a problem @maikotz please respond with the current issue or what needs clarification and we can reopen the issue and address it.
@tmjd
the same problem is also happening to me.
I tried hostEndpoint, but it is still not working. Is there any other approach?
@haoyehaoye I see that in my example I showed a Kubernetes NetworkPolicy; sorry, I think that is incorrect. I don't think that will work when using hostEndpoints. I believe you will need to use Calico GlobalNetworkPolicy https://docs.projectcalico.org/v3.2/reference/calicoctl/resources/globalnetworkpolicy. This is because Kubernetes NetworkPolicy is namespaced and hostEndpoints do not belong to a namespace.
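A rough sketch of what that could look like, reusing the labels from my example above (this is an untested illustration, so adjust the selectors and port to your setup):

apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-hosts-to-nginx
spec:
  # Applies to the workload endpoints (pods) you want to protect
  selector: run == 'nginx'
  types:
  - Ingress
  ingress:
  # Allow traffic coming from the labeled host endpoints
  - action: Allow
    protocol: TCP
    source:
      selector: hosts == 'k8snodes'
    destination:
      ports:
      - 80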
Hi all.
I have the same case and can't find the right solution.
Right now I have 2 namespaces: ingress-nginx & prod. Inside the first one I have ingress-nginx with hostNetwork: true, which I want to allow to work correctly with a NetworkPolicy that restricts ingress to pods with the label api (located in namespace=prod) to specific sources.
I use K8s: 1.14.9 and Calico:
Client Version: v3.10.2
Cluster Version: v3.6.2
I have this yml files:
apiVersion: projectcalico.org/v3
kind: HostEndpoint
metadata:
  name: k8s-notes
  labels:
    hosts: k8snodes
spec:
  node: nonexistant
  expectedIPs:
  - {K8s Node1 Public IP}
  - {K8s Node2 Public IP}
  - {K8s Node3 Public IP}
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-cluster-ips
spec:
  selector: hosts == 'k8snodes'
  types:
  - Ingress
  - Egress
  ingress:
  - action: Allow
  egress:
  - action: Allow
Calico is new to me, so sorry if I missed something small and stupid.
Thanks for spending the time.
@HristoA Please open a new issue as this one is quite old and against a different version of Calico and the original issue was with Canal.
I'm also not clear on what the problem is you are experiencing.
Hi.
I'm writing here because my problem is the same as described by the original creator of the issue. I found a solution thanks to @fasaxc and @irobson:
If you're using IPIP, the IP you need in your policy will be the tunnel IP.
I confirm that my case is with ipip_mode=Always, and I just checked the K8s node annotation projectcalico.org/IPv4IPIPTunnelAddr=XX.XXX.XXX.XXX, where XX.XXX.XXX.XXX is the IP that I needed to add to the K8s NetworkPolicy.
I hope this will help others who find this issue in the future.
Thanks.
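For reference, a minimal sketch of how that tunnel IP can go into a Kubernetes NetworkPolicy via ipBlock (the namespace, labels, addresses and port below are placeholders, not the exact values from my cluster):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-nginx-tunnel-ips
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Tunnel (tunl0) IPs of the nodes running the hostNetwork ingress controller (placeholders)
    - ipBlock:
        cidr: 172.16.10.1/32
    - ipBlock:
        cidr: 172.16.20.1/32
    ports:
    - protocol: TCP
      port: 8080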
Hi.
I found that in some situations (like a node restart or a calico pod restart) the tunnel IP may change, so I have to modify the NetworkPolicy. Is there a way to solve this?