bridge: Allow configuring port isolation
Hello,
podman version 5.0.3
alpine 3.20
I am trying to create podman networks where containers cannot:
- communicate with each other inside their own bridge (L2)
- communicate with containers on other bridges (L3)
Setting the bridge option `isolate: true` solves the second item (L3).
For the first item, I am able to disallow L2 communication by setting the `isolated on` option (the `BR_ISOLATED` flag) on all the bridge ports manually, e.g.:

```shell
bridge link set dev veth0 isolated on
bridge link set dev veth1 isolated on
```
Is there a way to do this automatically, with netavark, as the bridge ports are created? Alternatively, if I am approaching this issue from the wrong end, is there a better way to achieve what I am looking for?
Also, the bridge driver source code references a possible `strict` value for the `isolate` option, however I am unable to find any documentation as to what this does, exactly. EDIT: It appears to also restrict access to bridges without any isolation set.
I think docker calls this inter-container connectivity (icc), so this is definitely something we want to support in order to allow better compatibility.
There isn't really anything pluggable which would allow you to set this automatically right now, so this would need to be implemented first. My thinking is to add a new `icc` option and then set the proper netlink attribute to block connectivity between containers. PRs welcome.
cc @mheon
Thanks for the response!
Is there a way to hook into the network creation lifecycle with a shell script or something similar?
I quickly tried looking at the plugin API, and tried putting a tiny shell script in `/usr/local/lib/netavark/test.sh` that just dumps stdin to a file, but I'm not seeing this script being run after setting `netavark_plugin_dirs = ["/usr/local/lib/netavark"]` in `/etc/containers/containers.conf`.
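For reference, the stdin-dumping script I tried looks roughly like this (a minimal sketch; the paths are just the ones from my setup, and it makes no attempt to implement the actual netavark plugin protocol — it only records whatever, if anything, arrives on stdin):

```shell
#!/bin/sh
# /usr/local/lib/netavark/test.sh (must be executable: chmod +x)
# Append everything received on stdin to a log file for inspection.
exec cat >> /tmp/netavark-plugin-stdin.log
```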
netavark plugins are specified in https://github.com/containers/netavark/blob/main/plugin-API.md but this isn't really what you want. You would basically need to reimplement the entire bridge code and then add your extra change.
There are OCI hooks, which would be closer to what you are looking for, I think, but there you have no relation between interface <-> container, so you do not know which veth interface to pick.
That is why I said it is a new feature that has to be implemented first.
Concur with @Luap99 - seems like an eminently reasonable feature request, and not hard to implement, but will have to be in the existing bridge code.
@eirikrye Were you able to get behavior equivalent to `com.docker.network.bridge.enable_icc=false` using either a netavark plugin or an OCI hook? If yes, could you please share your code?
Adding the following to my unit files does the job for me:

```shell
ExecStartPost=bash -c 'bridge link show master bridgename | cut -d: -f2 | cut -d@ -f1 | xargs -t -i bridge link set dev {} isolated on'
```
You can set the bridge name using the `--interface-name` option of `podman network create`.
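The extraction part of that `ExecStartPost` pipeline can be sanity-checked without root by feeding it a sample line in the format `bridge link show` prints (the sample line below is an assumption about that format, not captured output):

```shell
# Simulated output of `bridge link show master bridgename` (format assumed):
sample='5: veth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridgename state forwarding'

# Same extraction as the unit file: take field 2 of the colon-split line,
# then strip the '@peer' suffix, leaving the port's interface name.
echo "$sample" | cut -d: -f2 | cut -d@ -f1
```

This prints `veth0` (with a leading space, which `xargs` trims before running `bridge link set dev veth0 isolated on`).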