aws/amazon-vpc-cni-k8s

Enhance Multus Thick Client to make vpc-cni mandatory instead of keeping auto

raghs-aws opened this issue · 5 comments

What would you like to be added:

The thick plugin doesn't set the master CNI or the default CNI, and keeps "multusConfigFile": "auto". This causes Multus to pick a CNI other than vpc-cni in some cases, for example when another CNI such as istio-cni is installed.

The recommendation is to not generate the multus-conf file dynamically, but to always set
"clusterNetwork": "/etc/cni/net.d/10-aws.conflist",

Reference: https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/configuration.md
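For illustration, a statically shipped Multus config along these lines would pin the delegate to the VPC CNI instead of relying on auto-generation. This is only a sketch: apart from the clusterNetwork value, the fields are placeholders taken from the generated config shown later in this issue, and the exact path prefix depends on how the CNI conf directory is mounted in the Multus pod.

    {
        "cniVersion": "0.3.1",
        "name": "multus-cni-network",
        "type": "multus-shim",
        "capabilities": { "portMappings": true },
        "clusterNetwork": "/etc/cni/net.d/10-aws.conflist",
        "logLevel": "verbose",
        "logFile": "/var/log/multus.log"
    }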

Why is this needed:
Since this release is only for AWS VPC CNI and Multus with vpc-cni, Multus should be released with the vpc-cni configuration as the master/default CNI.

In the Multus thin plugin, the release sets "--multus-master-cni-file-name=10-aws.conflist" along with --multus-conf-file=auto (reference: https://github.com/aws/amazon-vpc-cni-k8s/blob/master/config/multus/v3.9.2-eksbuild.1/aws-k8s-multus.yaml).
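For comparison, the relevant thin-plugin container args look roughly like this (a sketch built only from the two flags quoted above, not a copy of the manifest; the surrounding DaemonSet fields and any other args are omitted):

    args:
      - "--multus-conf-file=auto"
      - "--multus-master-cni-file-name=10-aws.conflist"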

Thanks @raghs-aws, we are open to taking a PR to update the manifest. In general, though, customers need to be conscious of components that install unwanted CNIs, as those components can cause other issues.

I was able to spare some time to look into this; below are the tests I ran to mimic the resolution.

Test 1:

  1. On a worker node, mimicked two CNIs by running cp /etc/cni/net.d/10-aws.conflist /etc/cni/net.d/11-bws.conflist
  2. Edited the Multus configuration in the ConfigMap as below, setting multusMasterCNI to 10-aws.conflist:
daemon-config.json: |
    {
        "chrootDir": "/hostroot",
        "confDir": "/host/etc/cni/net.d",
        "logFile": "/var/log/multus.log",
        "logLevel": "verbose",
        "socketDir": "/host/run/multus/",
        "cniVersion": "0.3.1",
        "cniConfigDir": "/host/etc/cni/net.d",
        "multusConfigFile": "auto",
        "multusAutoconfigDir": "/host/etc/cni/net.d",
        "multusMasterCNI": "10-aws.conflist"
    }
  3. Restarted the Multus pod on that worker and verified the generated Multus conf; it picked the configured VPC CNI for clusterNetwork and also as the master CNI (the ConfigMap edit and restart commands are sketched after the output below).
cat /etc/cni/net.d/00-multus.conf
{"capabilities":{"portMappings":true},"cniVersion":"0.3.1","logFile":"/var/log/multus.log","logLevel":"verbose","name":"multus-cni-network","clusterNetwork":"/host/etc/cni/net.d/10-aws.conflist","type":"multus-shim","socketDir":"/host/run/multus/","multusMasterCNI":"10-aws.conflist"

Test 2:

  1. Edited the Multus configuration in the ConfigMap as below, setting multusMasterCNI to 11-bws.conflist:
daemon-config.json: |
    {
        "chrootDir": "/hostroot",
        "confDir": "/host/etc/cni/net.d",
        "logFile": "/var/log/multus.log",
        "logLevel": "verbose",
        "socketDir": "/host/run/multus/",
        "cniVersion": "0.3.1",
        "cniConfigDir": "/host/etc/cni/net.d",
        "multusConfigFile": "auto",
        "multusAutoconfigDir": "/host/etc/cni/net.d",
        "multusMasterCNI": "11-bws.conflist"
    }
  2. Restarted the Multus pod on that worker and verified the generated Multus conf; it picked the other configured CNI (11-bws.conflist) for clusterNetwork and also as the master CNI.
cat /etc/cni/net.d/00-multus.conf
{"capabilities":{"portMappings":true},"cniVersion":"0.3.1","logFile":"/var/log/multus.log","logLevel":"verbose","name":"multus-cni-network","clusterNetwork":"/host/etc/cni/net.d/11-bws.conflist","type":"multus-shim","socketDir":"/host/run/multus/","multusMasterCNI":"11-bws.conflist"

Multus functionality was also validated after Test 1 using the solution at https://github.com/aws-samples/eks-automated-ipmgmt-multus-pods

Thanks for testing @raghs-aws! Feel free to open a PR to update the manifest and we can merge it

Closing now that #2828 has merged
