This project allows allocating MAC addresses from a pool to secondary interfaces, following the Network Plumbing Working Group de-facto standard.
For a test environment you can use the development environment described below.

For a production deployment:
Install any supported Network Plumbing Working Group de-facto standard implementation, for example Multus. To deploy Multus on a Kubernetes cluster with the Flannel CNI:
```shell
kubectl apply -f https://raw.githubusercontent.com/K8sNetworkPlumbingWG/kubemacpool/master/hack/multus/kubernetes-multus.yaml
kubectl apply -f https://raw.githubusercontent.com/K8sNetworkPlumbingWG/kubemacpool/master/hack/multus/multus.yaml
```
CNI plugins must also be installed in the cluster. You can use the following command to deploy them:

```shell
kubectl apply -f https://raw.githubusercontent.com/K8sNetworkPlumbingWG/kubemacpool/master/hack/cni-plugins/cni-plugins.yaml
```
Download the project YAML and apply it.

Note: the default MAC range is 02:00:00:00:00:00 to FD:FF:FF:FF:FF:FF; it can be edited in the ConfigMap.

```shell
wget https://raw.githubusercontent.com/K8sNetworkPlumbingWG/kubemacpool/master/config/release/kubemacpool.yaml
kubectl apply -f ./kubemacpool.yaml
```
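As a sanity check on the default range above, a MAC address can be treated as a 48-bit integer. The `mac_to_int` helper below is defined here purely for illustration and is not part of the project:

```shell
# Treat a MAC address as a 48-bit integer (illustrative helper only,
# not part of kubemacpool).
mac_to_int() {
  printf '%d' "0x$(echo "$1" | tr -d ':')"
}

start=$(mac_to_int "02:00:00:00:00:00")
end=$(mac_to_int "FD:FF:FF:FF:FF:FF")

# Number of addresses in the default pool.
echo $((end - start + 1))
```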
Configmap:

```
[root]# kubectl -n kubemacpool-system describe configmaps
Name:         kubemacpool-mac-range-config
Namespace:    kubemacpool-system

Data
====
RANGE_END:
----
FD:FF:FF:FF:FF:FE

RANGE_START:
----
02:00:00:00:00:11
```
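One way to change the range is to patch this ConfigMap. The sketch below uses example values (not the defaults); depending on the version, the manager pod may need to be restarted to pick up the change:

```shell
# Example range values — adjust to your environment.
NEW_START="02:00:00:00:00:00"
NEW_END="02:FF:FF:FF:FF:FF"

# Build the merge patch for the range ConfigMap.
PATCH="{\"data\":{\"RANGE_START\":\"${NEW_START}\",\"RANGE_END\":\"${NEW_END}\"}}"
echo "${PATCH}"

# Apply it against a running cluster (commented out here):
# kubectl -n kubemacpool-system patch configmap kubemacpool-mac-range-config -p "${PATCH}"
```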
Pods:

```
kubectl -n kubemacpool-system get po
NAME                                                  READY   STATUS    RESTARTS   AGE
kubemacpool-mac-controller-manager-6894f7785d-t6hf4   1/1     Running   0          107s
```
Create a network-attachment-definition:
The `NetworkAttachmentDefinition` is used to set up the network attachment, i.e. a secondary interface for the pod. It follows the Kubernetes Network Custom Resource Definition De-facto Standard, which provides a standardized method to specify the configurations for additional network interfaces. This standard is put forward by the Kubernetes Network Plumbing Working Group.
```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ovs-conf
  annotations:
    k8s.v1.cni.cncf.io/resourceName: ovs-cni.network.kubevirt.io/br1
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "ovs-conf",
      "plugins": [
        {
          "type": "ovs",
          "bridge": "br1",
          "vlan": 100
        },
        {
          "type": "tuning"
        }
      ]
    }'
```
This example uses ovs-cni.

Note: the tuning plugin changes the MAC address after the main plugin has executed, so network connectivity will not work if the main plugin configures a MAC filter on the interface.

Note: the project supports only the JSON configuration format for the `k8s.v1.cni.cncf.io/networks` annotation; a plain network list will be ignored.
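To illustrate the note above: the annotation value must be a JSON list, while a plain comma-separated list of network names is ignored. A rough shape check (illustrative shell only, not project code):

```shell
# The JSON form that kubemacpool processes:
networks_json='[{ "name": "ovs-conf" }]'
# The plain network-list form that kubemacpool ignores:
networks_list='ovs-conf,other-conf'

classify() {
  case "$1" in
    \[*\]) echo "json" ;;   # JSON list: a MAC will be allocated
    *)     echo "list" ;;   # plain list: ignored by kubemacpool
  esac
}

classify "${networks_json}"
classify "${networks_list}"
```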
Create the pod definition:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: samplepod
  annotations:
    k8s.v1.cni.cncf.io/networks: '[{ "name": "ovs-conf"}]'
spec:
  containers:
  - name: samplepod
    image: quay.io/schseba/kubemacpool-test:latest
    imagePullPolicy: "IfNotPresent"
```
Check pod deployment:
```
Name:               samplepod
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node01/192.168.66.101
Start Time:         Thu, 14 Feb 2019 13:36:23 +0200
Labels:             <none>
Annotations:        k8s.v1.cni.cncf.io/networks: [{"name":"ovs-conf","namespace":"default","mac":"02:00:00:00:00:02"}]
                    k8s.v1.cni.cncf.io/networks-status:
                      [{
                          "name": "flannel.1",
                          "ips": [
                              "10.244.0.6"
                          ],
                          "default": true,
                          "dns": {}
                      },{
                          "name": "ovs-conf",
                          "interface": "net1",
                          "mac": "02:00:00:00:00:02",
                          "dns": {}
                      }]
                    kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"k8s.v1.cni.cncf.io/networks":"[{ \"name\": \"ovs-conf\"}]"},"name":"samplepod"...
....
```
Note that the networks annotation now contains a `mac` field:

```
k8s.v1.cni.cncf.io/networks: [{"name":"ovs-conf","namespace":"default","mac":"02:00:00:00:00:02"}]
```
A MAC address can also be set manually by the user via the `mac` field in the annotation. If the MAC is already in use, the system will reject it, even if the MAC address is outside of the range.
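For example, a pod manifest requesting a specific MAC might look like the following sketch (the pod name and MAC value here are made up for illustration):

```shell
# Emit the manifest; pipe it to `kubectl apply -f -` to create the pod.
MANIFEST=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: samplepod-static-mac
  annotations:
    k8s.v1.cni.cncf.io/networks: '[{ "name": "ovs-conf", "mac": "02:00:00:00:00:AA" }]'
spec:
  containers:
  - name: samplepod
    image: quay.io/schseba/kubemacpool-test:latest
EOF
)
echo "${MANIFEST}"
```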
This project uses kubevirtci to deploy a local cluster. Refer to the kubernetes 1.13.3 with multus document.

Use the following commands to control it.

Note: the default provider is one node (master + worker) of Kubernetes 1.13.3 with the Multus CNI plugin.
```shell
# Deploy local Kubernetes cluster
export MACPOOL_PROVIDER=k8s-multus-1.13.3 # choose this provider
export MACPOOL_NUM_NODES=3 # master + two nodes
make cluster-up

# SSH to node01 and open an interactive shell
./cluster/cli.sh ssh node01

# SSH to node01 and run a command
./cluster/cli.sh ssh node01 echo 'Hello World'

# Communicate with the Kubernetes cluster using kubectl
./cluster/kubectl.sh

# Build the project, build images, push them to the cluster's registry and install them
make cluster-sync

# Destroy the cluster
make cluster-down
```