This project is a playground to experiment with iSCSI and Kubernetes. It provides a virtual environment based on Vagrant (and VirtualBox) that creates two VMs, one for storage and one for a single-node Kubernetes instance, and shows how to set up iSCSI block storage and how to consume it from Kubernetes.
Bring up the storage VM:
vagrant up storage
Provision storage:
vagrant ssh storage
lsblk # shows sdb
sudo fdisk /dev/sdb # keystrokes: n, p, Enter, Enter, t, 8e, p, w (new primary partition over the whole disk, type 8e = Linux LVM, print, write)
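The same partitioning can also be done non-interactively; a sketch using parted (assuming the parted package is available in the VM):
sudo parted -s /dev/sdb mklabel msdos mkpart primary 0% 100% set 1 lvm on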
sudo pvcreate /dev/sdb1
sudo vgcreate vg_iscsi /dev/sdb1
sudo lvcreate -L 1G vg_iscsi
List the logical volumes:
sudo lvs
vagrant@storage:~$ sudo lvs
  LV    VG       Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol0 vg_iscsi -wi-a----- 1.00g
Thus the new logical volume is present as lvol0 and it is mapped into the device tree as /dev/mapper/vg_iscsi-lvol0.
Define the target name for iSCSI:
sudo tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2019-12.foo.tld:storage.k8s
View the current configuration:
sudo tgtadm --lld iscsi --op show --mode target
Result:
Target 1: iqn.2019-12.foo.tld:storage.k8s
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
    Account information:
    ACL information:
Add logical unit to the target:
sudo tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/mapper/vg_iscsi-lvol0
View the current configuration (again):
sudo tgtadm --lld iscsi --op show --mode target
Result:
Target 1: iqn.2019-12.foo.tld:storage.k8s
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 1074 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: rdwr
            Backing store path: /dev/mapper/vg_iscsi-lvol0
            Backing store flags:
    Account information:
    ACL information:
To enable the target to accept any initiator (client):
sudo tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
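Instead of ALL, access could be restricted to a specific initiator; a sketch that only admits the kube VM's address:
sudo tgtadm --lld iscsi --op bind --mode target --tid 1 -I 192.168.202.202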
Verify that the target listens on TCP port 3260:
netstat -tulpn | grep 3260
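Note that tgtadm changes live only in the daemon's memory and are lost on reboot. With Debian's tgt package they can be persisted in a config file; a sketch (the file name is arbitrary, and the file is re-read when the tgt service restarts):
# /etc/tgt/conf.d/k8s.conf
<target iqn.2019-12.foo.tld:storage.k8s>
    backing-store /dev/mapper/vg_iscsi-lvol0
</target>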
Follow this guide to consume an iSCSI LUN on Debian. The following steps are all executed in the VM kube.
Install open-iscsi:
sudo apt-get install open-iscsi
Edit the file /etc/iscsi/iscsid.conf to change the startup type to automatic:
node.startup = automatic
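The same change can be scripted, assuming the Debian default of node.startup = manual:
sudo sed -i 's/^node.startup = manual$/node.startup = automatic/' /etc/iscsi/iscsid.conf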
Restart the service:
sudo systemctl restart open-iscsi
Discover the target that exposes the LUN:
sudo iscsiadm --mode discovery --type sendtargets --portal 192.168.202.201
Example output:
192.168.202.201:3260,1 iqn.2019-12.foo.tld:storage.k8s
Now, in mode node, we need to log in to consume the device (note that the login is also required when no authentication is configured):
sudo iscsiadm --mode node --targetname iqn.2019-12.foo.tld:storage.k8s \
--portal 192.168.202.201:3260 --login
Example output:
Logging in to [iface: default, target: iqn.2019-12.foo.tld:storage.k8s, portal: 192.168.202.201,3260] (multiple)
Login to [iface: default, target: iqn.2019-12.foo.tld:storage.k8s, portal: 192.168.202.201,3260] successful.
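The active session can then be listed:
sudo iscsiadm --mode session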
The kernel logs some messages about the new block device like this (see /var/log/syslog):
scsi 2:0:0:0: Attached scsi generic sg1 type 12
scsi 2:0:0:1: Direct-Access     IET      VIRTUAL-DISK     0001 PQ: 0 ANSI: 5
sd 2:0:0:1: Attached scsi generic sg2 type 0
sd 2:0:0:1: Power-on or device reset occurred
sd 2:0:0:1: [sdb] 2097152 512-byte logical blocks: (1.07 GB/1.00 GiB)
sd 2:0:0:1: [sdb] Write Protect is off
sd 2:0:0:1: [sdb] Mode Sense: 69 00 10 08
sd 2:0:0:1: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA
sd 2:0:0:1: [sdb] Attached SCSI disk
iscsid: Connection1:0 to [target: iqn.2019-12.foo.tld:storage.k8s, portal: 192.168.202.201,3260] through [iface: default] is operational now
The new block device is also visible via lsblk and it can be used now:
sudo mkfs.ext4 /dev/sdb
sudo mount /dev/sdb /mnt
cd /mnt
echo hallo | sudo tee -a abc
cat abc
hallo
cd /
sudo umount /mnt
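Note that the name /dev/sdb is not guaranteed to be stable across reboots or rescans; the same disk is also reachable via a persistent path that encodes the portal and the IQN:
ls -l /dev/disk/by-path/ | grep iscsi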
To remove the LUN from the host, use the --logout operation:
sudo iscsiadm --mode node --targetname iqn.2019-12.foo.tld:storage.k8s \
--portal 192.168.202.201:3260 --logout
Example output:
Logging out of session [sid: 1, target: iqn.2019-12.foo.tld:storage.k8s, portal: 192.168.202.201,3260]
Logout of [sid: 1, target: iqn.2019-12.foo.tld:storage.k8s, portal: 192.168.202.201,3260] successful.
The device will no longer show up in lsblk.
Bring up the Kubernetes machine:
vagrant up kube
Install and initialize Kubernetes (single node; the final taint command allows regular pods to be scheduled on the master node):
sudo kubeadm config images pull
sudo kubeadm init --apiserver-advertise-address=192.168.202.202
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl get pods -n kube-system -l name=weave-net
kubectl taint nodes --all node-role.kubernetes.io/master-
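Once the Weave Net pod is running, the node should report Ready:
kubectl get nodes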
The following steps are based on the Kubernetes example for iSCSI Storage.
Install the packages and edit /etc/iscsi/iscsid.conf to change the startup type to automatic (same procedure as above):
sudo apt-get install open-iscsi
sudo vi /etc/iscsi/iscsid.conf
sudo systemctl restart open-iscsi
Create a pod with an iSCSI volume mount, iscsi.yaml:
---
apiVersion: v1
kind: Pod
metadata:
  name: iscsipd
spec:
  containers:
  - name: iscsipd-rw
    #image: kubernetes/pause
    image: busybox
    command: ["/bin/sh", "-ec", "sleep 3600"]
    volumeMounts:
    - mountPath: "/mnt"
      name: iscsipd-rw
  volumes:
  - name: iscsipd-rw
    iscsi:
      targetPortal: 192.168.202.201:3260
      iqn: iqn.2019-12.foo.tld:storage.k8s
      lun: 1
      fsType: ext4
      readOnly: false
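This playground target accepts any initiator, so no credentials are configured. If the target required CHAP authentication, the iscsi volume definition would additionally reference a Secret; a sketch (the Secret name chap-secret is hypothetical):
  volumes:
  - name: iscsipd-rw
    iscsi:
      targetPortal: 192.168.202.201:3260
      iqn: iqn.2019-12.foo.tld:storage.k8s
      lun: 1
      fsType: ext4
      readOnly: false
      chapAuthSession: true
      secretRef:
        name: chap-secret # hypothetical Secret holding the CHAP credentials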
In the storage VM, dump the network traffic:
sudo tcpdump -vv -n -i eth1 tcp port 3260
Create:
kubectl create -f iscsi.yaml
Verify (in the container, check the /mnt directory):
kubectl describe pods
kubectl exec -it iscsipd -- /bin/sh
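Inside the container the LUN should be mounted at /mnt; because the LUN still carries the ext4 filesystem created earlier, the test file should still be readable:
df -h /mnt
cat /mnt/abc # should print "hallo" from the earlier manual test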