This project is a playground to experiment with Gluster and Kubernetes. It provides a virtual environment using Vagrant (and VirtualBox) that creates three storage VMs (forming the storage cluster) and a single Kubernetes instance, and it shows how to set up Gluster storage and how to consume it from Kubernetes.
Bring up the three storage VMs:
vagrant up stor1 stor2 stor3
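Once provisioning finishes, you can confirm the machines are running (an optional check, not part of the original flow):
vagrant status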
Add the other nodes to the Gluster trusted storage pool:
vagrant ssh stor1
sudo gluster peer probe stor2
sudo gluster peer probe stor3
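To confirm that both peers joined the trusted pool, check the peer status on stor1 (optional verification step):
sudo gluster peer status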
Set up the Gluster volume with 3 replicas:
sudo gluster volume create gv_first replica 3 \
  stor1:/data/glusterfs/lv_first/brick \
  stor2:/data/glusterfs/lv_first/brick \
  stor3:/data/glusterfs/lv_first/brick
If successful, Gluster prints:
volume create: gv_first: success: please start the volume to access data
Display the volume info:
sudo gluster volume info
Volume Name: gv_first
Type: Replicate
Volume ID: f624417f-0cdb-4783-842a-f5a69f0f30b9
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: stor1:/data/glusterfs/lv_first/brick
Brick2: stor2:/data/glusterfs/lv_first/brick
Brick3: stor3:/data/glusterfs/lv_first/brick
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
Start the volume:
sudo gluster volume start gv_first
If successful, Gluster prints:
volume start: gv_first: success
Show the volume info again:
sudo gluster volume info
Volume Name: gv_first
Type: Replicate
Volume ID: f624417f-0cdb-4783-842a-f5a69f0f30b9
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: stor1:/data/glusterfs/lv_first/brick
Brick2: stor2:/data/glusterfs/lv_first/brick
Brick3: stor3:/data/glusterfs/lv_first/brick
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
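Optionally, check the per-brick process status and ports now that the volume is started (gv_first is the volume created above):
sudo gluster volume status gv_first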
Mount the volume (the server specified in the mount command is only used to fetch the cluster configuration; subsequent communication from the client goes across the whole cluster, with failover):
sudo mount -t glusterfs stor1:/gv_first /mnt
Now play around: add and remove files under the mount point and watch them appear and disappear on the bricks of the other nodes.
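As a quick test (a minimal sketch; the file name hello.txt is just an illustration), write a file through the mount on stor1, then ssh into stor2 from the host and look for it directly in the brick directory:
echo hello | sudo tee /mnt/hello.txt      # on stor1, through the Gluster mount
vagrant ssh stor2                         # from the host
ls /data/glusterfs/lv_first/brick         # on stor2, inside the brick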
Switch to the Kubernetes VM:
vagrant ssh kube
Pull the images and bootstrap the master for use with Weave Net:
sudo kubeadm config images pull
sudo kubeadm init --apiserver-advertise-address=192.168.202.245
Copy credentials to the regular user account:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install the Weave Net pod network add-on:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Remove the master taint so that regular pods can be scheduled on the single node:
kubectl taint nodes --all node-role.kubernetes.io/master-
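At this point the node should report Ready (optional check):
kubectl get nodes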
Following this guide, the prerequisite for mounting Gluster volumes is to have the glusterfs-client package installed on the Kubernetes nodes (already done via the Ansible playbooks).
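If you want to double-check that the client bits are present on the kube VM, the glusterfs binary reports its version (optional sanity check):
glusterfs --version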
Create the file glusterfs-endpoints.json in the VM kube:
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [
        {
          "ip": "192.168.202.201"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    },
    {
      "addresses": [
        {
          "ip": "192.168.202.202"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    },
    {
      "addresses": [
        {
          "ip": "192.168.202.203"
        }
      ],
      "ports": [
        {
          "port": 1
        }
      ]
    }
  ]
}
Apply the endpoints definition to Kubernetes:
kubectl create -f glusterfs-endpoints.json
Verify the endpoints:
kubectl get endpoints
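For more detail on the addresses behind glusterfs-cluster you can also describe the object (the port value 1 is only a placeholder required by the Endpoints schema; the Gluster mount does not use it):
kubectl describe endpoints glusterfs-cluster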
Create the file glusterfs-service.json in the VM kube:
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [
      {"port": 1}
    ]
  }
}
Create a service for these endpoints, so that they will persist:
kubectl create -f glusterfs-service.json
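Optionally confirm that the service was created:
kubectl get service glusterfs-cluster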
Create the file demo-pod.yaml in the VM kube to demonstrate how to consume a Gluster volume in a pod:
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: demo
      image: nginx
      volumeMounts:
        - mountPath: "/mnt/glusterfs"
          name: glusterfsvol
  volumes:
    - name: glusterfsvol
      glusterfs:
        endpoints: glusterfs-cluster
        path: gv_first
        readOnly: true
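Create the pod from this definition and wait until it reports Running before verifying the mount:
kubectl create -f demo-pod.yaml
kubectl get pod demo-pod --watch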
Verify the volume is mounted:
kubectl exec demo-pod -- mount | grep gluster
192.168.202.201:gv_first on /mnt/glusterfs type fuse.glusterfs (ro,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
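Because the volume is mounted read-only, the pod can list (but not modify) whatever was written from the storage side; for example, files created earlier through the stor1 mount show up here as well (ls is just an illustrative check):
kubectl exec demo-pod -- ls /mnt/glusterfs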