This repository is a simple demonstration of virtualized environments (vagrant + virtualbox), tailored to exhibit the kubernetes-based rancher-server and rancher-agent ecosystem. Specifically, a `Vagrantfile` automates the processes contained within the provided install scripts.
In this setup, the rancher-server is launched on a CentOS 7 box, while the rancher-agent is launched on an Ubuntu Bionic64 (18.04) box. If the rancher-server needs to be debian-based, or corresponding rancher-agents need to be rhel-based, then additional utility scripts, not included in this repository, will need to be created.
Regardless of implementation, when vagrant completes provisioning, a rancher-server is available via https://localhost:7895, and can be used to manage various kubernetes-based clusters:
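For a quick sanity check that the service is responding, the endpoint can be probed from the host machine; a minimal sketch, assuming the default self-signed certificate (hence `-k`):

```bash
## probe the rancher-server endpoint: a 200 (or 3xx redirect) indicates the
## container is up, even if cluster provisioning is still in progress
curl -k -s -o /dev/null -w '%{http_code}\n' https://localhost:7895
```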
Upon immediate login, the rancher-server may still be provisioning:
Note: the provided install scripts, used to provision the corresponding vagrant virtual machine(s), can also be used in production-like environments. However, for high availability, the `install-rancher-server` script will need to be adjusted.
After a few minutes, the cluster will complete provisioning and become active:
Then, the newly created cluster can be reviewed:
Additional hosts can be added to a desired cluster by first clicking Edit under the top-right hamburger icon:
At the bottom of the associated page, the corresponding docker command can be pasted into the desired cluster host:
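The pasted command generally takes the following shape; this is only a hedged sketch, since the actual registration token and checksum are generated per-cluster by the rancher-server, and the rancher/rancher-agent tag should match the running server version:

```bash
## example shape of the generated registration command: the --token and
## --ca-checksum values below are placeholders emitted by the rancher-server
sudo docker run -d --privileged --restart=unless-stopped --net=host \
    -v /etc/kubernetes:/etc/kubernetes \
    -v /var/run:/var/run \
    rancher/rancher-agent:v2.1.5 \
    --server https://localhost:7895 \
    --token <registration-token> \
    --ca-checksum <checksum> \
    --worker
```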
The `kubectl` command can be executed on the rancher-server via the web browser, or directly within the container:
```bash
[root@rancher-server vagrant]# docker ps -a
CONTAINER ID   IMAGE                    COMMAND           CREATED        STATUS        PORTS                                         NAMES
92dde0d45453   rancher/rancher:v2.1.5   "entrypoint.sh"   38 hours ago   Up 38 hours   0.0.0.0:8890->80/tcp, 0.0.0.0:8895->443/tcp   rancher
[root@rancher-server vagrant]# docker exec -it rancher /bin/bash
root@92dde0d45453:/var/lib/rancher# kubectl run nginx \
    --image=nginx \
    --port=80 \
    --env='DOMAIN=cluster' \
    --replicas=3
```
Note: if `--env=` is passed, environment variables can be read from STDIN using the standard env syntax.
Additionally, a yaml configuration file can be utilized:
```bash
[root@rancher-server vagrant]# docker ps -a
CONTAINER ID   IMAGE                    COMMAND           CREATED        STATUS        PORTS                                         NAMES
92dde0d45453   rancher/rancher:v2.1.5   "entrypoint.sh"   38 hours ago   Up 38 hours   0.0.0.0:8890->80/tcp, 0.0.0.0:8895->443/tcp   rancher
[root@rancher-server vagrant]# docker exec -it rancher /bin/bash
root@92dde0d45453:/var/lib/rancher# kubectl create -f ./manifest.yaml
root@92dde0d45453:/var/lib/rancher# kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3         3         3            3           18s
```
The following is an example `manifest.yaml`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        env:
        - name: DOMAIN
          value: cluster
```
Note: for development purposes, `kind: Pod` can be implemented if replicas are not needed. However, in production systems, `kind: Deployment` is typically desired.
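For example, a single-container Pod equivalent of the above deployment might resemble the following sketch, created directly from STDIN (the `nginx` pod name is illustrative):

```bash
## development-only sketch: one nginx pod, without replicas or self-healing
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
    env:
    - name: DOMAIN
      value: cluster
EOF
```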
Note: Kompose can convert an existing `docker-compose.yml` to a series of kubernetes yaml files.
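For instance, assuming kompose is already installed, the conversion is a single command:

```bash
## emit kubernetes yaml files alongside the existing compose file
kompose convert -f docker-compose.yml
```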
Alternatively, instead of defining multiple environment variables in each yaml file, a custom `configMapRef` can be created:
```bash
root@92dde0d45453:/var/lib/rancher# cat config/special_config.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  DOMAIN: cluster
root@92dde0d45453:/var/lib/rancher# kubectl create -f config/special_config.yml
```
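If desired, the resulting resource can be verified from the same shell:

```bash
## confirm the special-config ConfigMap registered with the expected data
kubectl get configmap special-config -o yaml
```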
Note: the ConfigMap API resource stores configuration data as key-value pairs. The data can be consumed in pods or provide the configurations for system components such as controllers. ConfigMap is similar to Secrets, but provides a means of working with strings that don’t contain sensitive information.
This allows the above `manifest.yaml` to be refactored as follows:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        envFrom:
        - configMapRef:
            name: special-config
```
Note: with `envFrom`, every key within `special-config` becomes an environment variable in the given container; individual keys cannot be selected at that level.
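If only a single key, such as `DOMAIN`, should be exposed, a hedged per-key alternative uses `env` with a `configMapKeyRef` (the `nginx-env-demo` pod name is illustrative):

```bash
## expose only the DOMAIN key from special-config, rather than every key
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-env-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    env:
    - name: DOMAIN
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: DOMAIN
EOF
```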
Fork this project in your GitHub account. Then, clone your repository with one of the following approaches, each sketched after the list:
- simple clone: clone the remote master branch.
- commit hash: clone the remote master branch, then checkout a specific commit hash.
- release tag: clone the remote branch associated with the desired release tag.
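The three approaches might look as follows; the repository URL, commit hash, and tag name are placeholders:

```bash
## simple clone: the remote master branch
git clone https://github.com/<account>/rancher-demonstration.git
cd rancher-demonstration

## commit hash: checkout a specific commit after cloning
git checkout <commit-hash>

## release tag: checkout the branch associated with a release tag
git checkout tags/<tag-name>
```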
In order to proceed with the installation for this project, three dependencies need to be installed:
- Vagrant
- VirtualBox x.y.z (or higher)
- Extension Pack x.y.z (required)
Once the necessary dependencies have been installed, execute the following command to build the rancher-server:
```bash
cd /path/to/rancher-demonstration/
vagrant up
```
Note: an alternative to `vagrant up` is to run `vagrant up rancher-server`. However, the associated rancher-agent needs to be created as well.
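In that case, a per-machine build might resemble the following (the rancher-agent machine name assumes the provided Vagrantfile):

```bash
## build each machine individually
vagrant up rancher-server
vagrant up rancher-agent
```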
Depending on the network speed, the build can take between 2 and 4 minutes. So, grab a cup of coffee, and perhaps enjoy a danish while the virtual machines build.
Note: a more complete refresher on virtualization can be found within the vagrant wiki page.
Though the implemented install scripts are used to provision vagrant, they can also be run on non-vagrant systems. For example, the install scripts can easily be run on corresponding virtual machines:
```bash
## virtual machine for rancher-server
./install-rancher-server \
    "$server_version" \
    "$server_internal_port" \
    "$server_internal_https_port" \
    >> "${project_root}/logs/install-rancher-server.txt" 2>&1

## virtual machine for rancher-agent
./install-rancher-agent \
    "$agent_version" \
    "$server_ip" \
    "$server_internal_port" \
    "$server_internal_https_port" \
    >> "${project_root}/logs/install-rancher-agent.txt" 2>&1
```
Note: when running the above install scripts, it is assumed that the docker and rancher-cli dependencies have already been accounted for.
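A hedged preflight sketch, which assumes the rancher-cli binary is installed as `rancher`, can verify those dependencies before either script runs:

```bash
## fail fast when an assumed dependency is missing
for dependency in docker rancher; do
    if ! command -v "$dependency" > /dev/null 2>&1; then
        echo "missing dependency: ${dependency}" >&2
        exit 1
    fi
done
```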