Fury Discovery Day
This step-by-step tutorial helps you deploy the Kubernetes Fury Distribution on a GKE cluster and an EKS cluster.
This tutorial covers the following steps:
- Deploy Kubernetes clusters with `furyctl`.
- Download the latest version of Fury with `furyctl`.
- Install the Fury distribution.
- Explore some features of the distribution.
- Teardown the environment.
⚠️ You will be charged for the resources provisioned in this tutorial. The cost should only be a few dollars, but we are not responsible for any charges you may incur.

❗️ Remember to stop all the instances by following all the steps listed in the teardown phase.
Prerequisites
This tutorial assumes some basic familiarity with Kubernetes. Some experience with Terraform is helpful but not strictly required.
To follow this tutorial, you need:
- Docker - a Docker image containing `furyctl` and all the necessary tools is provided.
- OpenVPN Client - Tunnelblick (on macOS) or OpenVPN Connect (for other OS) are recommended.
- GCP Access Credentials
- AWS Access Credentials
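Before going further, it can help to verify that the container actually ships everything the tutorial relies on. The tool list below is our assumption of what the steps use, not an official manifest:

```shell
# Hypothetical preflight check: verify the CLI tools used in this tutorial
# are on the PATH inside the container.
for tool in furyctl furyagent kubectl terraform aws gsutil jq dig; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
  fi
done
```

If anything prints as `missing`, double-check that you are running inside the `fury-getting-started` image rather than on your host shell.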
Setup and initialize the environment
- Open a terminal.
- Run the `fury-getting-started` docker image:

```bash
docker run -ti --rm \
  -v $PWD:/demo \
  registry.sighup.io/delivery/fury-getting-started
```
- Clone the fury-discovery-day repository containing all the example code used in this tutorial:

```bash
git clone https://github.com/nikever/demo-fury-discovery-day
```
- Set up your GCP and AWS credentials by exporting the following environment variables:

```bash
export GOOGLE_CREDENTIALS= # /path/to/gcp-service-account.json
export GOOGLE_APPLICATION_CREDENTIALS=$GOOGLE_CREDENTIALS
export GOOGLE_PROJECT=
export GOOGLE_REGION=
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_DEFAULT_REGION=
```
- Create an S3 Bucket and a GCP Storage Bucket to hold the Terraform state.
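The buckets can be created from each cloud console, or with the CLIs already in the container. A minimal sketch follows; the bucket names are hypothetical examples (both S3 and GCS bucket names must be globally unique, so pick your own):

```shell
# Hypothetical bucket names -- change them to something globally unique.
AWS_STATE_BUCKET="fury-demo-tfstate-aws"
GCP_STATE_BUCKET="fury-demo-tfstate-gcp"

# S3 bucket for the AWS Terraform state (created in AWS_DEFAULT_REGION):
aws s3 mb "s3://${AWS_STATE_BUCKET}"

# GCS bucket for the GCP Terraform state:
gsutil mb -l "${GOOGLE_REGION}" "gs://${GCP_STATE_BUCKET}"

echo "Terraform state buckets: s3://${AWS_STATE_BUCKET} gs://${GCP_STATE_BUCKET}"
```

Whichever bucket names you choose, reference them in the `bootstrap.yml` and `cluster.yml` templates used in the next steps.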
You are all set ✌️.
Step 1 - Automatic provisioning of the Kubernetes clusters
Bootstrap provisioning phase
GCP
- Edit the `bootstrap.yml` template located at `/demo/infrastructure/gcp/bootstrap.yml`.
- Execute:

```bash
cd /demo/infrastructure/gcp
furyctl bootstrap init
furyctl bootstrap apply
```
AWS
- Edit the `bootstrap.yml` template located at `/demo/infrastructure/aws/bootstrap.yml`.
- Execute:

```bash
cd /demo/infrastructure/aws
furyctl bootstrap init
furyctl bootstrap apply
```

- Inspect the output for the VPC ID and the private subnet IDs.
Cluster provisioning phase
GCP
- Create the `fury-discovery-day-gcp.ovpn` OpenVPN credentials file with `furyagent`:

```bash
furyagent configure openvpn-client \
  --client-name fury \
  --config /demo/infrastructure/gcp/bootstrap/secrets/furyagent.yml \
  > fury-discovery-day-gcp.ovpn
```

- Check that the `fury` user is now listed:

```bash
furyagent configure openvpn-client \
  --list \
  --config /demo/infrastructure/gcp/bootstrap/secrets/furyagent.yml
```
- Connect to the OpenVPN Server.
- Edit the `cluster.yml` template located at `/demo/infrastructure/gcp/cluster.yml`.
- Execute:

```bash
cd /demo/infrastructure/gcp
furyctl cluster init
furyctl cluster apply
```

- Test the connection:

```bash
export KUBECONFIG=/demo/infrastructure/gcp/cluster/secrets/kubeconfig
kubectl get nodes
```
AWS
- Create the `fury-discovery-day-aws.ovpn` OpenVPN credentials file with `furyagent`:

```bash
furyagent configure openvpn-client \
  --client-name fury \
  --config /demo/infrastructure/aws/bootstrap/secrets/furyagent.yml \
  > fury-discovery-day-aws.ovpn
```

- Check that the `fury` user is now listed:

```bash
furyagent configure openvpn-client \
  --list \
  --config /demo/infrastructure/aws/bootstrap/secrets/furyagent.yml
```
- Connect to the OpenVPN Server.
- Edit the `cluster.yml` template located at `/demo/infrastructure/aws/cluster.yml`.
- Execute:

```bash
cd /demo/infrastructure/aws
furyctl cluster init
furyctl cluster apply
```

- Test the connection:

```bash
export KUBECONFIG=/demo/infrastructure/aws/cluster/secrets/kubeconfig
kubectl get nodes
```
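From here on, the tutorial switches back and forth between the two clusters. Instead of re-exporting `KUBECONFIG` each time, you can define small wrapper functions; the function names `kgcp` and `kaws` are our own invention, not part of the distribution:

```shell
# Hypothetical helpers: run kubectl against a specific cluster without
# touching the KUBECONFIG environment variable.
kgcp() { kubectl --kubeconfig /demo/infrastructure/gcp/cluster/secrets/kubeconfig "$@"; }
kaws() { kubectl --kubeconfig /demo/infrastructure/aws/cluster/secrets/kubeconfig "$@"; }

# Usage examples:
#   kgcp get nodes
#   kaws get nodes
```

Remember that the wrappers do not change which VPN you are connected to; you still need the right tunnel up for the cluster you are targeting.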
Step 2 - Download Fury modules
- Download the Fury modules with `furyctl`:

```bash
cd /demo/
furyctl vendor -H
```

- Inspect the downloaded modules in the `vendor` folder:

```bash
tree -d /demo/vendor -L 3
```
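The set of modules that `furyctl vendor` downloads is driven by the `Furyfile` in the repository root. Schematically it looks like the following; the module names and versions here are purely illustrative, so refer to the actual `Furyfile` in the cloned repo for the real ones:

```yaml
# Illustrative Furyfile sketch -- real module names/versions are in the repo.
versions:
  networking: v1.x.x
  monitoring: v1.x.x
  logging: v1.x.x
  ingress: v1.x.x

bases:
  - name: networking
  - name: monitoring
  - name: logging
  - name: ingress
```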
Step 3 - Installation
GCP
```bash
cd /demo/manifests/gcp/
make apply
# Due to a chicken-and-egg 🐓🥚 problem with custom resources, you have to apply again
make apply
```
AWS
```bash
cd /demo/manifests/aws/
make apply
# Due to a chicken-and-egg 🐓🥚 problem with custom resources, you have to apply again
make apply
```
Step 4 - Explore the distribution
Make sure you are connected to the right VPN when interacting with each cluster.
Set up local DNS
AWS
- Get the IP addresses of the internal load balancer:

```bash
dig $(kubectl get svc ingress-nginx -n ingress-nginx --no-headers | awk '{print $4}')
```

Output:

```
...
;; ANSWER SECTION:
xxx.elb.eu-west-1.amazonaws.com. 77 IN A <FIRST_IP>
xxx.elb.eu-west-1.amazonaws.com. 77 IN A <SECOND_IP>
xxx.elb.eu-west-1.amazonaws.com. 77 IN A <THIRD_IP>
...
```

- Add the following line to your local `/etc/hosts`:

```
<FIRST_IP> forecastle.aws.fury cerebro.aws.fury kibana.aws.fury grafana.aws.fury
```
GCP
- Get the IP address of the internal load balancer:

```bash
kubectl get svc ingress-nginx -n ingress-nginx --no-headers | awk '{print $4}'
```

- Add the following line to your local `/etc/hosts`:

```
<LOADBALANCER_IP> forecastle.gcp.fury cerebro.gcp.fury kibana.gcp.fury grafana.gcp.fury
```

For example:

```
10.1.0.5 forecastle.gcp.fury cerebro.gcp.fury kibana.gcp.fury grafana.gcp.fury
```
Play around with the distro
Step 5 - Teardown
Destroy clusters
Make sure you are connected to the right VPN when interacting with each cluster.
GCP
```bash
cd /demo/infrastructure/gcp/
furyctl cluster destroy
```
AWS
```bash
cd /demo/infrastructure/aws/
furyctl cluster destroy
```
Destroy load balancers and networking resources
GCP
- Delete the load balancer associated with the GKE cluster from the GCP console.
- Delete networking:

```bash
cd /demo/infrastructure/gcp/
furyctl bootstrap destroy
```
AWS
- Delete the target groups and load balancers associated with the EKS cluster using the AWS CLI:

```bash
target_groups=$(aws resourcegroupstaggingapi get-resources \
  --tag-filters Key=kubernetes.io/cluster/fury-eks-demo,Values=owned \
  | jq -r ".ResourceTagMappingList[] | .ResourceARN" | grep targetgroup)
for tg in $target_groups ; do aws elbv2 delete-target-group --target-group-arn $tg ; done

loadbalancers=$(aws resourcegroupstaggingapi get-resources \
  --tag-filters Key=kubernetes.io/cluster/fury-eks-demo,Values=owned \
  | jq -r ".ResourceTagMappingList[] | .ResourceARN" | grep loadbalancer)
for lb in $loadbalancers ; do aws elbv2 delete-load-balancer --load-balancer-arn $lb ; done
```
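After the deletion loops complete, one way to sanity-check the result (a hypothetical verification step, not part of the original flow) is to re-run the same tag query and confirm nothing is still tagged as owned by the cluster:

```shell
# Count resources still tagged as owned by the demo cluster;
# 0 means the target groups and load balancers are gone.
leftover=$(aws resourcegroupstaggingapi get-resources \
  --tag-filters Key=kubernetes.io/cluster/fury-eks-demo,Values=owned \
  | jq -r '.ResourceTagMappingList | length')
echo "resources still tagged: ${leftover:-unknown}"
```

Note that newly deleted load balancers can take a short while to disappear from the tagging API, so a non-zero count immediately after deletion is not necessarily an error.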
- Delete networking:

```bash
cd /demo/infrastructure/aws/
furyctl bootstrap destroy
```
Conclusions
Congratulations, you made it! 🥳🥳
We hope you enjoyed this tour of Fury!
Issues/Feedback
In case you ran into any problems, feel free to open an issue here on GitHub.
Where to go next?
More in-depth tutorials:
More about Fury: