Furyctl EKS Demo | IDI2021
This step-by-step tutorial helps you deploy the Kubernetes Fury Distribution on an EKS cluster.
This tutorial covers the following steps:

- Deploy a Kubernetes cluster with `furyctl`.
- Download the latest version of Fury with `furyctl`.
- Install the Fury distribution.
- Explore some features of the distribution.
- Teardown the environment.
⚠️ You will be charged to provision the resources used in this tutorial. You should only be charged a few dollars, but we are not responsible for any charges that you may incur.

❗️ Remember to stop all the instances by following all the steps listed in the teardown phase.
Prerequisites
This tutorial assumes some basic familiarity with Kubernetes. Some experience with Terraform is helpful but not strictly required.
To follow this tutorial, you need:

- Docker - a Docker image containing `furyctl` and all the necessary tools is provided.
- OpenVPN Client - Tunnelblick (on macOS) or OpenVPN Connect (for other operating systems) are recommended.
- AWS Access Credentials
Setup and initialize the environment
- Open a terminal.

- Run the `fury-getting-started` docker image:

  ```bash
  docker run -ti --rm \
    -v $PWD:/demo \
    registry.sighup.io/delivery/fury-getting-started:0.1.6
  ```
- Clone this repository containing all the example code used in this tutorial:

  ```bash
  git clone https://github.com/nikever/demo-idi2021-furyctl-eks
  ```
- Setup AWS credentials by exporting the following environment variables:

  ```bash
  export AWS_ACCESS_KEY_ID=
  export AWS_SECRET_ACCESS_KEY=
  export AWS_DEFAULT_REGION=
  ```
- Create an S3 bucket to hold the Terraform state:

  ```bash
  aws s3api create-bucket \
    --bucket fury-idi-2021 \
    --region $AWS_DEFAULT_REGION \
    --create-bucket-configuration LocationConstraint=$AWS_DEFAULT_REGION
  ```
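One caveat worth knowing: `create-bucket` rejects an explicit `LocationConstraint` of `us-east-1`, because that region is the S3 default. A hedged sketch of a helper that picks the right arguments (the `bucket_create_args` function is hypothetical, not part of the tutorial's tooling):

```bash
# Sketch: choose create-bucket arguments by region. S3 rejects an explicit
# LocationConstraint of us-east-1, since that region is the default.
bucket_create_args() {
  region="$1"
  if [ "$region" = "us-east-1" ]; then
    echo "--bucket fury-idi-2021 --region $region"
  else
    echo "--bucket fury-idi-2021 --region $region --create-bucket-configuration LocationConstraint=$region"
  fi
}

# Usage (word splitting on the unquoted expansion is intentional):
#   aws s3api create-bucket $(bucket_create_args "$AWS_DEFAULT_REGION")
```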
You are all set ✌️.
Step 1 - Automatic provisioning of a Kubernetes cluster
Bootstrap provisioning phase
- Edit the `bootstrap.yml` template located at `/demo/infrastructure/aws/bootstrap.yml`.
- Execute:

  ```bash
  cd /demo/infrastructure/aws
  furyctl bootstrap init
  furyctl bootstrap apply
  ```
- Inspect the output for the VPC ID and the private subnet IDs.
🚀 Under the hood, the provisioner uses this Terraform module.
Cluster provisioning phase
- Create the `fury-idi-2021-aws.ovpn` OpenVPN credentials file with `furyagent`:

  ```bash
  furyagent configure openvpn-client \
    --client-name fury \
    --config /demo/infrastructure/aws/bootstrap/secrets/furyagent.yml \
    > fury-idi-2021-aws.ovpn
  ```
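Before importing the profile into your VPN client, it can help to sanity-check it: a valid OpenVPN client profile contains at least a `remote <host> <port>` directive. A sketch of such a check (the `check_ovpn` helper is hypothetical, not part of `furyagent`):

```bash
# Sketch: sanity-check a generated OpenVPN client profile.
# A valid profile is non-empty and contains a "remote <host> <port>" line.
check_ovpn() {
  file="$1"
  [ -s "$file" ] || { echo "empty or missing: $file"; return 1; }
  grep -q '^remote ' "$file" || { echo "no remote directive in $file"; return 1; }
  echo "ok: $file"
}

# Usage against the file generated above:
#   check_ovpn fury-idi-2021-aws.ovpn
```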
- Check that the `fury` user is now listed:

  ```bash
  furyagent configure openvpn-client \
    --list \
    --config /demo/infrastructure/aws/bootstrap/secrets/furyagent.yml
  ```
- Connect to the OpenVPN Server.

- Edit the `cluster.yml` template located at `/demo/infrastructure/aws/cluster.yml`.
- Execute:

  ```bash
  cd /demo/infrastructure/aws
  furyctl cluster init
  furyctl cluster apply
  ```
- Test the connection:

  ```bash
  export KUBECONFIG=/demo/infrastructure/aws/cluster/secrets/kubeconfig
  kubectl get nodes
  ```
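If you want to go one step further than eyeballing the output, you can count the nodes that report `Ready`. A sketch, using sample output in place of a live cluster (the node names and versions below are hypothetical):

```bash
# Sketch: count nodes reporting Ready. The sample stands in for live
# `kubectl get nodes` output (hypothetical node names and versions).
sample='NAME                        STATUS   ROLES    AGE   VERSION
ip-10-0-1-10.ec2.internal   Ready    <none>   5m    v1.20.7
ip-10-0-2-11.ec2.internal   Ready    <none>   5m    v1.20.7'

# Skip the header row, keep rows whose STATUS column is Ready, count them.
ready=$(echo "$sample" | awk 'NR > 1 && $2 == "Ready"' | wc -l | tr -d ' ')
echo "Ready nodes: $ready"

# Against the live cluster:
#   kubectl get nodes --no-headers | awk '$2 == "Ready"' | wc -l
```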
🚀 Under the hood, the provisioner uses this Terraform module.
Step 2 - Download Fury modules
- Download the Fury modules with `furyctl`:

  ```bash
  cd /demo/
  furyctl vendor -H
  ```
- Inspect the downloaded modules in the `vendor` folder:

  ```bash
  tree -d /demo/vendor -L 3
  ```
Step 3 - Installation
```bash
cd /demo/manifests/aws/
make apply

# Due to a chicken-and-egg 🐓🥚 problem with custom resources, you have to apply again
make apply
```
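The double apply is needed because the first run installs CustomResourceDefinitions alongside manifests that instantiate them; the custom resources are rejected until the API server has registered the CRDs, and the second run fills them in. A generic retry wrapper captures this pattern — a sketch, where `retry` is a hypothetical helper and not part of the provided Makefile:

```bash
# Sketch: run a command up to N times, for steps expected to fail
# transiently (e.g. until CRDs are registered). Hypothetical helper.
retry() {
  attempts="$1"; shift
  n=1
  while true; do
    "$@" && return 0                    # success: stop retrying
    [ "$n" -ge "$attempts" ] && return 1  # out of attempts: give up
    n=$((n + 1))
    sleep 2
  done
}

# Usage:
#   retry 2 make apply
```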
Step 4 - Explore the distribution
Make sure you are connected to the VPN when you interact with the cluster.
Setup local DNS
AWS
- Get the IP address of the internal load balancer:

  ```bash
  dig $(kubectl get svc ingress-nginx -n ingress-nginx --no-headers | awk '{print $4}')
  ```

  Output:

  ```
  ...
  ;; ANSWER SECTION:
  xxx.elb.eu-west-1.amazonaws.com. 77 IN A <FIRST_IP>
  xxx.elb.eu-west-1.amazonaws.com. 77 IN A <SECOND_IP>
  xxx.elb.eu-west-1.amazonaws.com. 77 IN A <THIRD_IP>
  ...
  ```
- Add the following line to your local `/etc/hosts`:

  ```
  <FIRST_IP> forecastle.aws.fury cerebro.aws.fury kibana.aws.fury grafana.aws.fury
  ```
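Extracting the first IP from the dig answer can be scripted. A sketch that builds the `/etc/hosts` line from a sample answer section (the IPs below are hypothetical stand-ins for your real output):

```bash
# Sketch: derive the /etc/hosts line from a dig answer section.
# The answer here is a stand-in for real output (hypothetical IPs).
answer=';; ANSWER SECTION:
xxx.elb.eu-west-1.amazonaws.com. 77 IN A 10.0.1.100
xxx.elb.eu-west-1.amazonaws.com. 77 IN A 10.0.2.101'

# Field 4 is the record type, field 5 the address; take the first A record.
first_ip=$(echo "$answer" | awk '$4 == "A" { print $5; exit }')
echo "$first_ip forecastle.aws.fury cerebro.aws.fury kibana.aws.fury grafana.aws.fury"

# Against the live record:
#   dig +short <LOAD_BALANCER_HOSTNAME> | head -n 1
```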
Play around with the distro
Open the browser and go to `forecastle.aws.fury`.
Step 5 - Teardown
Destroy clusters
```bash
cd /demo/infrastructure/aws/
furyctl cluster destroy
```
Destroy load balancers and networking resources
- Delete the target groups and load balancers associated with the EKS cluster using the AWS CLI:

  ```bash
  loadbalancer=$(aws resourcegroupstaggingapi get-resources \
    --tag-filters Key=kubernetes.io/cluster/fury-aws-demo,Values=owned \
    | jq -r ".ResourceTagMappingList[] | .ResourceARN" | grep loadbalancer)

  for i in $loadbalancer ; do aws elbv2 delete-load-balancer --load-balancer-arn $i ; done

  target_groups=$(aws resourcegroupstaggingapi get-resources \
    --tag-filters Key=kubernetes.io/cluster/fury-aws-demo,Values=owned \
    | jq -r ".ResourceTagMappingList[] | .ResourceARN" | grep targetgroup)

  for tg in $target_groups ; do aws elbv2 delete-target-group --target-group-arn $tg ; done
  ```
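The deletion commands above work by fetching every resource tagged as owned by the cluster, then splitting the ARNs by type. A sketch of that filtering step on sample data (the account ID and resource names are hypothetical stand-ins for the tagging API response):

```bash
# Sketch: split cluster-owned ARNs by resource type, as the teardown does.
# Sample ARNs (hypothetical) stand in for the tagging API response.
arns='arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/fury/1111
arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/fury/2222'

lbs=$(echo "$arns" | grep loadbalancer)
tgs=$(echo "$arns" | grep targetgroup)
echo "load balancers to delete: $lbs"
echo "target groups to delete:  $tgs"
```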
- Delete networking resources:

  ```bash
  cd /demo/infrastructure/aws/
  furyctl bootstrap destroy
  ```
- Delete the bucket:

  ```bash
  # Delete all object versions
  aws s3api delete-objects \
    --bucket fury-idi-2021 \
    --delete "$(aws s3api list-object-versions --bucket fury-idi-2021 --query='{Objects: Versions[].{Key:Key,VersionId:VersionId}}')"

  # Delete the bucket
  aws s3api delete-bucket --bucket fury-idi-2021
  ```
Conclusions
Congratulations, you made it! 🥳🥳
We hope you enjoyed this tour of Fury!
Issues/Feedback
In case you ran into any problems, feel free to open an issue here on GitHub.
Where to go next?
More in-depth tutorials:
More about Fury: