The repository contains the following folders and files:
- "components" folder: contains the applications of our final system, packaged to run on Docker;
- "docker" folder: contains the Docker Compose files used at the beginning of development to run some components;
- "k8s" folder: contains all the YAML files needed to run the services on a cluster, plus some scripts to automate the deployment;
- "spark" folder: contains the Spark code;
- "terraform" folder: contains the Terraform files for provisioning an S3 bucket;
- root folder: contains the scripts for creating and destroying the EC2 instances.
- The following commands must be available on the system: kubectl, kops, terraform, spark-submit;
- AWS credentials must be configured on the local machine (a quick check is sketched below).
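A quick sanity check of the prerequisites (a sketch; the aws call works only if the AWS CLI is also installed, which is not strictly required by the scripts):

# verify that the required tools are on the PATH
for t in kubectl kops terraform spark-submit; do command -v "$t" >/dev/null || echo "missing: $t"; done
# verify that AWS credentials are visible (optional, requires the AWS CLI)
aws sts get-caller-identity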
- go to the root folder
- run
./init_cluster_aws.sh
- when prompted, set minSize and maxSize to 8 and save;
- wait for the execution to finish and check that everything completed without errors;
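- (optional) to double-check that the cluster came up correctly, run the following (a sketch; if the init script does not leave KOPS_STATE_STORE exported in your shell, pass the state bucket explicitly with --state)
# wait until kops reports the cluster as valid, then list the worker nodes
kops validate cluster --wait 10m
kubectl get nodes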
- go to the k8s folder
- run
./run_k8s_withreplicas.sh
- check that all pods are running with the command
kubectl get pods
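- (optional) instead of polling by hand, wait for all pods to become ready with the command below (a sketch, assuming the services are deployed in the default namespace)
# block until every pod reports Ready, or give up after 5 minutes
kubectl wait --for=condition=Ready pods --all --timeout=300s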
- run
kubectl get services
and reach the address of the "api-gateway" service to open the web page;
- (optional) send a review from this page, or run the automatic stress client with the command
kubectl apply -f ./client
- (optional) to stop the automatic stress client, run
kubectl delete -f ./client
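- (optional) to print only the api-gateway address instead of scanning the whole service list, run the command below (a sketch; it assumes the service is named api-gateway, sits in the default namespace, and is exposed through a LoadBalancer)
# on AWS the external endpoint is usually a hostname; other providers may populate .ip instead
kubectl get service api-gateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'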
- run
kubectl get services -n monitoring
and reach the address of the "grafana" or "prometheus" service to open the monitoring pages;
- when finished, run
./stop_k8s.sh
- go back to the root folder
- run
./destroy_cluster_aws.sh
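- (optional) after the destroy script completes, check that no billable resources were left behind (a sketch; it requires the AWS CLI, and kops needs its state store, which the script may already have removed)
# the cluster should no longer be listed
kops get clusters
# no leftover EC2 instances should still be running
aws ec2 describe-instances --filters Name=instance-state-name,Values=running --query 'Reservations[].Instances[].InstanceId'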
The project was initially hosted on GitLab, so the Docker images are stored in a private container registry; they can be rebuilt from the provided Dockerfiles with docker build. For building the Spark images, follow the official Spark guide.
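A minimal sketch of rebuilding and publishing one image (the registry, component name, and Dockerfile path below are placeholders, not the actual names used by the project; the image references in the k8s manifests must be updated to match whatever registry is used):

# build the image of one component from its Dockerfile (path is hypothetical)
docker build -t <registry>/<component>:latest ./components/<component>
# push it to a registry the cluster can pull from
docker push <registry>/<component>:latest
# for the Spark images, the official Spark distribution ships bin/docker-image-tool.sh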