- 5.1. Usage
- 5.2. Requirements
- 5.3. Providers
- 5.4. Modules
- 5.5. Inputs
- 5.6. Outputs
- 5.7. Conditional creation
- 5.8. Resources
To run this Terraform module, the AWS credentials must belong to a user with at least the following AWS managed IAM policies:
- `AmazonDynamoDBFullAccess`
- `AmazonEC2ContainerRegistryFullAccess`
- `AmazonEC2FullAccess`
- `AmazonRDSFullAccess`
- `AmazonRoute53FullAccess`
- `AmazonS3FullAccess`
- `AmazonSQSFullAccess`
- `AmazonVPCFullAccess`
- `IAMFullAccess`
- `AmazonElasticFileSystemFullAccess`
In addition, a new managed policy needs to be created, e.g. `EKS-FULL-CLUSTER-ACCESS`:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:*"
      ],
      "Resource": "*"
    }
  ]
}
```
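If you prefer to manage this prerequisite with Terraform as well, the policy could be created with a sketch like the following. The policy name and the user it is attached to are assumptions for illustration, not part of this module:

```hcl
# Hypothetical prerequisite setup: create the custom EKS policy and
# attach it to the user that will run this module.
resource "aws_iam_policy" "eks_full_cluster_access" {
  name = "EKS-FULL-CLUSTER-ACCESS"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["eks:*"]
      Resource = "*"
    }]
  })
}

resource "aws_iam_user_policy_attachment" "eks_full_cluster_access" {
  user       = "terraform-runner" # assumed user name
  policy_arn = aws_iam_policy.eks_full_cluster_access.arn
}
```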
Refer to the tutorial document for more details on how to set up GitHub Actions with AWS using OIDC.
This module can be used to deploy an EKS cluster with various components and add-ons. Common deployment examples can be found in `examples/`.
Every submodule/component can be enabled or disabled individually. This means you can go from deploying a bare EKS cluster with a set of node groups to deploying an RDS database and S3 buckets, adding custom Kubernetes resources, and even managing the ALB and External DNS.
Have a look at the examples for complete references.
```hcl
module "example" {
  source         = "./main"
  project-prefix = "deep-project"
  project-name   = "My project"
  account-id     = "123456789123"
  Environment    = "Production"

  EKS = {
    cluster-name = "my-cluster"
    k8s-version  = "1.22"
    network = {
      region          = "us-east-1"
      azs             = ["us-east-1a", "us-east-1b"]
      vpc             = "10.0.0.0/16"
      private-subnets = ["10.0.0.0/19", "10.0.32.0/19"]
      public-subnets  = ["10.0.128.0/20", "10.0.144.0/20"]
    }
    nodes = [
      {
        instance-type = "c6a.12xlarge"
        min           = 1
        max           = 1
        node-count    = 1
      },
      {
        instance-type = "t3.small"
        min           = 1
        max           = 3
        node-count    = 2
      }
    ]
  }

  aws-auth = {
    enabled = true
    users = [
      "my-user",
    ]
  }

  RDS = {
    enabled               = true
    db-name               = "my_name"
    engine                = "postgres"
    engine-version        = "14.1"
    instance-class        = "db.t3.micro"
    username              = "my_user"
    password              = "my_password"
    allocated-storage     = 50
    max-allocated-storage = 100
    subnets               = ["10.0.192.0/21", "10.0.200.0/21"]
    enable-backup         = true
  }

  BASTION_HOST = {
    enabled             = true
    name                = "bastion"
    ingress-cidr-blocks = ["0.0.0.0/0"]
    ssh-key             = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDuMrTqpky5TIL8ltjL47T7SGxamJ8+5JmwUqYt+z5GbF3+WgcHWOCATlslF3FhvMOnUGFfxJrWI5FAo51r7T5m/mpGYPG431SREDkwgx3kLLvqD6sv1OOqmJbW1+5//dMoKab2kqKRyds1QETARjHqk1HTE1cv9gLqpqUqLlYKXDgPZHQtjNmlO2asBZgC5w4Q4tvWWHrNMkiT8LT64V0gA39BzXXnWFSsBuEMwr8oRGhBDxPYG760NAj6SyOIFUaW10ZCDJlWhR76u/K6ULPYn2jKpDeoawZkeLg6B3vjct9Fy13R9EOKdYtWUN9f8k8vz4vb4kkb3jiNcsPzNndEIg60BfNd4RZMq72pca3CNfXUIAZlHZqrrkLXDCTrfz7w+EkSjPfONmE2lY5n8wIufBTB0rUc5qEssvrWmOToXBNlsxAICvHBckPzR9AoBYn4bJr1ki8NrhsHl1xdo9+YZvAR94CQ3qN0sqkVmYABEk2fJFyS0fc7bSRr4CcQSeE= example@example.com"
    spec = {
      instance-type = "t2.small"
      image-ami     = "ami-0b5eea76982371e91"
      volume-size   = 10
      volume-type   = "gp2"
      min-size      = 1
      max-size      = 1
      desired-size  = 1
    }
  }

  ECR = {
    enabled = true
    repository-names = [
      "api",
      "scheduler",
      "web",
      "worker",
      "core"
    ]
  }

  S3 = {
    enabled = true
    bucket-names = [
      "data",
    ]
  }

  SQS = {
    enabled = true
    queue-names = [
      "example",
    ]
  }

  K8S = {
    enabled   = true
    namespace = "my_namespace"
    main-sa   = "my_sa"
    db-secret = "my_db_secret"
    runner = {
      enabled = true
      token   = "abcdefghejklmnopqrst123654789"
      tag     = "my_tag"
    }
    alb = {
      enabled   = true
      namespace = "kube-system"
      sa-name   = "alb-sa-iam"
    }
    dns = {
      enabled   = true
      namespace = "kube-system"
      sa-name   = "dns-sa-iam"
      domain    = "example.com"
    }
  }
}
```
| Name | Version |
|---|---|
| Terraform | >= 0.13.1 |
| Name | Version | Required |
|---|---|---|
| aws | > 4.18.0 | True ¹ |
| kubernetes | 2.5.0 | False ² |
| helm | 2.3.0 | False ² |
| kubectl | >= 1.7.0 | False ² |
| tls | 4.0.1 | False ² |
¹ This provider is required to create the bare minimum of resources.

² These providers are used to create optional resources and won't be used unless the corresponding submodules are enabled.
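A matching `required_providers` block could look like the following sketch. The provider sources (in particular `gavinbunney/kubectl` for the `kubectl` provider) are assumptions, so check them against your lock file:

```hcl
terraform {
  required_version = ">= 0.13.1"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "> 4.18.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.5.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "2.3.0"
    }
    kubectl = {
      source  = "gavinbunney/kubectl" # assumed source for the kubectl provider
      version = ">= 1.7.0"
    }
    tls = {
      source  = "hashicorp/tls"
      version = "4.0.1"
    }
  }
}
```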
This Terraform module makes use of custom submodules, as shown in the following table. In turn, each of these submodules can use its own modules from public registries. Please refer to the specification of each submodule to learn more.
| Name | Source | Required |
|---|---|---|
| bastion | ./main/modules/bastion | False |
| cluster | ./main/modules/cluster | True |
| ecr | ./main/modules/ecr | False |
| kubernetes | ./main/modules/kubernetes | False |
| OpenVPN | ./main/modules/OpenVPN | False |
| s3 | ./main/modules/s3 | False |
| sqs | ./main/modules/sqs | False |
| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| project-prefix | Project prefix to be used in naming components | string | null | no |
| project-name | Project name to be used in tagging components | string | null | no |
| account-id | AWS account ID | string | null | no |
| Environment | e.g. Staging/Development/Production | string | null | no |
| EKS | EKS cluster specifications* | any | {} | yes |
| RDS | RDS database config* | map(any) | {} | no |
| BASTION_HOST | Bastion host config* | any | {} | no |
| ECR | ECR repositories config* | map(any) | {} | no |
| S3 | Buckets config* | map(any) | {} | no |
| SQS | SQS config* | map(any) | {} | no |
| K8S | Kubernetes resources config* | any | {} | no |
| aws-auth | Users to be granted access to the cluster* | map(any) | {} | no |
\* For more details on how to use and fill these objects, please refer to the documentation of each respective module for a full reference on the fields and their meanings.
| Name | Description |
|---|---|
| cluster-endpoint | EKS cluster endpoint |
| cluster-cert-auth | Cluster CA certificate |
| cluster-oidc-issuer | Cluster OIDC issuer |
| database-endpoint | RDS database endpoint |
| vpc-id | VPC ID |
| bastion-sec-group-id | Bastion security group ID |
| bucket-arn | List of the ARNs of all S3 buckets created by this module |
| sqs-arn | List of the ARNs of all SQS queues created by this module |
| node-role-arn | Node group role ARN |
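These outputs can be re-exported or consumed from the calling configuration, for example:

```hcl
# Forward a couple of module outputs from the root configuration.
output "cluster_endpoint" {
  value = module.example.cluster-endpoint
}

output "bucket_arns" {
  value = module.example.bucket-arn
}
```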
When the creation of the bastion host is enabled, a security group is created automatically to allow the bastion host to reach the cluster API server on port 443. In the case of a fully private EKS cluster, this bastion host is the only way to run kubectl commands against the cluster.
```hcl
module "example" {
  # The rest of the arguments are omitted for brevity
  BASTION_HOST = {
    enabled = true
    # Omitted
  }
}
```
When the creation of an RDS database is enabled, two things happen under the hood:
- A security group is created that blocks any request to the database except from the created EKS cluster's default security group; it only allows requests from inside the cluster on a certain port, e.g. 5432 for Postgres
- A Kubernetes secret is created in the project namespace that holds all the credentials needed to connect to the database
```hcl
module "example" {
  # The rest of the arguments are omitted for brevity
  RDS = {
    enabled = true
    # Omitted
  }
  K8S = {
    namespace = "my_namespace"
    db-secret = "my_db_secret"
    # Omitted
  }
}
```
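Pods can then read the database credentials from that secret. The sketch below uses the kubernetes provider to inject one credential into a container; the key name `password` is an assumption for illustration, so check the secret created by the module for the actual keys:

```hcl
# Hypothetical consumer: read a credential from the module-created
# secret and expose it to a container as an environment variable.
resource "kubernetes_deployment" "api" {
  metadata {
    name      = "api"
    namespace = "my_namespace"
  }
  spec {
    selector {
      match_labels = { app = "api" }
    }
    template {
      metadata {
        labels = { app = "api" }
      }
      spec {
        container {
          name  = "api"
          image = "api:latest" # assumed image
          env {
            name = "DB_PASSWORD"
            value_from {
              secret_key_ref {
                name = "my_db_secret"
                key  = "password" # assumed key name
              }
            }
          }
        }
      }
    }
  }
}
```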
When creating S3 buckets, SQS queues, or both, a service account is created automatically with all the permissions needed to interact with the created objects, as described in the AWS docs. In the example below, a service account `my_sa` is created in the namespace `my_namespace`, backed by an IAM OIDC provider created on the fly to use IAM roles* for service accounts. If you later attach the `my_sa` service account to a pod, that pod will have all the permissions needed to access the created objects.
```hcl
module "example" {
  # The rest of the arguments are omitted for brevity
  S3 = {
    enabled = true
    bucket-names = [
      "data",
    ]
  }
  SQS = {
    enabled = true
    queue-names = [
      "example",
    ]
  }
  K8S = {
    namespace = "my_namespace"
    main-sa   = "my_sa"
  }
}
```
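Attaching the service account to a workload is then a one-line change in the pod spec. A minimal sketch using the kubernetes provider (the pod name and image are assumptions):

```hcl
# Hypothetical workload: any pod running under the module-created
# service account inherits its IAM permissions for the buckets/queues.
resource "kubernetes_pod" "worker" {
  metadata {
    name      = "worker"
    namespace = "my_namespace"
  }
  spec {
    service_account_name = "my_sa"
    container {
      name  = "worker"
      image = "worker:latest" # assumed image
    }
  }
}
```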
*