This module adds a customer-hosted Exocompute cluster configuration to Rubrik Security Cloud.
There are a few tools you'll need in order to get this project off the ground:
- Terraform v1.5.6 or greater
- Rubrik Polaris Provider for Terraform - provides Terraform resources and data sources for Rubrik Security Cloud (Polaris)
- AWS CLI - needed for Terraform to authenticate with AWS
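For reference, a `required_providers` block that satisfies the version pins from the requirements table below might look like this sketch (the `rubrikinc/polaris` registry address is where the Polaris provider is published):

```hcl
terraform {
  required_version = ">= 1.5.6"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.26.0"
    }
    polaris = {
      source  = "rubrikinc/polaris"
      version = "= 0.8.0-beta.16"
    }
  }
}
```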
This module is designed to be run from a network that can reach the EKS cluster. By default, this module blocks public access to the EKS API. When public access is enabled, only the Rubrik RSC IPs are whitelisted on the EKS cluster; this does not grant access from the network where the `terraform` command is run. Ensure that the `aws_exocompute_public_access_admin_cidr` variable is set to allow the network where the `terraform` command is run to access the cluster. Alternatively, run this Terraform inside AWS on a subnet with routing/VPC endpoint access to the EKS API. It is the Kubernetes provider (i.e., the `kubectl` command) that needs network access to the EKS cluster API.
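For example, to allow a workstation outside AWS to reach the EKS API, the admin CIDR can be passed as a module argument. The CIDR below is a placeholder for your own public egress range:

```hcl
# Inside the module block: admit the network running terraform/kubectl.
# 198.51.100.0/24 is a placeholder; substitute your own egress CIDR.
aws_exocompute_public_access            = true # module default
aws_exocompute_public_access_admin_cidr = ["198.51.100.0/24"]
```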
```hcl
# Deploy Exocompute configuration with inputs provided separately.
module "polaris-aws-cloud-native-customer-managed-exocompute" {
  source = "rubrikinc/polaris-cloud-native-customer-managed-exocompute/aws"

  aws_exocompute_public_subnet_cidr = "172.20.0.0/24"
  aws_exocompute_subnet_1_cidr      = "172.20.1.0/24"
  aws_exocompute_subnet_2_cidr      = "172.20.2.0/24"
  aws_exocompute_vpc_cidr           = "172.20.0.0/16"
  aws_eks_worker_node_role_arn      = "arn:aws:iam::0123456789ab:role/rubrik-exocompute_eks_workernode-20240116071747815700000002"
  aws_iam_cross_account_role_arn    = "arn:aws:iam::0123456789ab:role/rubrik-crossaccount-20240116071747824700000003"
  cluster_master_role_arn           = "arn:aws:iam::0123456789ab:role/rubrik-exocompute_eks_masternode-20240116071747814700000001"
  rsc_aws_cnp_account_id            = "01234567-89ab-cdef-0123-456789abcdef"
  rsc_credentials                   = "../.creds/customer-service-account.json"
  rsc_exocompute_region             = "us-east-1"
  worker_instance_profile           = "rubrik-exocompute_eks_workernode-20240116071750336400000004"
}
```
```hcl
# Deploy Exocompute configuration with inputs provided by the polaris-aws-cloud-native module.
module "polaris-aws-cloud-native" {
  source = "rubrikinc/polaris-cloud-native/aws"

  aws_account_name = "my_aws_account_hosted_exocompute"
  aws_account_id   = "123456789012"
  aws_regions      = ["us-west-2", "us-east-1"]
  rsc_credentials  = "../.creds/customer-service-account.json"
  rsc_aws_features = [
    "CLOUD_NATIVE_PROTECTION",
    "RDS_PROTECTION",
    "CLOUD_NATIVE_S3_PROTECTION",
    "EXOCOMPUTE",
    "CLOUD_NATIVE_ARCHIVAL"
  ]
}

module "polaris-aws-cloud-native-customer-managed-exocompute" {
  source = "rubrikinc/polaris-cloud-native-customer-managed-exocompute/aws"

  aws_exocompute_public_subnet_cidr = "172.20.0.0/24"
  aws_exocompute_subnet_1_cidr      = "172.20.1.0/24"
  aws_exocompute_subnet_2_cidr      = "172.20.2.0/24"
  aws_exocompute_vpc_cidr           = "172.20.0.0/16"
  aws_eks_worker_node_role_arn      = module.polaris-aws-cloud-native.aws_eks_worker_node_role_arn
  aws_iam_cross_account_role_arn    = module.polaris-aws-cloud-native.aws_iam_cross_account_role_arn
  cluster_master_role_arn           = module.polaris-aws-cloud-native.cluster_master_role_arn
  rsc_aws_cnp_account_id            = module.polaris-aws-cloud-native.rsc_aws_cnp_account_id
  rsc_credentials                   = "../.creds/customer-service-account.json"
  rsc_exocompute_region             = "us-east-1"
  worker_instance_profile           = module.polaris-aws-cloud-native.worker_instance_profile
}
```
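Once the configuration is applied, the `cluster_connection_command` output (listed under Outputs below) can be inspected with the standard Terraform CLI:

```shell
terraform init
terraform apply

# Print the command for connecting kubectl to the Exocompute EKS cluster.
terraform output cluster_connection_command
```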
## Requirements

| Name | Version |
|---|---|
| terraform | >=1.5.6 |
| aws | ~>5.26.0 |
| polaris | =0.8.0-beta.16 |
## Providers

| Name | Version |
|---|---|
| aws | 5.26.0 |
| kubernetes | 2.25.2 |
| local | n/a |
| polaris | 0.8.0-beta.16 |
| time | 0.10.0 |
| tls | n/a |
## Resources

| Name | Type |
|---|---|
| aws_autoscaling_group.cluster | resource |
| aws_eks_cluster.rsc_exocompute | resource |
| aws_key_pair.worker | resource |
| aws_launch_template.worker | resource |
| aws_vpc_security_group_ingress_rule.default_eks_cluster_from_control_plane_sg | resource |
| aws_vpc_security_group_ingress_rule.default_eks_cluster_from_worker_node_sg | resource |
| kubernetes_config_map.aws_auth_configmap | resource |
| local_sensitive_file.worker_ssh_private_key | resource |
| polaris_aws_exocompute.customer_managed | resource |
| polaris_aws_exocompute_cluster_attachment.cluster | resource |
| time_sleep.wait_for_exocompute_registration | resource |
| time_sleep.wait_for_polaris_sync | resource |
| tls_private_key.worker | resource |
| aws_eks_cluster_auth.rsc_exocompute | data source |
| aws_iam_account_alias.current | data source |
| aws_region.current | data source |
| aws_ssm_parameter.worker_image | data source |
| polaris_deployment.current | data source |
## Modules

No modules.
## Inputs

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| autoscaling_max_size | The maximum number of concurrent workers. | number | 64 | no |
| aws_autoscaling_group_name | The name of the autoscaling group for Exocompute. | string | "Rubrik-Exocompute-Launch-Template-Customer-Managed" | no |
| aws_eks_cluster_name | EKS cluster name. | string | "Rubrik-Exocompute-Customer-Managed" | no |
| aws_eks_worker_node_role_arn | AWS EKS worker node role ARN. | string | n/a | yes |
| aws_exocompute_public_access | Enable public access to the Exocompute cluster. | bool | true | no |
| aws_exocompute_public_access_admin_cidr | Public access admin IP CIDR for the Exocompute cluster. Needed when running kubectl commands from outside of AWS. Can be left empty. | list(string) | [] | no |
| aws_iam_cross_account_role_arn | AWS IAM cross account role ARN. | string | n/a | yes |
| aws_launch_template_name | The name of the launch template for the worker nodes. | string | "Rubrik-Exocompute-Launch-Template-Customer-Managed" | no |
| aws_profile | AWS profile name. | string | n/a | yes |
| aws_security_group_control-plane_id | Security group ID for the EKS control plane. | string | n/a | yes |
| aws_security_group_worker-node_id | Security group ID for the EKS worker nodes. | string | n/a | yes |
| cluster_master_role_arn | Cluster master role ARN. | string | n/a | yes |
| kubernetes_version | Kubernetes version. | string | "1.27" | no |
| rsc_aws_cnp_account_id | Rubrik Security Cloud account ID for the AWS account hosting Exocompute. | string | n/a | yes |
| rsc_credentials | Path to the Rubrik Security Cloud service account file. | string | n/a | yes |
| rsc_exocompute_region | AWS region for the Exocompute cluster. | string | n/a | yes |
| rsc_exocompute_subnet_1_id | Subnet 1 ID for the AWS account hosting Exocompute. | string | n/a | yes |
| rsc_exocompute_subnet_2_id | Subnet 2 ID for the AWS account hosting Exocompute. | string | n/a | yes |
| worker_instance_enable_login | Enable login to worker instances. Generates a key pair and stores it in a local *.pem file. | bool | false | no |
| worker_instance_node_name | Worker instance node name. | string | "Rubrik-Exocompute-Customer-Managed-Node" | no |
| worker_instance_profile | Worker instance profile. | string | n/a | yes |
| worker_instance_type | Worker instance type. | string | "m5.2xlarge" | no |
## Outputs

| Name | Description |
|---|---|
| cluster_ca_certificate | n/a |
| cluster_connection_command | n/a |
| cluster_endpoint | n/a |
| cluster_name | n/a |
| cluster_token | n/a |
| worker_ssh_private_key | Output the SSH private key for the worker nodes when local login to nodes is enabled. |
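The cluster outputs can also be used to configure a Kubernetes provider in the calling configuration. A minimal sketch, assuming `cluster_ca_certificate` is base64-encoded as EKS typically returns it:

```hcl
provider "kubernetes" {
  host                   = module.polaris-aws-cloud-native-customer-managed-exocompute.cluster_endpoint
  cluster_ca_certificate = base64decode(module.polaris-aws-cloud-native-customer-managed-exocompute.cluster_ca_certificate)
  token                  = module.polaris-aws-cloud-native-customer-managed-exocompute.cluster_token
}
```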
When removing/destroying this module, you may encounter the following error:

```text
╷
│ Error: Get "http://localhost/api/v1/namespaces/kube-system/configmaps/aws-auth": dial tcp 127.0.0.1:80: connect: connection refused
│
│ with kubernetes_config_map.aws_auth_configmap,
│ on config_map.tf line 5, in resource "kubernetes_config_map" "aws_auth_configmap":
│ 5: resource "kubernetes_config_map" "aws_auth_configmap" {
│
╵
```
This is due to a bug in the `kubernetes_config_map` resource as described here. To work around this issue, remove the `kubernetes_config_map.aws_auth_configmap` resource from the Terraform state file using the following command:

```shell
terraform state rm kubernetes_config_map.aws_auth_configmap
```
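After the resource has been removed from state, re-run the destroy; the config map itself is deleted implicitly when the EKS cluster is removed:

```shell
terraform destroy
```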