Container Cloud on Equinix Metal with private networking: day-0 infrastructure
Follow the instructions below to apply the Terraform templates and Ansible playbooks that set up a Mirantis Container Cloud management cluster on Equinix Metal with private networking. During setup, the following resources are created:
- The required number of VLANs for each Container Cloud installation.
- The router that manages traffic between the VLANs of the management, regional, and managed clusters.
- The bootstrap (seed) node used to bootstrap a management or regional cluster.
To set up Container Cloud on Equinix Metal with private networking:

- Generate an SSH key to access the edge router and the bootstrap node:

  ssh-keygen -f ssh_key -t ecdsa -b 521

  To use an existing key, or a generated key with a different name, provide the paths to the private and public parts of the key in the ssh_private_key_path and ssh_public_key_path variables respectively.
- Optional. To reuse the Equinix key object for other deployments, create and apply an Equinix Metal project SSH key, for example, named mcc_infra_access:

  - Log in to the Equinix Metal console.
  - Select the project that you want to use for the Container Cloud deployment.
  - In the "Project Settings" tab, select "Project SSH Keys" and click "Add New Key".
  - Enter the "Key Name" and "Public Key" values and click "Add".
  - Inject ssh_key.pub as the metadata of the created SSH key.
  - Declare an additional variable in terraform.tfvars: use_existing_ssh_key_name = "mcc_infra_access".

  Note that ssh_private_key_path and ssh_public_key_path must match the metadata declared in the "Project SSH Key" object.
- Create the terraform.tfvars file with all required variables declared in vars.tf:

  - Specify the number of VLANs required for Container Cloud as vlans_amount in each metro. If deploy_seed is set to true, one of the VLANs is automatically scoped as the management/regional one, and the bootstrap node is placed on that VLAN.
  - Use the terraform plan command to output help messages for each required variable.

  Apply the templates:

  export METAL_AUTH_TOKEN="XXXXXXXX"
  terraform init
  terraform plan
  terraform apply
  terraform output -json > output.json

  Review the following files generated by terraform apply:

  - output.json - contains the full network specification that provides connectivity for all machines in scope of inter-VLAN operations, both for the edge router and the bootstrap node. By default, the bootstrap node has connectivity to all created VLANs through the edge router.
  - ansible-inventory.yaml - contains credentials to access the created nodes.
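The authoritative list and shape of the input variables is defined in vars.tf. As an illustration only, a terraform.tfvars for this setup might resemble the following sketch; the metro code, VLAN count, and the nesting of the metros variable are assumptions, not the real schema:

```hcl
# terraform.tfvars - illustrative sketch; consult vars.tf for the real schema
ssh_private_key_path = "./ssh_key"      # key generated in the first step
ssh_public_key_path  = "./ssh_key.pub"

metros = {
  am = {                 # example Equinix Metal metro code (Amsterdam)
    vlans_amount = 3     # VLANs created for this Container Cloud installation
    deploy_seed  = true  # scope one VLAN as management/regional and place the seed node on it
  }
}
```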
- Run the following Ansible playbook that reconciles the network configuration for the edge router, the bootstrap node packages, and network management:

  ansible-lint ansible/private_mcc_infra.yaml
  ansible-playbook ansible/private_mcc_infra.yaml -vvv
- Log in to the bootstrap node using the ubuntu user name and your specified SSH private key. Credentials and endpoints are located in ansible-inventory.yaml.
. -
Bootstrap Container Cloud:
- When the bootstrap completes, adjust the routers_dhcp value in the metros Terraform variable input with the list of IP addresses of the Ironic DHCP endpoints located in the management/regional cluster, and re-run the following commands:

  terraform plan
  terraform apply
  terraform output -json > output.json
  ansible-playbook ansible/private_mcc_infra.yaml

  To obtain the IP addresses from the management/regional cluster:

  kubectl --kubeconfig kubeconfig.yaml get machines -o yaml | grep privateIp
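As an illustration only (the authoritative schema is in vars.tf, and the metro code and address below are placeholders), the routers_dhcp update in terraform.tfvars might resemble:

```hcl
metros = {
  am = {                             # example Equinix Metal metro code
    vlans_amount = 3
    deploy_seed  = true
    routers_dhcp = ["192.168.0.10"]  # placeholder; use the privateIp values returned by kubectl
  }
}
```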
- Optional. Delete the bootstrap node after a successful Container Cloud management/regional bootstrap. Keep vlans_amount as is but set deploy_seed to false for the related metro in terraform.tfvars, then re-run:

  terraform plan
  terraform apply
  terraform output -json > output.json
- Optional. If you delete the Container Cloud cluster, destroy the Terraform-managed infrastructure:

  terraform destroy