Oracle OKE Terraform

This repository contains the Terraform scripts to bootstrap a Kubernetes Cluster in the Oracle Cloud Infrastructure Free Tier with Oracle Kubernetes Engine (OKE).

Requirements

| Name       | Version   |
|------------|-----------|
| kubernetes | >= 1.13.0 |
| oci        | >= 4.96.0 |

Providers

| Name | Version |
|------|---------|
| oci  | 5.3.0   |

Modules

| Name | Source                           | Version |
|------|----------------------------------|---------|
| oke  | oracle-terraform-modules/oke/oci | n/a     |

Resources

| Name | Type |
|------|------|
| oci_identity_compartment.k8s | resource |
| oci_containerengine_cluster_kube_config.kube_config | data source |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| oke_version | Version of the OKE cluster | string | "v1.26.2" | no |
| region | Home region of the tenancy | string | "sa-saopaulo-1" | no |
| region_oke | Region of the OKE cluster | string | "sa-saopaulo-1" | no |
| tenancy_ocid | OCID of the root compartment (tenancy) | string | n/a | yes |
| user_ocid | OCID of the user | string | n/a | yes |
| user_rsa_fingerprint | Fingerprint of the RSA public key | string | n/a | yes |
| user_rsa_path | Path to the RSA private key | string | n/a | yes |

Outputs

| Name | Description |
|------|-------------|
| bastion_public_ip | Public IP address of the bastion host |
| bastion_service_instance_ocid | OCID of the Bastion service instance |
| cluster_ocid | OCID of the Kubernetes cluster |
| ig_route_ocid | OCID of the route table for the VCN Internet Gateway |
| internal_lb_nsg_ocid | OCID of the default NSG that can be associated with the internal load balancer |
| kubeconfig | Convenience command to set the KUBECONFIG environment variable before running kubectl locally |
| nat_route_ocid | OCID of the route table to the NAT Gateway attached to the VCN |
| nodepool_ocids | Map of node pool names and OCIDs |
| operator_private_ip | Private IP address of the operator host |
| public_lb_nsg_ocid | OCID of the default NSG that can be associated with the public load balancer |
| ssh_to_bastion | Convenience command to SSH to the bastion host |
| ssh_to_operator | Convenience command to SSH to the operator host |
| subnet_ocids | Map of subnet OCIDs (worker, int_lb, pub_lb) used by OKE |
| vcn_ocid | OCID of the VCN where OKE is created; use this OCID to add more resources to the VCN |

Oracle Cloud Infrastructure (OCI) Access

To perform operations against OCI, we need to generate an RSA key pair for API signing and upload the public key to our account.

This can be done with the following steps:

  1. Create an .oci directory in your home folder:
$ mkdir ~/.oci
  2. Generate a 2048-bit private key in PEM format:
$ openssl genrsa -out ~/.oci/oci_api_key.pem 2048
  3. Change permissions so that only you can read and write the private key file:
$ chmod 600 ~/.oci/oci_api_key.pem
  4. Generate the public key and copy it to the clipboard (pbcopy is macOS-only; on Linux use xclip or similar):
$ openssl rsa -pubout -in ~/.oci/oci_api_key.pem -out ~/.oci/oci_api_key_public.pem
$ cat ~/.oci/oci_api_key_public.pem | pbcopy
  5. Add the public key to your OCI user account from User Settings > API Keys
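OCI also needs the fingerprint of this key (the user_rsa_fingerprint input above). It can be computed locally before uploading; a sketch assuming the key path used in the steps above:

```shell
# MD5 fingerprint of the DER-encoded public key,
# in the colon-separated form OCI shows under API Keys
openssl rsa -pubout -outform DER -in ~/.oci/oci_api_key.pem | openssl md5 -c
```

The colon-separated value in the output should match the fingerprint OCI displays once the key is uploaded.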

Oracle Cloud Infrastructure (OCI) CLI

We need a correctly configured OCI CLI to authenticate against the Kubernetes cluster we are about to create, as the generated kubeconfig uses the CLI's Kubernetes login plugin to obtain a JWT for access.

Instructions on how to install the OCI CLI in different environments can be found in the official OCI documentation.

Once we have installed the tool, we need to configure it to use the previously generated RSA key to interact with our OCI tenancy. To do that, create (or modify, if it was created automatically) the file ~/.oci/config with the following keys:

[DEFAULT]
tenancy=<tenancy_ocid>
user=<user_ocid>
region=<region>
key_file=<user_rsa_path>
fingerprint=<user_rsa_fingerprint>

How to retrieve these values is explained in the Inputs section.

Kubernetes Command-Line Tool

To interact with our K8s cluster through the Kubernetes API, we need a Kubernetes CLI; it is up to you whether to install the official CLI from Kubernetes (kubectl) or another tool such as K9s (which I personally use).

  • How to install kubectl in different environments is covered in the official Kubernetes documentation
  • How to install k9s in different environments is covered in the k9s documentation

Usage

First, create a file named env.tfvars in the root directory of the Terraform scripts, overriding the variables defined in the Inputs section.
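For example, a minimal env.tfvars could look like the following (the <...> values are placeholders for your own OCIDs, matching the Inputs table):

```hcl
tenancy_ocid         = "<tenancy_ocid>"
user_ocid            = "<user_ocid>"
user_rsa_fingerprint = "<user_rsa_fingerprint>"
user_rsa_path        = "~/.oci/oci_api_key.pem"

# Optional overrides; defaults are listed in the Inputs section:
# oke_version = "v1.26.2"
# region      = "sa-saopaulo-1"
# region_oke  = "sa-saopaulo-1"
```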

Then, to create the cluster, run the following:

$ terraform apply -var-file="env.tfvars"

Check that everything is correct, and type yes at the prompt. After a few minutes, the cluster will be ready and a kubeconfig will be placed in the generated folder.

To start using the cluster, point the KUBECONFIG environment variable at the generated file and use your preferred Kubernetes CLI tool.

$ export KUBECONFIG=$(pwd)/generated/kubeconfig
$ k9s