cma-ssh

CMA SSH Helper API

What is this?

cma-ssh is a Kubernetes operator that manages the lifecycle of Kubernetes clusters (CnctCluster resources) and machines (CnctMachine resources).
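
For example, once the operator is installed, the resources it manages can be inspected like any other Kubernetes objects:

# List managed clusters and machines across all namespaces.
kubectl get cnctcluster --all-namespaces
kubectl get cnctmachine --all-namespaces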

CMS Development

Building cma-ssh

There are a few steps you should run to get your development environment set up.

make go1.12.4

Now, whenever you want to build, run:

go1.12.4 build -o cma-ssh cmd/cma-ssh/main.go

If you need to regenerate files, you can do so with the following commands.

# the first time you generate files
make generate
# after the first time you can use either
make generate
# or
go1.12.4 generate ./...

If you want to test a clean build (no dependencies installed), or you simply love long build times, run:

make clean-test

Running cma-ssh

The Kubernetes cluster on which cma-ssh is installed must have network access to a MAAS server. Within the CNCT lab, this means you must be in the Seattle office or logged onto the VPN. Additionally, you will need to generate an API key using the MAAS GUI.
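
Once you have an API key, one way to sanity-check access to the MAAS server is with the MAAS CLI; the profile name, server address, and key below are placeholders for your own values:

# <profile>, <maas-server>, and <api-key> are placeholders.
maas login <profile> http://<maas-server>:5240/MAAS/api/2.0/ <api-key>
maas <profile> machines read | head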

To test cma-ssh you can use kind and helm. For example:

kind create cluster
export KUBECONFIG="$(kind get kubeconfig-path --name="1")"

kubectl create clusterrolebinding superpowers --clusterrole=cluster-admin --user=system:serviceaccount:kube-system:default
kubectl create rolebinding superpowers --clusterrole=cluster-admin --user=system:serviceaccount:kube-system:default

helm init

# Set the `maas.apiKey` value for your user.
vi deployments/helm/cma-ssh/values.yaml

helm install --name cma-ssh deployments/helm/cma-ssh/
kubectl get pods --watch

Creating Kubernetes clusters with cma-ssh using kubectl

Either kubectl or the Swagger UI REST interface can be used to create Kubernetes clusters with cma-ssh. This section will focus on using kubectl.

A single cluster definition consists of three kinds of Kubernetes Custom Resources (defined by CRDs):

  • a CnctCluster, which sets the Kubernetes version and cluster name,
  • one or more CnctMachine resources for the controlplane node(s), and
  • a CnctMachineSet for the worker node pool(s).

Namespace per cluster

The resources for a single cluster definition must be in the same namespace. You cannot define two clusters in the same namespace; each cluster requires its own namespace.

The code assumes the namespace matches the cluster name.

Example using samples for a cluster named cluster

Create a namespace for the cluster definition resources (it must match the cluster name):

kubectl create namespace cluster

The cluster manifest defines the Kubernetes version and cluster name.

The machine manifest defines the controlplane node(s).

The machineset manifest defines the worker node pool(s).

Note: The controlplane nodes should not have labels that match the machineset selector labels.

Copy the resource samples to your cluster dir:

mkdir ~/cluster
cp samples/cluster/cluster_v1alpha1_cluster.yaml ~/cluster/cluster.yaml
cp samples/cluster/cluster_v1alpha1_machine.yaml ~/cluster/machine.yaml
cp samples/cluster/cluster_v1alpha1_machineset.yaml ~/cluster/machineset.yaml

Using kubectl, apply the cluster, machine, and machineset manifests to create a Kubernetes cluster:

kubectl apply -f ~/cluster/cluster.yaml
kubectl apply -f ~/cluster/machine.yaml
kubectl apply -f ~/cluster/machineset.yaml
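
After applying the manifests, you can watch the machines get provisioned:

kubectl get cnctcluster,cnctmachine -n cluster
kubectl get cnctmachine -n cluster --watch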

How instanceType is mapped to MaaS machine tags

MaaS tags can be used to:

  • select hardware reserved for use by cma-ssh,
  • select hardware for masters or workers, and
  • select hardware for specific workloads (e.g. those requiring GPUs).

Define MaaS tags on MaaS machines before using cma-ssh

User-defined MaaS tags should be assigned to MaaS machines using the MaaS CLI or the MaaS UI before running cma-ssh. The machine spec's instanceType field maps a single instanceType string to a MaaS tag. If no MaaS tags have been defined, the instanceType field can be passed in as an empty string so that any MaaS machine will be chosen.
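
For example, a tag could be created and attached to a MaaS machine with the MaaS CLI before installing cma-ssh; the profile name and system ID below are placeholders, and "gpu-worker" is just an example tag name:

# <profile> and <system-id> are placeholders; "gpu-worker" is an example tag.
maas <profile> tags create name=gpu-worker comment='reserved for cma-ssh GPU workers'
maas <profile> tag update-nodes gpu-worker add=<system-id>

A machine spec with instanceType set to gpu-worker would then only match MaaS machines carrying that tag.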

Retrieving the kubeconfig for the cluster

A secret named cluster-private-key is defined in the namespace of the cluster.

To retrieve the kubeconfig:

# If you're using Linux `base64` then use `-d` not `-D`
kubectl get secret cluster-private-key -ojson -n <namespace> | \
  jq -r '.data["kubernetes.kubeconfig"]' | \
  base64 -D > kubeconfig-<clustername>

To use the kubeconfig:

kubectl get nodes --kubeconfig kubeconfig-<clustername>

Deleting the cluster or individual machines

To delete the cluster:

kubectl delete cnctcluster <cluster name> -n <namespace>

To delete a single machine in the cluster:

kubectl delete cnctmachine <machine name> -n <namespace>

Deprecated

The instructions below are deprecated as we move towards a cloud-init approach to configuration instead of ssh.

Overview

The cma-ssh repo provides a helper API for cluster-manager-api, using SSH to interact with virtual machines for Kubernetes cluster create, upgrade, add-node, and delete operations.

Getting started

See Protocol Documentation

Requirements

  • Kubernetes 1.10+

Deployment

The default way to deploy cma-ssh is with the provided Helm chart located in the deployments/helm/cma-ssh directory.

install via helm

  1. Locate the private IP of the k8s node that cma-ssh will be deployed on; this is used as install.bootstrapIp (see the snippet after this list).
  2. Locate the IP of the nginx proxy used by the airgap environment; this is used as install.airgapProxyIp.
  3. Install the Helm chart, passing in the above values:
    helm install deployments/helm/cma-ssh --name cma-ssh --set install.bootstrapIp="ip from step 1" --set install.airgapProxyIp="ip from step 2"
    Alternatively, you can set these IPs in values.yaml.
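
To find the node private IPs for step 1, any standard kubectl query of the node addresses works; for example:

# Shows each node's INTERNAL-IP column.
kubectl get nodes -o wide
# Or print only the internal IPs.
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'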

Build

one-time setup of tools

  • macOS: make -f build/Makefile install-tools-darwin

  • Linux: make -f build/Makefile install-tools-linux

To generate code and binary:

  • macOS: make -f build/Makefile darwin

  • Linux: make -f build/Makefile linux

CRDs are generated in ./crd and RBAC manifests are generated in ./rbac.

The Helm chart under ./deployments/helm/cma-ssh is updated with the generated CRDs and RBAC.

Testing with Azure

Requirements:

Setup steps:

  1. Create the SSH key pair (RSA, 2048-bit is required) with no password:

    ssh-keygen -t rsa -b 2048 -f id_rsa
  2. Create the args.yml file:

    touch .opspec/args.yml

    add inputs:

      subscriptionId: <azure subscription id>
      loginId: <azure service principal id (must have permission to edit user permissions in subscription)>
      loginSecret: <azure service principal secret>
      loginTenantId: <azure active directory id>
      sshKeyValue: <path to public key from step 1>
      sshPrivateKey: <path to private key from step 1>
      clusterAccountId: <azure service principal for in-cluster resources (ex: load balancer creation)>
      clusterAccountSecret: <azure service principal secret>
      rootPassword: <root password for client vm>
      name: <prefix name to give to all resources> (ex: zaptest01)
    
  3. From the root directory of the repo, run:

    opctl run build-azure

    The first run takes 10 to 15 minutes. This op can be run multiple times.

  4. To get the kubeconfig for the central cluster:

    • Log in to Azure via the CLI:
      az login
    • Get the kubeconfig from the AKS cluster:
      az aks get-credentials -n <name> -g <name>-group
      (replace <name> with the name from args.yml in step 2)
  5. Install the bootstrap and connect to the proxy:

    helm install deployments/helm/cma-ssh --name cma-ssh \
    --set install.operator=false \
    --set images.bootstrap.tag=0.1.17-local \
    --set install.bootstrapIp=10.240.0.6 \
    --set install.airgapProxyIp=10.240.0.7
    • Check the latest bootstrap tag at quay.io.
    • bootstrapIp is any node's private IP (most likely 10.240.0.4 through .6).
    • To get the airgapProxyIp, run:
    az vm show -g <name>-group -n <name>-proxy -d --query publicIps --out tsv
  6. Start the operator locally:

    CMA_BOOTSTRAP_IP=10.240.0.6 CMA_NEXUS_PROXY_IP=10.240.0.7 ./cma-ssh

Creating additional Azure VMs for testing clusters:

  • To create additional VMs:
opctl run create-vm
  • This will create a new VM and provide its name and public IP.

  • TODO: return the private IP also.

Cleanup Azure:

  • TODO: create an azure-delete op.

  • Currently, this requires manually deleting the resources or the resource group in the Azure portal or CLI (see the example below).

  • The resource group will be named <name>-group, using the name from the args.yml file.
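
For example, the whole resource group (and everything in it) can be removed with the Azure CLI; <name> is the prefix from args.yml:

# Deletes the entire resource group without prompting; runs asynchronously.
az group delete --name <name>-group --yes --no-wait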