- Install & run Docker.
- Run a local docker registry.

  We want the scenario to run without entering credentials, so use a local registry instead of Docker Hub. Based on the docs/script at https://kind.sigs.k8s.io/docs/user/local-registry/.

  Is it running?

  ```shell
  docker ps --filter name=local-registry
  ```

  No? Run the local registry:

  ```shell
  docker run -d --restart always -p "127.0.0.1:5000:5000" --name local-registry registry:2
  ```
- Install kind.
- Create the cluster:

  ```shell
  kind create cluster --name cdktf-app --config kind-config.yaml
  ```
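  For reference, a minimal `kind-config.yaml` along the lines of the kind local-registry docs might look like the sketch below. The `local-registry` endpoint name and the 30001 port mapping are assumptions drawn from the other steps in these notes; the repo's actual file may differ.

  ```yaml
  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  # Tell containerd inside the kind node to resolve localhost:5000 image
  # pulls against the local-registry container on the shared Docker network.
  containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
      endpoint = ["http://local-registry:5000"]
  nodes:
  - role: control-plane
    # Expose the NodePort used later so localhost:30001 works from the host.
    extraPortMappings:
    - containerPort: 30001
      hostPort: 30001
  ```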
- Save the cluster config.

  FIXME: Or just rely on the default kubeconfig location. Will users care?

  ```shell
  kubectl config view --raw --context kind-cdktf-app > kubeconfig.yaml
  ```
- Attach the local registry to kind's network:

  ```shell
  docker network connect kind local-registry
  ```
- Add a ConfigMap for the registry:

  ```shell
  kubectl apply -f local-registry-configmap.yaml --kubeconfig kubeconfig.yaml
  ```
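  A sketch of what `local-registry-configmap.yaml` might contain, following the `local-registry-hosting` convention from the kind local-registry docs (assumed; check the repo's actual file):

  ```yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: local-registry-hosting
    namespace: kube-public
  data:
    # Advertises the local registry to tooling running in the cluster.
    localRegistryHosting.v1: |
      host: "localhost:5000"
      help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
  ```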
- Install the CDKTF CLI, library, and constructs (a peer dependency):

  ```shell
  npm install -g cdktf-cli@latest cdktf@latest constructs@^10.0.0
  ```

  Note: The final version of the app is in `app-final`, or follow these steps:
- Create & go into the app directory:

  ```shell
  mkdir app
  cd app
  ```
- Initialize the CDKTF app (starting from the empty `app` directory):

  ```shell
  cdktf init --template=typescript \
    --project-name=learn-terraform-cdktf-applications \
    --project-description="Learn how to develop CDKTF applications" \
    --local
  ```
- Install the Kubernetes provider:

  ```shell
  npm install @cdktf/provider-kubernetes
  ```
- Add a construct representing a Kubernetes Deployment.

  First, convert some example Terraform config (or read the cdktf/kubernetes provider docs):

  ```shell
  cat ../k8s_deployment.tf | cdktf convert
  ```

  Note: The output needs to be tweaked before incorporating it into the code:

  - Replace `// define resources here` in `app/main.ts` with `new kubernetes.Deployment(this, name + "-deployment", { ... });`.
  - Change `"myapp"` to `name + "-deployment"`.
  - Add `import * as kubernetes from "@cdktf/provider-kubernetes";` to the imports near the top of the file.
  - Add `import * as path from 'path';` to the imports near the top of the file.
  - Remove the extra `[]` from `metadata.labels`, `spec.selector.matchLabels`, and `spec.template.metadata.labels` in the `Deployment` config.
    - Syntax highlighting should show where the errors are.
    - FIXME: Is there a way to fix/work around this?
  - Add the k8s provider (above the `Deployment` block):

    ```typescript
    new kubernetes.KubernetesProvider(this, "kind", {
      configPath: path.join(__dirname, '../kubeconfig.yaml'),
    });
    ```
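  After those tweaks, `app/main.ts` might look roughly like the sketch below. This is an assumption, not the tutorial's exact file: the stack/app boilerplate comes from the `cdktf init` scaffold, and attribute names depend on the provider version you installed.

  ```typescript
  import { Construct } from "constructs";
  import { App, TerraformStack } from "cdktf";
  import * as path from "path";
  import * as kubernetes from "@cdktf/provider-kubernetes";

  class MyStack extends TerraformStack {
    constructor(scope: Construct, name: string) {
      super(scope, name);

      // Point the Kubernetes provider at the kind cluster's kubeconfig.
      new kubernetes.KubernetesProvider(this, "kind", {
        configPath: path.join(__dirname, "../kubeconfig.yaml"),
      });

      // The converted Deployment, with the extra [] wrappers removed.
      new kubernetes.Deployment(this, name + "-deployment", {
        metadata: [{ labels: { app: "myapp" }, name: name + "-deployment" }],
        spec: [
          {
            replicas: "1",
            selector: [{ matchLabels: { app: "myapp" } }],
            template: [
              {
                metadata: [{ labels: { app: "myapp" } }],
                spec: [{ container: [{ image: "nginx:latest", name: "myapp" }] }],
              },
            ],
          },
        ],
      });
    }
  }

  const app = new App();
  new MyStack(app, "app");
  app.synth();
  ```
  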
- Synth:

  ```shell
  cdktf synth
  ```

  FIXME: `cdktf deploy` always says it's synthesizing. When do we need to run `cdktf synth` first vs. just `cdktf deploy`?

- Deploy:

  ```shell
  cdktf deploy
  ```

- Show the pod:

  ```shell
  kubectl get pods --kubeconfig ../kubeconfig.yaml
  ```
- Ask for four replicas in `app/main.ts`, then redeploy:

  ```typescript
  // replicas: "1",
  replicas: "4",
  ```

  Deploy (will synth as needed):

  ```shell
  cdktf deploy
  ```

  Check (should show 4 pods now, the three new ones younger than the first):

  ```shell
  kubectl get pods --kubeconfig ../kubeconfig.yaml
  ```
- Convert the "raw" Deployment to a construct with a nicer interface.

  ```shell
  mkdir constructs
  ```

  Add a new file, `constructs/kubernetes-web-app.ts`:

  ```typescript
  import { Construct } from "constructs";
  import * as kubernetes from "@cdktf/provider-kubernetes";

  export interface KubernetesWebAppDeploymentConfig {
    readonly image: string;
    readonly replicas: string;
    readonly appName: string;
    readonly environment: string;
  }

  export class KubernetesWebAppDeployment extends Construct {
    public readonly resource: kubernetes.Deployment;

    constructor(scope: Construct, name: string, config: KubernetesWebAppDeploymentConfig) {
      super(scope, name);

      this.resource = new kubernetes.Deployment(this, name, {
        metadata: [
          {
            labels: {
              app: config.appName,
              environment: config.environment,
            },
            name: config.appName,
          },
        ],
        spec: [
          {
            replicas: config.replicas,
            selector: [
              {
                matchLabels: {
                  environment: config.environment,
                  app: config.appName,
                },
              },
            ],
            template: [
              {
                metadata: [
                  {
                    labels: {
                      app: config.appName,
                      environment: config.environment,
                    },
                  },
                ],
                spec: [
                  {
                    container: [
                      {
                        image: config.image,
                        name: config.appName,
                      },
                    ],
                  },
                ],
              },
            ],
          },
        ],
      });
    }
  }
  ```

  And in `constructs/index.ts`:

  ```typescript
  export * from './kubernetes-web-app';
  ```

  Back in `app/main.ts`, import the construct near the top of the file:

  ```typescript
  import { KubernetesWebAppDeployment } from './constructs';
  ```

  And replace `new kubernetes.Deployment(this, name + "-deployment", { ... });` with the construct:

  ```typescript
  new KubernetesWebAppDeployment(this, `${name}-deployment`, {
    image: "nginx:latest",
    replicas: "2",
    appName: "myapp",
    environment: "dev",
  });
  ```

  Deploy:

  ```shell
  cdktf deploy
  ```
- Add a test.

  First, configure testing in a new file, `app/jest.setup.js`:

  ```javascript
  const cdktf = require("cdktf");
  cdktf.Testing.setupJest();
  ```

  Create a new file, `__tests__/kubernetes-web-app-test.ts`:

  ```typescript
  import "cdktf/lib/testing/adapters/jest";
  import { Testing } from "cdktf";
  import * as kubernetes from "../.gen/providers/kubernetes";
  import { KubernetesWebAppDeployment } from "../constructs";

  describe("Our CDKTF Constructs", () => {
    describe("KubernetesWebAppDeployment", () => {
      it("should contain a deployment resource", () => {
        expect(
          Testing.synthScope((scope) => {
            new KubernetesWebAppDeployment(scope, "myapp-frontend-dev", {
              image: "nginx:latest",
              replicas: "4",
              appName: "myapp",
              environment: "dev",
            });
          })
        ).toHaveResource(kubernetes.Deployment);
      });
    });
  });
  ```

  Run the tests from the `app/` directory:

  ```shell
  npm run test
  ```

  Now, watch the tests:

  ```shell
  npm run test:watch
  ```

  (Open a new tab to run further commands.)
- Now `nginx:latest` is running in your deployment, but it isn't accessible. Add a Kubernetes `Service` configured as a NodePort to make it available on port 30001, in `app/constructs/kubernetes-web-app.ts`.

  Add an interface for the service:

  ```typescript
  export interface KubernetesNodePortServiceConfig {
    readonly port: number;
    readonly appName: string;
    readonly environment: string;
  }
  ```

  Define the new construct class:

  ```typescript
  export class KubernetesNodePortService extends Construct {
    public readonly resource: kubernetes.Service;

    constructor(scope: Construct, name: string, config: KubernetesNodePortServiceConfig) {
      super(scope, name);

      this.resource = new kubernetes.Service(this, name, {
        metadata: [
          {
            name: config.appName,
          },
        ],
        spec: [
          {
            type: "NodePort",
            port: [
              {
                port: 80,
                targetPort: "80",
                nodePort: config.port,
                protocol: "TCP",
              },
            ],
            selector: {
              app: config.appName,
            },
          },
        ],
      });
    }
  }
  ```
  Now add a test to `app/__tests__/kubernetes-web-app-test.ts`. Update the imports to include the service:

  ```typescript
  import { KubernetesWebAppDeployment, KubernetesNodePortService } from "../constructs";
  ```

  And the test itself:

  ```typescript
  describe("KubernetesNodePortService", () => {
    it("should contain a Service resource", () => {
      expect(
        Testing.synthScope((scope) => {
          new KubernetesNodePortService(scope, "myapp-frontend-dev", {
            appName: "myapp",
            environment: "dev",
            port: 30001,
          });
        })
      ).toHaveResource(kubernetes.Service);
    });
  });
  ```

  FIXME: Ideas for other things/ways to test?

  Check that the tests still pass in the `npm run test:watch` command. (2 tests passed.)

  Add the service to the `app/main.ts` imports:

  ```typescript
  import { KubernetesWebAppDeployment, KubernetesNodePortService } from './constructs';
  ```

  And use it right after `KubernetesWebAppDeployment`:

  ```typescript
  new KubernetesNodePortService(this, `${name}-service`, {
    port: 30001,
    appName: "myapp",
    environment: "dev",
  });
  ```

  Deploy:

  ```shell
  cdktf deploy
  ```

  Visit `localhost:30001` to see the nginx hello world page. It might take a minute or two before it's available.
- Refactor the constructs into a `SimpleKubernetesWebApp` that includes both components. In `app/constructs/kubernetes-web-app.ts`:

  ```typescript
  export class SimpleKubernetesWebApp extends Construct {
    public readonly deployment: KubernetesWebAppDeployment;
    public readonly service: KubernetesNodePortService;
    public readonly config: KubernetesWebAppDeploymentConfig & KubernetesNodePortServiceConfig;

    constructor(
      scope: Construct,
      name: string,
      config: KubernetesWebAppDeploymentConfig & KubernetesNodePortServiceConfig
    ) {
      super(scope, name);

      this.config = config;
      this.deployment = new KubernetesWebAppDeployment(this, `${name}-deployment`, {
        image: config.image,
        replicas: config.replicas,
        appName: config.appName,
        environment: config.environment,
      });
      this.service = new KubernetesNodePortService(this, `${name}-service`, {
        port: config.port,
        appName: config.appName,
        environment: config.environment,
      });
    }
  }
  ```
- Add a test for `SimpleKubernetesWebApp`.

  Add the import:

  ```typescript
  import { KubernetesWebAppDeployment, KubernetesNodePortService, SimpleKubernetesWebApp } from "../constructs";
  ```

  And the tests:

  ```typescript
  describe("SimpleKubernetesWebApp", () => {
    it("should contain a Service resource", () => {
      expect(
        Testing.synthScope((scope) => {
          new SimpleKubernetesWebApp(scope, "myapp-frontend-dev", {
            image: "nginx:latest",
            replicas: "4",
            appName: "myapp",
            environment: "dev",
            port: 30001,
          });
        })
      ).toHaveResource(kubernetes.Service);
    });

    it("should contain a Deployment resource", () => {
      expect(
        Testing.synthScope((scope) => {
          new SimpleKubernetesWebApp(scope, "myapp-frontend-dev", {
            image: "nginx:latest",
            replicas: "4",
            appName: "myapp",
            environment: "dev",
            port: 30001,
          });
        })
      ).toHaveResource(kubernetes.Deployment);
    });
  });
  ```

  Now update `app/main.ts` to use the new construct instead of the separate ones:

  ```typescript
  import {
    // KubernetesWebAppDeployment,
    // KubernetesNodePortService,
    SimpleKubernetesWebApp,
  } from './constructs';
  ```

  And replace the old constructs with the new one:

  ```typescript
  new SimpleKubernetesWebApp(this, `${name}-frontend`, {
    image: "nginx:latest",
    replicas: "2",
    port: 30001,
    appName: "myapp-frontend",
    environment: "dev",
  });
  ```

  Note: Bug? Unless we `cdktf destroy` first, we get the following error on `cdktf deploy`:

  ```
  [2021-10-04T13:08:41.453] [ERROR] default - ╷
  │ Error: Service "myapp-frontend" is invalid: spec.ports[0].nodePort: Invalid value: 30001: provided port is already allocated
  │
  │   with kubernetes_service.app-frontend_app-frontend-service_C4A54401,
  │   on cdk.tf.json line 108, in resource.kubernetes_service.app-frontend_app-frontend-service_C4A54401:
  │  108: }
  ⠹ Deploying Stack: app
  ```

  FIXME: If we don't get a fix for the above, the workaround is to update the port to 30002.

  Watch (maybe start this earlier):

  ```shell
  cdktf watch --auto-approve
  ```

  Visit `localhost:30001` to see the nginx page. (Or `:30002`.)

  Add an output to `constructs/kubernetes-web-app.ts`. Add near the top of the file:

  ```typescript
  import { TerraformOutput } from "cdktf";
  ```

  Add inside `SimpleKubernetesWebApp`'s constructor:

  ```typescript
  new TerraformOutput(this, `${name}-frontend-url`, {
    value: `http://localhost:${config.port}`,
  });
  ```

  FIXME: I haven't found a way to get cdktf to print the output more than once, i.e. the first time the config is deployed. It isn't output at all with `cdktf watch`, afaict. :(
- Deploy a custom image.

  Visit the `frontend` directory:

  ```shell
  cd ../frontend
  ```

  Build:

  ```shell
  docker build . -t nocorp-frontend
  ```

  Tag:

  ```shell
  docker tag nocorp-frontend:latest localhost:5000/nocorp-frontend:latest
  ```

  Push (to the local registry):

  ```shell
  docker push localhost:5000/nocorp-frontend:latest
  ```

  FIXME: Automate that^^?
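  One possible automation of the build/tag/push steps, as a sketch: a `push_image` helper function (hypothetical name) that takes the image name and build context. The registry address and image names come from the commands above.

  ```shell
  # Build an image, tag it for the local registry, and push it.
  # Usage: push_image <image-name> [context-dir]
  push_image() {
    image="${1:?image name required}"
    context="${2:-.}"
    registry="localhost:5000"

    docker build "$context" -t "$image" &&
    docker tag "$image:latest" "$registry/$image:latest" &&
    docker push "$registry/$image:latest"
  }
  ```

  For example: `push_image nocorp-frontend ../frontend`.
  
  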
  Use the image in `app/main.ts`:

  ```typescript
  new SimpleKubernetesWebApp(this, `${name}-frontend`, {
    image: "localhost:5000/nocorp-frontend:latest",
    // image: "nginx:latest",
    replicas: "2",
    port: 30001,
    appName: "myapp-frontend",
    environment: "dev",
  });
  ```

  Back in the `app` directory:

  ```shell
  cd ../app
  ```

  Deploy:

  ```shell
  cdktf deploy
  ```

  (http://localhost:30001 - the service might take a few seconds to deploy, but should now be terranomo.)
- Build & deploy the backend.

  Visit the `backend` directory:

  ```shell
  cd ../backend
  ```

  Build:

  ```shell
  docker build . -t nocorp-backend
  ```

  Tag:

  ```shell
  docker tag nocorp-backend:latest localhost:5000/nocorp-backend:latest
  ```

  Push (to the local registry):

  ```shell
  docker push localhost:5000/nocorp-backend:latest
  ```

  And add a new "app":

  ```typescript
  new SimpleKubernetesWebApp(this, `${name}-backend`, {
    image: "localhost:5000/nocorp-backend:latest",
    replicas: "1",
    port: 30002,
    appName: "myapp-backend",
    environment: "dev",
  });
  ```
- TODO:
  - Deploy frontend/backend that talk to each other
  - Fix the messy naming strategy
  - Notice when a new image is deployed, and redeploy the app
    - Or is this something we should just enable in K8s?
  - Deploy another "stack"
  - Deploy the app on a public cloud