Serverless and Pipelines
Pre-requisites
Minikube
minikube start --memory=8192 --cpus=6 --disk-size=50G \
--kubernetes-version=v1.12.0 \
--vm-driver=hyperkit \
--extra-config=apiserver.enable-admission-plugins="LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook"
Important
This setup and example work well with minikube v1.1.1; the registry hack below does not work as expected with v1.2.0.
Enable registry addon
Important
Only for minikube.
minikube addons enable registry
Wait for the registry pod to be up
kubectl -n kube-system get pods -w
Note
You can terminate the command with CTRL+c.
Clone the minikube-helpers repo
git clone https://github.com/kameshsampath/minikube-helpers
cd minikube-helpers
Configure registry aliases
To be able to push and pull images from the internal registry, we need to add the registry aliases to the minikube node's hosts file and make them resolvable via CoreDNS.
Add entries to host file
All the registry aliases are configured via the ConfigMap registry-aliases-config.yaml, which we need to create in the kube-system namespace:
cd registry
kubectl apply -n kube-system -f registry-aliases-config.yaml
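For reference, the ConfigMap maps alias hostnames to the internal registry. A minimal sketch of what registry-aliases-config.yaml might look like (the exact keys in the repo may differ):
apiVersion: v1
kind: ConfigMap
metadata:
  name: registry-aliases
  namespace: kube-system
data:
  # hostnames that should resolve to the internal registry
  registryAliases: >-
    dev.local
    example.com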
Once the ConfigMap has been created, we can deploy the DaemonSet node-etc-hosts-update.yaml, which adds entries to the minikube node's /etc/hosts file with all aliases pointing to the internal registry's CLUSTER-IP:
kubectl apply -n kube-system -f node-etc-hosts-update.yaml
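Under the hood, the DaemonSet runs a pod on each node that appends the alias entries to the node's hosts file. A simplified sketch of the idea, not the exact manifest from the repo:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-etc-hosts-update
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-etc-hosts-update
  template:
    metadata:
      labels:
        app: node-etc-hosts-update
    spec:
      containers:
      - name: update-hosts
        image: busybox
        # the real DaemonSet resolves the registry Service's CLUSTER-IP;
        # the address here is illustrative (taken from the sample output below)
        command:
        - sh
        - -c
        - echo "10.111.151.121 dev.local example.com" >> /host/etc/hosts && tail -f /dev/null
        volumeMounts:
        - name: etc
          mountPath: /host/etc
      volumes:
      - name: etc
        hostPath:
          path: /etc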
Note
You can check the minikube vm's /etc/hosts file for the registry aliases entries:
$ minikube ssh -- sudo cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 demo
10.111.151.121 dev.local
10.111.151.121 example.com
The above output shows that the daemonset has added the registryAliases
from the ConfigMap pointing to the internal registry’s CLUSTER-IP.
Update coredns
Update the cluster's CoreDNS configuration to add rewrite rules for the aliases:
./patch-coredns.sh
After a successful patch, the coredns ConfigMap will look like:
apiVersion: v1
data:
  Corefile: |-
    .:53 {
        errors
        health
        rewrite name dev.local registry.kube-system.svc.cluster.local
        rewrite name example.com registry.kube-system.svc.cluster.local
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  name: coredns
To verify it, run the following command:
kubectl get cm -n kube-system coredns -o yaml
Once the patch has been applied successfully, you can push and pull images from the registry using the aliases dev.local and example.com.
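For example, a workload can then reference an image in the internal registry by alias (the image name here is illustrative):
containers:
- name: greeter
  # dev.local resolves to the internal registry inside the cluster
  image: dev.local/demos/greeter:latest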
Install Tekton Pipelines
kubectl apply --filename https://storage.googleapis.com/tekton-releases/latest/release.yaml
Wait for the Tekton Pipelines Pods to come up
kubectl get pods --namespace tekton-pipelines -w
Note
You can terminate the command with CTRL+c.
Install Knative Serving
Note
This section is optional; it is needed only if you wish to deploy Knative services.
curl -L https://raw.githubusercontent.com/knative/serving/release-0.6/third_party/istio-1.1.3/istio-lean.yaml \
| sed 's/LoadBalancer/NodePort/' \
| kubectl apply --filename -
Wait for the Istio Pods to come up
kubectl get pods --namespace istio-system -w
Note
You can terminate the command with CTRL+c.
Install the Knative Serving CRDs first, then the remaining components (excluding the cert-manager certificates):
kubectl apply --selector knative.dev/crd-install=true \
--filename https://github.com/knative/serving/releases/download/v0.6.0/serving.yaml
kubectl apply --filename https://github.com/knative/serving/releases/download/v0.6.0/serving.yaml \
--selector networking.knative.dev/certificate-provider!=cert-manager
Wait for the Knative Serving Pods to come up
kubectl get pods --namespace knative-serving -w
Note
You can terminate the command with CTRL+c.
Configure Pipelines
As the build needs to run with a service account that has permissions to create resources, a new service account 'build-robot' must be created with the required permissions.
Download the demo sources; we will refer to the folder as $PROJECT_HOME:
git clone https://github.com/redhat-developer-demos/quarkus-pipeline-demo &&\
cd quarkus-pipeline-demo &&\
export PROJECT_HOME=`pwd`
Important
All the objects will be created in the namespace called demos. If you wish to change it, edit the file build/build-roles.yaml and update the namespace name.
kubectl apply -f $PROJECT_HOME/build/build-roles.yaml
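For reference, build-roles.yaml is expected to define the build-robot ServiceAccount together with RBAC granting it rights to create resources; a rough sketch of the shape (the actual rules in the repo may differ):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
  namespace: demos
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: build-robot-role
  namespace: demos
rules:
# illustrative rules; the repo's Role may grant more or fewer rights
- apiGroups: ["", "apps", "serving.knative.dev"]
  resources: ["deployments", "services", "pods"]
  verbs: ["get", "list", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: build-robot-binding
  namespace: demos
subjects:
- kind: ServiceAccount
  name: build-robot
  namespace: demos
roleRef:
  kind: Role
  name: build-robot-role
  apiGroup: rbac.authorization.k8s.io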
Change to the demos namespace:
kubens demos
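If you do not have kubens installed, plain kubectl achieves the same:
kubectl config set-context $(kubectl config current-context) --namespace=demos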
The build uses resources called PipelineResource, which configure things like the git repository URL and the final container image name.
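A PipelineResource of type git points at the sources, and one of type image names the target image. A sketch of what build-resources.yaml likely contains (the resource names match those referenced by the PipelineRun later; the exact URLs are assumptions):
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: demo-git-source
spec:
  type: git
  params:
  - name: url
    value: https://github.com/redhat-developer-demos/quarkus-pipeline-demo
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: greeter-local-image-jvm
spec:
  type: image
  params:
  - name: url
    # pushed via the registry alias configured earlier
    value: dev.local/demos/greeter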
Let’s create the resources
kubectl apply -f $PROJECT_HOME/build/build-resources.yaml
The Pipeline consists of multiple tasks that need to be executed in order.
Let’s create the pipeline tasks
kubectl apply --recursive -f $PROJECT_HOME/build/tasks
You can use the command tkn task list to list the created tasks. It should show the following:
NAME AGE
greeter-image-from-git 22 seconds ago
kubectl-task 22 seconds ago
Let’s create the pipeline that uses the tasks created in the previous step
kubectl apply --recursive -f $PROJECT_HOME/build/pipelines
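For reference, such a pipeline wires the tasks together roughly like this; a hedged sketch, not the repo's exact manifest (the input/output binding names are assumptions):
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: greeter-pipeline-jvm
spec:
  resources:
  - name: source-repo
    type: git
  - name: app-container-image
    type: image
  tasks:
  # build the application image from the git sources
  - name: build-image
    taskRef:
      name: greeter-image-from-git
    resources:
      inputs:
      - name: source
        resource: source-repo
      outputs:
      - name: image
        resource: app-container-image
  # deploy the built image; runAfter enforces the ordering
  - name: deploy
    taskRef:
      name: kubectl-task
    runAfter:
    - build-image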
You can use the command tkn pipeline list to list the created pipelines. It should show the following:
NAME AGE LAST RUN STARTED DURATION STATUS
greeter-pipeline-jvm 5 seconds ago --- --- --- ---
greeter-pipeline-native 5 seconds ago --- --- --- ---
To trigger the pipeline, we need to create a PipelineRun.
Let’s create a PipelineRun that uses one of the pipelines, e.g. greeter-pipeline-jvm, created in the previous step:
kubectl apply -f $PROJECT_HOME/build/pipelinerun/greeter-pipeline-run.yaml
Tip
If you want to do a native build, update the pipelineRef in greeter-pipeline-run.yaml to reference greeter-pipeline-native.
You can use the command tkn pipelinerun list to list the pipeline runs. It should show the following:
NAME STARTED DURATION STATUS
greeter-pipeline-run 8 seconds ago --- Running
You can view the logs of the pipeline run using the command tkn pipelinerun logs -f -a greeter-pipeline-run.
Note
The very first pipeline run may take some time, as the builder images need to be downloaded and the Maven cache needs to be warmed.
Tip
If you have a local Maven repository manager such as Nexus, you can configure the pipeline to use it via the mavenMirrorUrl param:
params:
- name: mavenMirrorUrl
  value: http://192.168.99.1:8081/nexus/content/groups/public
A successful pipeline run will deploy an application called "greeter" and a corresponding service called greeter-service; you can view them using the following commands:
kubectl get -n demos deployments
kubectl get -n demos services
Deploying Knative Service
To deploy Knative service using the same pipelines, edit the ./build/pipelinerun/greeter-pipeline-run.yaml and update it to look like:
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: greeter-pipeline-run
spec:
  serviceAccount: build-robot
  pipelineRef:
    name: greeter-pipeline-jvm
  params:
  - name: namespace
    value: demos
  - name: resourceDir #(1)
    value: "knative"
  - name: resourceFile
    value: "service.yaml" #(2)
  resources:
  - name: source-repo
    resourceRef:
      name: demo-git-source
  - name: app-container-image
    resourceRef:
      name: greeter-local-image-jvm
1. The Kubernetes resource directory
2. The Knative service YAML file
Recreate the pipeline run
kubectl delete -f $PROJECT_HOME/build/pipelinerun/greeter-pipeline-run.yaml && \
kubectl apply -f $PROJECT_HOME/build/pipelinerun/greeter-pipeline-run.yaml
Invoke Service
IP_ADDRESS="$(minikube ip):$(kubectl get svc istio-ingressgateway --namespace istio-system --output 'jsonpath={.spec.ports[?(@.port==80)].nodePort}')"
curl -H "host:greeter.demos.example.com" $IP_ADDRESS/greeter
Cleanup
kubectl delete --recursive -f $PROJECT_HOME/build
kubens -