The Alfresco Infrastructure chart brings in components that are commonly used by the majority of applications within the Alfresco Digital Business Platform.
This chart bootstraps the creation of a persistent volume and persistent volume claim on a Kubernetes cluster using the Helm package manager.
Besides this, it brings in other shared, common components such as the Identity Service. See the Helm chart requirements for the list of additional dependencies brought in.
Check the prerequisites section before you start, and see the Anaxes Shipyard documentation on running a cluster.
As mentioned in the Anaxes Shipyard guidelines, you should deploy into a separate namespace in the cluster to avoid conflicts (create the namespace only if it does not already exist):
```bash
export DESIREDNAMESPACE=example
kubectl create namespace $DESIREDNAMESPACE
```
This environment variable will be used in the deployment steps.
Create an EFS file system on AWS and make sure it is in the same VPC as your cluster. Make sure you open inbound traffic in the security group to allow NFS traffic (TCP port 2049). Save the name of the server as in this example:
```bash
export NFSSERVER=fs-d660549f.efs.us-east-1.amazonaws.com
```
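Inbound NFS traffic can be opened, for instance, with the AWS CLI. This is a sketch: the security group ID and CIDR below are placeholders for the group attached to your EFS mount targets and your cluster's VPC CIDR.

```shell
# Allow NFS (TCP port 2049) from the cluster nodes to the EFS mount targets.
# sg-0123456789abcdef0 and 10.0.0.0/16 are placeholders: substitute the
# security group of your EFS mount targets and your VPC CIDR.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 2049 \
  --cidr 10.0.0.0/16
```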
Then install an NFS client provisioner to create a dynamic storage class in Kubernetes. This can be used by multiple deployments.
```bash
helm install stable/nfs-client-provisioner \
  --name $DESIREDNAMESPACE \
  --set nfs.server="$NFSSERVER" \
  --set nfs.path="/" \
  --set storageClass.reclaimPolicy="Delete" \
  --set storageClass.name="$DESIREDNAMESPACE-sc" \
  --namespace $DESIREDNAMESPACE
```
Note! The persistent volume created with NFS to store the data on the EFS has its ReclaimPolicy set to Delete. This means that, by default, when you delete the release the saved data is deleted automatically.
To change this behaviour and keep the data, set the storageClass.reclaimPolicy value to Retain.
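For example, to keep the data, install the provisioner with the same command as above but with Retain as the reclaim policy:

```shell
# Same provisioner install as before; only the reclaim policy changes,
# so the EFS data survives deletion of the release.
helm install stable/nfs-client-provisioner \
  --name $DESIREDNAMESPACE \
  --set nfs.server="$NFSSERVER" \
  --set nfs.path="/" \
  --set storageClass.reclaimPolicy="Retain" \
  --set storageClass.name="$DESIREDNAMESPACE-sc" \
  --namespace $DESIREDNAMESPACE
```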
```bash
helm repo add alfresco-incubator https://kubernetes-charts.alfresco.com/incubator
helm repo add alfresco-stable https://kubernetes-charts.alfresco.com/stable
helm repo add codecentric https://codecentric.github.io/helm-charts
```
```bash
helm install alfresco-incubator/alfresco-infrastructure \
  --set persistence.storageClass.enabled=true \
  --set persistence.storageClass.name="$DESIREDNAMESPACE-sc" \
  --namespace $DESIREDNAMESPACE
```
Helm generates a release name when you do not pass `--name`. Export the generated name (shown in the `helm install` output), for example:

```bash
export INFRARELEASE=enervated-deer
```
Wait for the infrastructure release to get deployed. (When checking the status, all your pods should be READY 1/1):

```bash
helm status $INFRARELEASE
```
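The same readiness check can also be done with kubectl, for example:

```shell
# List the pods in the namespace; each should eventually report READY 1/1
kubectl get pods --namespace $DESIREDNAMESPACE

# On recent kubectl versions you can instead block until all pods are Ready
kubectl wait --for=condition=Ready pods --all \
  --namespace $DESIREDNAMESPACE --timeout=300s
```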
To tear down the deployment:

```bash
# Delete the ingress release first, if you deployed one separately
helm delete --purge $INGRESSRELEASE
helm delete --purge $INFRARELEASE
kubectl delete namespace $DESIREDNAMESPACE
```
For more information on running and tearing down k8s environments, follow this guide.
By default, this chart deploys the nginx-ingress chart with the following configuration, which will create an ELB when using AWS and set a dummy certificate on it:
```yaml
nginx-ingress:
  rbac:
    create: true
  controller:
    config:
      ssl-redirect: "false"
      server-tokens: "false"
    scope:
      enabled: true
```
If you want to customize the certificate used at the ingress level, you can choose one of the options below:
Using a self-signed certificate
If you want your own certificate set on the ELB created through AWS, you should create a secret from your certificate files:
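If you do not have certificate files yet, a self-signed pair can be generated with openssl; the CN below is a placeholder for your own hostname:

```shell
# Generate a self-signed certificate/key pair valid for 365 days.
# The CN (myapp.example.com) is a placeholder hostname.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /tmp/tls.key -out /tmp/tls.crt \
  -subj "/CN=myapp.example.com"
```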
```bash
kubectl create secret tls certsecret --key /tmp/tls.key --cert /tmp/tls.crt \
  --namespace $DESIREDNAMESPACE
```
Then deploy the infrastructure chart with the following:
```bash
cat <<EOF > infravalues.yaml
# Persistence options
persistence:
  # Enables the creation of a persistent volume
  enabled: true
  storageClass:
    enabled: true
    name: "$DESIREDNAMESPACE-sc"
  # Size allocated to the volume in K8s
  baseSize: 20Gi
nginx-ingress:
  rbac:
    create: true
  controller:
    config:
      ssl-redirect: "false"
      server-tokens: "false"
    scope:
      enabled: true
    publishService:
      enabled: true
    extraArgs:
      default-ssl-certificate: $DESIREDNAMESPACE/certsecret
EOF
```
```bash
helm install alfresco-incubator/alfresco-infrastructure \
  -f infravalues.yaml \
  --namespace $DESIREDNAMESPACE
```
Using an AWS generated certificate and Amazon Route 53 zone
If you:

- created the cluster in AWS using kops
- have a matching SSL/TLS certificate stored in AWS Certificate Manager
- are using a zone in Amazon Route 53

then Kubernetes' External DNS can autogenerate a DNS entry for you (a CNAME of the generated ELB) and apply the SSL/TLS certificate to the ELB.

Note: External DNS is currently in alpha (as of June 2018).

Note: AWS Certificate Manager ARNs are of the form `arn:aws:acm:REGION:ACCOUNT:certificate/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.

Set `DOMAIN` to the DNS zone you used when creating the cluster.
```bash
ELB_CNAME="${DESIREDNAMESPACE}.${DOMAIN}"
# Use jq -r so the ARN is emitted without surrounding quotes
ELB_CERTIFICATE_ARN=$(aws acm list-certificates | \
  jq -r '.CertificateSummaryList[] | select (.DomainName == "'${DOMAIN}'") | .CertificateArn')
```
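Before writing the values file, it is worth checking that both lookups produced a value, since an empty ARN would render a broken annotation. A minimal sanity check, assuming the variables from the previous step are set:

```shell
# Abort early if either variable is empty (e.g. no ACM certificate matched $DOMAIN)
[ -n "$ELB_CNAME" ] || { echo "ELB_CNAME is empty" >&2; exit 1; }
[ -n "$ELB_CERTIFICATE_ARN" ] || { echo "No ACM certificate found for $DOMAIN" >&2; exit 1; }
echo "Using certificate: $ELB_CERTIFICATE_ARN"
```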
```bash
cat <<EOF > infravalues.yaml
# Persistence options
persistence:
  # Enables the creation of a persistent volume
  enabled: true
  storageClass:
    enabled: true
    name: "$DESIREDNAMESPACE-sc"
  # Size allocated to the volume in K8s
  baseSize: 20Gi
nginx-ingress:
  rbac:
    create: true
  controller:
    config:
      ssl-redirect: "false"
      server-tokens: "false"
    scope:
      enabled: true
    publishService:
      enabled: true
    service:
      targetPorts:
        http: http
        https: http
      annotations:
        external-dns.alpha.kubernetes.io/hostname: ${ELB_CNAME}
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
        service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: ${ELB_CERTIFICATE_ARN}
        service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
EOF
```
```bash
helm install alfresco-incubator/alfresco-infrastructure \
  -f infravalues.yaml \
  --namespace $DESIREDNAMESPACE
```
For additional information on customizing the nginx-ingress chart, please refer to the nginx-ingress chart README.
Note! Terminating SSL at the load balancer level currently causes invalid redirect issues at the Identity Service level.
The following table lists the configurable parameters of the infrastructure chart and their default values.

| Parameter | Description | Default |
|---|---|---|
| `persistence.enabled` | Persistence is enabled for this chart | `true` |
| `persistence.baseSize` | Size of the persistent volume | `20Gi` |
| `persistence.storageClass.enabled` | Use a custom storage class for persistence | `false` |
| `persistence.storageClass.name` | Storage class name | `nfs` |
| `persistence.storageClass.accessModes` | Access modes for the volume | `[ReadWriteMany]` |
| `alfresco-infrastructure.activemq.enabled` | ActiveMQ is enabled for this chart | `true` |
| `alfresco-infrastructure.alfresco-identity-service.enabled` | Alfresco Identity Service is enabled for this chart | `true` |
| `alfresco-infrastructure.nginx-ingress.enabled` | nginx-ingress is enabled for this chart | `true` |
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example:
```bash
$ helm install --name my-release \
  --set persistence.enabled=true \
  alfresco-incubator/alfresco-infrastructure
```
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```bash
$ helm install alfresco-incubator/alfresco-infrastructure --name my-release -f values.yaml
```
Error: "realm-secret" already exists

When installing the infrastructure chart with the Identity Service enabled, if you receive the message `Error: release <release-name> failed: secrets "realm-secret" already exists`, there is an existing realm secret in the namespace you are installing into. This could mean that you are either installing into a namespace with an existing Identity Service, or that a realm secret is left over from a previous installation of the Identity Service.

If the realm secret is left over from a previous installation, it can be removed with the following:

```bash
$ kubectl delete secret realm-secret --namespace $DESIREDNAMESPACE
```
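To confirm whether the namespace actually contains such a secret before deleting anything, for example:

```shell
# Shows the secret if it exists; exits non-zero if it does not
kubectl get secret realm-secret --namespace $DESIREDNAMESPACE
```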