
SAP Data Hub on SUSE CaaS Platform and SUSE Enterprise Storage (2019 PoC)

This project involves the Proof of Concept installation of SAP Data Hub on SUSE CaaS Platform and SUSE Enterprise Storage.

Versions used:

  • SUSE CaaSP 3
  • SES 5
  • SLES 12 SP3

This document is currently under development. Comments and additions are welcome. If you need additional information, please contact Pavel Zhukov (pavel.zhukov@suse.com).

Disclaimer
At the moment, nobody takes responsibility if you use the information below for production installations or commercial purposes; do so at your own risk.

PoC Landscape

The PoC can be deployed in any virtualization environment or on hardware servers. Currently, the PoC is hosted on VMware vSphere.

Requirements

Tech Specs

  • 1 dedicated infrastructure server (DNS, DHCP, PXE, NTP, NAT, SMT, TFTP, SES admin, a console for SAP Data Hub admin)

    16GB RAM

    1 x HDD - 1TB

    1 LAN adapter

    1 WAN adapter

  • 4 x SES Servers

    16GB RAM

    1 x HDD (System) - 100GB

    3 x HDD (Data) - 1 TB

    1 LAN

  • 5 x CaaSP Nodes

    • 1 x Admin Node

      64 GB RAM

      1 x HDD 100 GB

      1 LAN

    • 1 x Master Node

      64 GB RAM

      1 x HDD 100 GB

      1 LAN

    • 3 x Worker Node

      64 GB RAM

      1 x HDD 100 GB

      1 LAN

Network Architecture

All servers connect to the LAN network (isolated from the outside world), currently 192.168.20.0/24. The infrastructure server also connects to the WAN.

Installation Procedure

Install infrastructure server

1. Install SLES12 SP3

2. Add FQDN to /etc/hosts

Example: change 192.168.20.254 master to 192.168.20.254 master.sdh.suse.ru master

3. Configure NTP.

yast2 ntp-client

4. Configure Firewall.

yast2 firewall

5. Configure SMT.

Run the SMT configuration wizard. During the server certificate setup, add all DNS names under which this server will be reached (SMT FQDN, etc.). Then add the repositories to replication:

sudo zypper in -t pattern smt

for REPO in SLES12-SP3-{Pool,Updates} SUSE-Enterprise-Storage-5-{Pool,Updates} SUSE-CAASP-3.0-{Pool,Updates}; do
  smt-repos $REPO sle-12-x86_64 -e
done

smt-mirror -L /var/log/smt/smt-mirror.log

Download the following installation media:

  • SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso
  • SUSE-CaaS-Platform-3.0-DVD-x86_64-GM-DVD1.iso

Create install repositories:

mkdir -p /srv/www/htdocs/repo/SUSE/Install/SLE-SERVER/12-SP3
mkdir -p /srv/www/htdocs/repo/SUSE/Install/SUSE-CAASP/3.0

mkdir -p /srv/tftpboot/sle12sp3
mkdir -p /srv/tftpboot/caasp


mount SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso /mnt
rsync -avP /mnt/ /srv/www/htdocs/repo/SUSE/Install/SLE-SERVER/12-SP3/x86_64/
cp /mnt/boot/x86_64/loader/{linux,initrd} /srv/tftpboot/sle12sp3/
umount /mnt

mount SUSE-CaaS-Platform-3.0-DVD-x86_64-GM-DVD1.iso /mnt
rsync -avP /mnt/ /srv/www/htdocs/repo/SUSE/Install/SUSE-CAASP/3.0/x86_64/
cp /mnt/boot/x86_64/loader/{linux,initrd} /srv/tftpboot/caasp/
umount /mnt

6. Configure DHCP

yast2 dhcp-server

or use the template for /etc/dhcpd.conf (a sketch follows below), then restart the DHCP service.

systemctl restart dhcpd.service
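
The dhcpd.conf template itself is kept in the repository. A minimal sketch for this PoC's network, assuming the infrastructure server at 192.168.20.254 provides DNS and TFTP and pxelinux is used as the boot loader (range and router are assumptions):

option domain-name "sdh.suse.ru";
option domain-name-servers 192.168.20.254;
option routers 192.168.20.254;

subnet 192.168.20.0 netmask 255.255.255.0 {
  # address pool for the PoC nodes (assumption)
  range 192.168.20.100 192.168.20.200;
  # PXE: TFTP server and boot loader
  next-server 192.168.20.254;
  filename "pxelinux.0";
}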

7. Configure TFTP

yast2 tftp-server

Copy /srv/tftpboot/* to the server and set up the PXE boot menu (a sketch follows).
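
The PXE boot menu entries used later in this document ("Install OSD Node", "Install CaaSP Manually", "Install CaaSP Node (full automation)") live in /srv/tftpboot/pxelinux.cfg/default. A trimmed sketch, assuming pxelinux and the repository and AutoYaST locations created above; adjust the host name (smt.sdh.suse.ru) to your infrastructure server:

DEFAULT menu.c32
PROMPT 0
TIMEOUT 200

LABEL osd
  MENU LABEL Install OSD Node
  KERNEL sle12sp3/linux
  APPEND initrd=sle12sp3/initrd install=http://smt.sdh.suse.ru/repo/SUSE/Install/SLE-SERVER/12-SP3/x86_64 autoyast=http://smt.sdh.suse.ru/autoyast/autoinst_osd.xml

LABEL caasp-manual
  MENU LABEL Install CaaSP Manually
  KERNEL caasp/linux
  APPEND initrd=caasp/initrd install=http://smt.sdh.suse.ru/repo/SUSE/Install/SUSE-CAASP/3.0/x86_64

LABEL caasp-auto
  MENU LABEL Install CaaSP Node (full automation)
  KERNEL caasp/linux
  APPEND initrd=caasp/initrd install=http://smt.sdh.suse.ru/repo/SUSE/Install/SUSE-CAASP/3.0/x86_64 autoyast=http://smt.sdh.suse.ru/autoyast/autoinst_caas.xml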

8. Configure DNS

yast2 dns-server

Configure the zone for the PoC and records for all nodes (a sketch follows).
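
A minimal sketch of the sdh.suse.ru forward zone. Apart from master (192.168.20.254, from the /etc/hosts example above) and the Ceph monitor addresses 192.168.20.21-23 (used in the RBD test at the end of this document), all addresses and host names are assumptions:

$TTL 2d
@           IN SOA master.sdh.suse.ru. root.sdh.suse.ru. ( 2019010101 3h 1h 1w 1d )
            IN NS  master.sdh.suse.ru.
master      IN A   192.168.20.254
smt         IN CNAME master
caas-admin  IN A   192.168.20.10
ses1        IN A   192.168.20.21
ses2        IN A   192.168.20.22
ses3        IN A   192.168.20.23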

Install SES

1. Stop the firewall on the infrastructure server for the duration of the SES installation.

systemctl stop SuSEfirewall2

2. Configure AutoYast

Put /srv/www/htdocs/autoyast/autoinst_osd.xml on the server.

Get the AutoYaST fingerprint:

openssl x509 -noout -fingerprint -sha256 -inform pem -in /srv/www/htdocs/smt.crt

Then change /srv/www/htdocs/autoyast/autoinst_osd.xml: add to <suse_register>

<reg_server>https://smt.sdh.suse.ru</reg_server>
<reg_server_cert_fingerprint_type>SHA256</reg_server_cert_fingerprint_type>
<reg_server_cert_fingerprint>YOUR SMT FINGERPRINT</reg_server_cert_fingerprint>

3. Install SES Nodes

Boot all SES nodes from PXE and choose "Install OSD Node" from the PXE boot menu.

4. Configure SES

  1. Start data/ses-install/restart.sh on the infrastructure server.
  2. Run
salt-run state.orch ceph.stage.0
  3. Run
salt-run state.orch ceph.stage.1
  4. Put /srv/pillar/ceph/proposals/policy.cfg on the server (a sketch follows after this list).
  5. Run
salt-run state.orch ceph.stage.2

After the command finishes, you can view the pillar data for minions by running:

salt '*' pillar.items
  6. Run
salt-run state.orch ceph.stage.3

If it fails, you need to fix the issue and run the previous stages again. After the command succeeds, run the following to check the status:

ceph -s
  7. Run
salt-run state.orch ceph.stage.4
  8. Add an rbd pool. You can use the openATTIC web interface on the infrastructure node, or the CLI sketch after this list.
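
The policy.cfg referenced in step 4 is kept in the repository and not reproduced here. As a rough orientation, a minimal DeepSea policy.cfg for a cluster of this shape might look like the following sketch; the minion names (master*, ses[123]*) and the role placement are assumptions, adjust them to your actual host names:

# assign all minions to the cluster
cluster-ceph/cluster/*.sls
# Salt master / admin role on the infrastructure server (assumption)
role-master/cluster/master*.sls
role-admin/cluster/master*.sls
# monitors and managers on three of the SES nodes (assumption)
role-mon/cluster/ses[123]*.sls
role-mgr/cluster/ses[123]*.sls
# common configuration
config/stack/default/global.yml
config/stack/default/ceph/cluster.yml
# OSD profiles as proposed by stage.1
profile-default/cluster/*.sls
profile-default/stack/default/ceph/minions/*.yml

For step 8, the rbd pool can also be created from the command line instead of openATTIC (128 placement groups is an assumption for a PoC of this size):

ceph osd pool create rbd 128
ceph osd pool application enable rbd rbd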

5. Start the firewall on the infrastructure server

systemctl start SuSEfirewall2

Install SUSE CaaSP

  1. Boot the CaaS admin node from PXE and choose "Install CaaSP Manually" from the PXE boot menu.
  2. Install the CaaS admin node using the FQDN of the infrastructure server for the SMT and NTP parameters.
  3. Get the AutoYaST file from the CaaS admin node and put it to /srv/www/htdocs/autoyast/autoinst_caas.xml:
wget http://caas-admin.sdh.suse.ru/autoyast
mv autoyast /srv/www/htdocs/autoyast/autoinst_caas.xml
  4. Get the AutoYaST fingerprint:
openssl x509 -noout -fingerprint -sha256 -inform pem -in /srv/www/htdocs/smt.crt
  5. Change /srv/www/htdocs/autoyast/autoinst_caas.xml. Add:
  • to <suse_register>

<reg_server>https://smt.sdh.suse.ru</reg_server>
<reg_server_cert_fingerprint_type>SHA256</reg_server_cert_fingerprint_type>
<reg_server_cert_fingerprint>YOUR SMT FINGERPRINT</reg_server_cert_fingerprint>

  • to <services>

<service>vmtoolsd</service>

  • to <software>

<packages config:type="list">
  <package>open-vm-tools</package>
</packages>

  6. Boot the other CaaS nodes from PXE and choose "Install CaaSP Node (full automation)" from the PXE boot menu.
  7. Configure CaaS from Velum.
  8. Install the dashboard:

helm install --name heapster-default --namespace=kube-system stable/heapster --version=0.2.7 --set rbac.create=true
helm list | grep heapster

Change the Heapster deployment to use port 10250:

kubectl edit deployment heapster-default-heapster -n kube-system
    spec:
      containers:
      - command:
        - /heapster
        - --source=kubernetes.summary_api:https://kubernetes.default?kubeletPort=10250&kubeletHttps=true&insecure=true

helm install --namespace=kube-system --name=kubernetes-dashboard stable/kubernetes-dashboard --version=0.6.1
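
To log in to the dashboard, a service account token can be used. A sketch for extracting the token of the chart's generated service account; the kubernetes-dashboard-token secret name pattern is an assumption for chart version 0.6.1:

kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-token/ {print $1}')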

Configure SUSE CaaSP and SES integration

Retrieve the Ceph admin secret. Get the key value from the file /etc/ceph/ceph.client.admin.keyring.

On the master node, apply the configuration that includes the Ceph secret by using kubectl apply. Replace CEPH_SECRET with your Ceph secret.

kubectl apply -f - << EOF
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: "$(echo CEPH_SECRET | base64)"
EOF
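
To confirm the secret was stored (the default namespace is assumed, matching the RBD test pod at the end of this document):

kubectl get secret ceph-secret -o yaml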

Configure nginx-ingress for the micro-services demo

helm install --name nginx-ingress stable/nginx-ingress --namespace kube-system --values nginx-ingress-config-values.yaml
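
nginx-ingress-config-values.yaml is kept in the repository. A minimal sketch that exposes the controller via NodePort, following the SUSE CaaSP documentation; the port numbers are assumptions:

controller:
  service:
    type: NodePort
    nodePorts:
      http: 30080
      https: 30443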

Configure SUSE CaaSP for SAP Data Hub

  1. Add a user. Use an LDIF file to create the user (a sketch follows below; use /usr/sbin/slappasswd to generate the password hash). Retrieve the LDAP admin password and note it for later use:
cat /var/lib/misc/infra-secrets/openldap-password

Import the LDAP certificate to your local trusted certificate storage. On the administration node, run:

docker exec -it $(docker ps -q -f name=ldap) cat /etc/openldap/pki/ca.crt > ~/ca.pem
scp ~/ca.pem root@WORKSTATION:/usr/share/pki/trust/anchors/ca-caasp.crt.pem

Replace WORKSTATION with the appropriate hostname for the workstation where you wish to run the LDAP queries. Then, on that workstation, run:

update-ca-certificates
zypper in openldap2
ldapadd -H ldap://ADMINISTRATION_NODE_FQDN:389 -ZZ \
-D cn=admin,dc=infra,dc=caasp,dc=local -w LDAP_ADMIN_PASSWORD -f LDIF_FILE
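
LDIF_FILE is not reproduced in this document. A sketch of a single user entry for the CaaSP LDAP tree; the uid, names, and mail are placeholders for your own user, and the userPassword value is the hash produced by /usr/sbin/slappasswd:

dn: uid=vgrachev,ou=People,dc=infra,dc=caasp,dc=local
objectClass: top
objectClass: person
objectClass: inetOrgPerson
uid: vgrachev
givenName: Vadim
sn: Grachev
cn: Vadim Grachev
mail: vadim.grachev@sap.com
userPassword: {SSHA}PASTE_HASH_FROM_SLAPPASSWD
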
  2. Add a cluster role for this user (see "Requirements for Installing SAP Data Hub Foundation on Kubernetes"):
kubectl create clusterrolebinding vgrachev-cluster-admin-binding --clusterrole=cluster-admin --user=vadim.grachev@sap.com
kubectl auth can-i '*' '*'
  3. Install kubernetes-client and helm from /srv/www/htdocs/repo/SUSE/Updates/SUSE-CAASP/3.0/x86_64/update/x86_64:
rpm -Uhv kubernetes-common-1.10.11-4.11.1.x86_64.rpm
rpm -Uhv kubernetes-client-1.10.11-4.11.1.x86_64.rpm
rpm -Uhv helm-2.8.2-3.3.1.x86_64.rpm
helm init --client-only
  4. Install Docker. Add the SLE-Module-Containers12 module, then install docker; a sketch follows.
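
A sketch of both sub-steps, assuming the host is registered against the local SMT and SLE-Module-Containers12 is mirrored there (the module can also be added via yast2 add-on):

SUSEConnect -p sle-module-containers/12/x86_64
zypper in docker
systemctl enable docker
systemctl start docker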

  5. Configure the local Docker registry
zypper install docker-distribution-registry
systemctl enable registry
systemctl start registry

Add the key "insecure-registries":["master.sdh.suse.ru:5000"] to /etc/docker/daemon.json. For example, a clean file would look like:

{ "insecure-registries":["master.sdh.suse.ru:5000"] }
usermod -a -G docker vgrachev

https://www.suse.com/documentation/sles-12/book_sles_docker/data/sec_docker_registry_installation.html

  6. Add the storage class (a sketch of rbd_storage.yaml follows):
kubectl create -f rbd_storage.yaml
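
rbd_storage.yaml is kept in the repository. A minimal sketch of a kubernetes.io/rbd StorageClass that reuses the ceph-secret created earlier; the class name is an assumption, and the monitor addresses are the ones used in the RBD test below:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.20.21:6789,192.168.20.22:6789,192.168.20.23:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: rbd
  userId: admin
  userSecretName: ceph-secret
  fsType: ext4
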
  7. Add the registry to Velum: add http://master.sdh.suse.ru:5000 under Registry in Velum.

  8. Add the role binding (vsystem-vrep issue); a sketch of clusterrolebinding.yaml follows:

kubectl create -f clusterrolebinding.yaml 
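
clusterrolebinding.yaml is kept in the repository. A rough sketch of what such a binding could look like; binding the sdh namespace's service accounts to cluster-admin is an assumption, check the SAP Data Hub installation guide for the exact role required:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: sdh-vsystem-vrep
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: Group
  name: system:serviceaccounts:sdh
  apiGroup: rbac.authorization.k8s.io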

Test the Environment

kubectl version
kubectl auth can-i '*' '*'
helm version
ceph status
rbd list
rbd create -s 10 rbd_test
rbd info rbd_test
kubectl apply -f - << EOF
 apiVersion: v1
 kind: Pod
 metadata:
   name: rbd-test
 spec:
   containers:
   - name: test-server
     image: nginx
     volumeMounts:
     - mountPath: /mnt/rbdvol
       name: rbdvol
   volumes:
   - name: rbdvol
     rbd:
       monitors:
       - '192.168.20.21:6789'
       - '192.168.20.22:6789'
       - '192.168.20.23:6789'
       pool: rbd
       image: rbd_test
       user: admin
       secretRef:
         name: ceph-secret
       fsType: ext4
       readOnly: false
EOF
kubectl get po
kubectl exec -it rbd-test -- df -h
kubectl delete pod rbd-test
rbd rm rbd_test
docker pull hello-world
docker tag docker.io/hello-world master.sdh.suse.ru:5000/hello-world
docker images master.sdh.suse.ru:5000/hello-world
docker push master.sdh.suse.ru:5000/hello-world
docker pull master.sdh.suse.ru:5000/hello-world

Appendix

SUSE Enterprise Storage 5 Documentation

https://www.suse.com/documentation/suse-enterprise-storage-5/

SUSE CaaS Platform 3 Documentation

https://www.suse.com/documentation/suse-caasp-3/index.html

Appendix A

List the image IDs of all pods in the sdh namespace:

for i in $(kubectl get pods -n sdh | tail -n +2 | cut -f1 -d" "); do echo "$i"; kubectl describe pod $i -n sdh | grep "Image ID:"; done