This project involves the Proof of Concept installation of SAP Data Hub on SUSE CaaS Platform and SUSE Enterprise Storage.
Versions used:
- SUSE CaaSP 3
- SES 5
- SLES 12 SP3
This document is currently under development. Comments and additions are welcome. If you need additional information, please contact Pavel Zhukov (pavel.zhukov@suse.com).
The document is provided as-is: nobody takes responsibility if you use the information below for production installations or commercial purposes.
The PoC can be deployed in any virtualization environment or on hardware servers. Currently, the PoC is hosted on VMware vSphere.
- 1 dedicated infrastructure server (DNS, DHCP, PXE, NTP, NAT, SMT, TFTP, SES admin, console for SAP Data Hub admin)
  - 16 GB RAM
  - 1 x HDD - 1 TB
  - 1 LAN adapter
  - 1 WAN adapter
- 4 x SES servers, each with:
  - 16 GB RAM
  - 1 x HDD (system) - 100 GB
  - 3 x HDD (data) - 1 TB
  - 1 LAN adapter
- 5 x CaaSP nodes:
  - 1 x Admin Node
    - 64 GB RAM
    - 1 x HDD - 100 GB
    - 1 LAN adapter
  - 1 x Master Node
    - 64 GB RAM
    - 1 x HDD - 100 GB
    - 1 LAN adapter
  - 3 x Worker Nodes, each with:
    - 64 GB RAM
    - 1 x HDD - 100 GB
    - 1 LAN adapter
All servers connect to the LAN network (isolated from the outside world), currently 192.168.20.0/24. The infrastructure server also connects to the WAN.
Example of a hosts entry change: 192.168.20.254 master to 192.168.20.254 master.sdh.suse.ru master
yast2 ntp-client
yast2 firewall
Install the SMT pattern and run the SMT configuration wizard. During the server certificate setup, add all DNS names for this server (the SMT FQDN, etc.). Then add the repositories to be mirrored and start the mirroring:
sudo zypper in -t pattern smt
for REPO in SLES12-SP3-{Pool,Updates} SUSE-Enterprise-Storage-5-{Pool,Updates} SUSE-CAASP-3.0-{Pool,Updates}; do
smt-repos $REPO sle-12-x86_64 -e
done
smt-mirror -L /var/log/smt/smt-mirror.log
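To verify which repositories are flagged for mirroring, the repository list can be printed at any time:
smt-repos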
Download the following installation media:
- SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso
- SUSE-CaaS-Platform-3.0-DVD-x86_64-GM-DVD1.iso
Create install repositories:
mkdir -p /srv/www/htdocs/repo/SUSE/Install/SLE-SERVER/12-SP3
mkdir -p /srv/www/htdocs/repo/SUSE/Install/SUSE-CAASP/3.0
mkdir -p /srv/tftpboot/sle12sp3
mkdir -p /srv/tftpboot/caasp
mount SLE-12-SP3-Server-DVD-x86_64-GM-DVD1.iso /mnt
rsync -avP /mnt/ /srv/www/htdocs/repo/SUSE/Install/SLE-SERVER/12-SP3/x86_64/
cp /mnt/boot/x86_64/loader/{linux,initrd} /srv/tftpboot/sle12sp3/
umount /mnt
mount SUSE-CaaS-Platform-3.0-DVD-x86_64-GM-DVD1.iso /mnt
rsync -avP /mnt/ /srv/www/htdocs/repo/SUSE/Install/SUSE-CAASP/3.0/x86_64/
cp /mnt/boot/x86_64/loader/{linux,initrd} /srv/tftpboot/caasp/
umount /mnt
yast2 dhcp-server
or use the following template for /etc/dhcpd.conf, then restart the DHCP service:
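A minimal sketch of such a template, assuming the isolated 192.168.20.0/24 LAN described above; the infrastructure server address (here 192.168.20.1), the address range, and the option values are placeholders to adjust:
option domain-name "sdh.suse.ru";
option domain-name-servers 192.168.20.1;
option routers 192.168.20.1;
option ntp-servers 192.168.20.1;
default-lease-time 14400;
ddns-update-style none;
subnet 192.168.20.0 netmask 255.255.255.0 {
  range 192.168.20.100 192.168.20.200;
  # PXE boot: point clients at the TFTP server on the infrastructure node
  next-server 192.168.20.1;
  filename "pxelinux.0";
}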
systemctl restart dhcpd.service
yast2 tftp-server
Copy the prepared /srv/tftpboot/* content to the server.
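For reference, a pxelinux menu entry behind "Install OSD Node" could look roughly like the sketch below, assuming the repositories and AutoYaST files are served over HTTP from the infrastructure server (smt.sdh.suse.ru) as prepared above; the exact boot parameters are an assumption:
# /srv/tftpboot/pxelinux.cfg/default (excerpt)
DEFAULT menu.c32
PROMPT 0
TIMEOUT 200
LABEL osd
  MENU LABEL Install OSD Node
  KERNEL sle12sp3/linux
  APPEND initrd=sle12sp3/initrd netsetup=dhcp install=http://smt.sdh.suse.ru/repo/SUSE/Install/SLE-SERVER/12-SP3/x86_64/ autoyast=http://smt.sdh.suse.ru/autoyast/autoinst_osd.xml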
yast2 dns-server
Configure a zone for the PoC and records for all nodes.
systemctl stop SuSEfirewall2
Put /srv/www/htdocs/autoyast/autoinst_osd.xml on the server.
Get the AutoYaST fingerprint:
openssl x509 -noout -fingerprint -sha256 -inform pem -in /srv/www/htdocs/smt.crt
Change /srv/www/htdocs/autoyast/autoinst_osd.xml: add to <suse_register>:
<reg_server>https://smt.sdh.suse.ru</reg_server>
<reg_server_cert_fingerprint_type>SHA256</reg_server_cert_fingerprint_type>
<reg_server_cert_fingerprint>YOUR SMT FINGERPRINT</reg_server_cert_fingerprint>
Boot all SES nodes from PXE and choose "Install OSD Node" from the PXE boot menu.
- Start data/ses-install/restart.sh on the infrastructure server.
- Run
salt-run state.orch ceph.stage.0
- Run
salt-run state.orch ceph.stage.1
- Put /srv/pillar/ceph/proposals/policy.cfg on the server.
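A possible policy.cfg for this PoC layout; the minion name globs (infra*, ses*) and the generated profile name are assumptions, adjust them to the proposals created by stage 1:
## Cluster assignment
cluster-ceph/cluster/*.sls
## Roles
role-master/cluster/infra*.sls
role-admin/cluster/*.sls
role-mon/cluster/ses*.sls
role-mgr/cluster/ses*.sls
role-openattic/cluster/infra*.sls
## Hardware profiles proposed by stage 1
profile-default/cluster/*.sls
profile-default/stack/default/ceph/minions/*.yml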
- Run
salt-run state.orch ceph.stage.2
After the command finishes, you can view the pillar data for minions by running:
salt '*' pillar.items
- Run
salt-run state.orch ceph.stage.3
If it fails, you need to fix the issue and run the previous stages again. After the command succeeds, run the following to check the status:
ceph -s
- Run
salt-run state.orch ceph.stage.4
- Add an rbd pool (you can use the openATTIC web interface on the infrastructure node).
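Alternatively, the pool can be created from the command line on the admin node (the placement group count of 128 is an assumption for this small PoC):
ceph osd pool create rbd 128 128
ceph osd pool application enable rbd rbd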
systemctl start SuSEfirewall2
- Boot the CaaSP Admin Node from PXE and choose "Install CaaSP Manually" from the PXE boot menu.
- Install the CaaSP Admin Node, using the FQDN of the infrastructure server for the SMT and NTP parameters.
- Get the AutoYaST file from the CaaSP Admin Node and put it at /srv/www/htdocs/autoyast/autoinst_caas.xml:
wget http://caas-admin.sdh.suse.ru/autoyast
mv autoyast /srv/www/htdocs/autoyast/autoinst_caas.xml
- Get the AutoYaST fingerprint:
openssl x509 -noout -fingerprint -sha256 -inform pem -in /srv/www/htdocs/smt.crt
- Change /srv/www/htdocs/autoyast/autoinst_caas.xml as follows.
Add to <suse_register>:
<reg_server>https://smt.sdh.suse.ru</reg_server>
<reg_server_cert_fingerprint_type>SHA256</reg_server_cert_fingerprint_type>
<reg_server_cert_fingerprint>YOUR SMT FINGERPRINT</reg_server_cert_fingerprint>
Add to <services>:
<service>vmtoolsd</service>
Add to <software>:
<packages config:type="list">
  <package>open-vm-tools</package>
</packages>
- Boot the other CaaSP nodes from PXE and choose "Install CaaSP Node (full automation)" from the PXE boot menu.
- Configure the CaaSP cluster from Velum.
- Install the Kubernetes dashboard. First deploy Heapster:
helm install --name heapster-default --namespace=kube-system stable/heapster --version=0.2.7 --set rbac.create=true
Check the release; if the pinned chart version is not available, install Heapster without the version constraint:
helm list | grep heapster
helm install --name heapster-default --namespace=kube-system stable/heapster --set rbac.create=true
Change the Heapster deployment to use port 10250:
kubectl edit deployment heapster-default-heapster -n kube-system
spec:
  containers:
  - command:
    - /heapster
    - --source=kubernetes.summary_api:https://kubernetes.default?kubeletPort=10250&kubeletHttps=true&insecure=true
helm install --namespace=kube-system --name=kubernetes-dashboard stable/kubernetes-dashboard --version=0.6.1
Retrieve the Ceph admin secret. Get the key value from the file /etc/ceph/ceph.client.admin.keyring.
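If the Ceph CLI is available, the same key can also be printed directly, which avoids copy errors from the keyring file:
ceph auth get-key client.admin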
On the master node apply the configuration that includes the Ceph secret by using kubectl apply. Replace CEPH_SECRET with your Ceph secret.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: "$(echo -n CEPH_SECRET | base64)"
EOF
helm install --name nginx-ingress stable/nginx-ingress --namespace kube-system --values nginx-ingress-config-values.yaml
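A minimal sketch of the nginx-ingress-config-values.yaml referenced above, assuming the ingress controller should be exposed on a fixed NodePort; the port number is an assumption, pick the HTTPS port you plan to use for SAP Data Hub:
# nginx-ingress-config-values.yaml (sketch; NodePort value is an assumption)
controller:
  service:
    type: NodePort
    nodePorts:
      https: 32443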
- Add a user by creating an LDIF file (an example LDIF is shown after the ldapadd command below). Use /usr/sbin/slappasswd to generate the password hash. Retrieve the LDAP admin password and note it for later use:
cat /var/lib/misc/infra-secrets/openldap-password
Import the LDAP certificate to your local trusted certificate storage. On the administration node, run:
docker exec -it $(docker ps -q -f name=ldap) cat /etc/openldap/pki/ca.crt > ~/ca.pem
scp ~/ca.pem root@WORKSTATION:/usr/share/pki/trust/anchors/ca-caasp.crt.pem
Replace WORKSTATION with the appropriate hostname for the workstation where you wish to run the LDAP queries. Then, on that workstation, run:
update-ca-certificates
zypper in openldap2
ldapadd -H ldap://ADMINISTRATION_NODE_FQDN:389 -ZZ \
-D cn=admin,dc=infra,dc=caasp,dc=local -w LDAP_ADMIN_PASSWORD -f LDIF_FILE
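A hypothetical LDIF_FILE for the user used in the next step; the ou=People subtree and the attribute set are assumptions about the CaaSP internal LDAP layout, and the userPassword hash comes from /usr/sbin/slappasswd:
dn: uid=vgrachev,ou=People,dc=infra,dc=caasp,dc=local
objectClass: top
objectClass: person
objectClass: inetOrgPerson
uid: vgrachev
givenName: Vadim
sn: Grachev
cn: Vadim Grachev
mail: vadim.grachev@sap.com
userPassword: {SSHA}HASH_FROM_SLAPPASSWD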
- Add a cluster role binding for this user (see "Requirements for Installing SAP Data Hub Foundation on Kubernetes"):
kubectl create clusterrolebinding vgrachev-cluster-admin-binding --clusterrole=cluster-admin --user=vadim.grachev@sap.com
kubectl auth can-i '*' '*'
- Install kubernetes-client and helm from /srv/www/htdocs/repo/SUSE/Updates/SUSE-CAASP/3.0/x86_64/update/x86_64
rpm -Uhv kubernetes-common-1.10.11-4.11.1.x86_64.rpm
rpm -Uhv kubernetes-client-1.10.11-4.11.1.x86_64.rpm
rpm -Uhv helm-2.8.2-3.3.1.x86_64.rpm
helm init --client-only
- Install Docker: add the SLE-Module-Containers12 module, then install the docker package.
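Assuming the node is registered against the SMT server set up earlier and the Containers module repositories are mirrored there, this could look like:
SUSEConnect -p sle-module-containers/12/x86_64
zypper in docker
systemctl enable --now docker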
- Configure local docker registry
zypper install docker-distribution-registry
systemctl enable registry
systemctl start registry
Add the key "insecure-registries":["master.sdh.suse.ru:5000"] to /etc/docker/daemon.json. For example, a previously empty file becomes:
{ "insecure-registries":["master.sdh.suse.ru:5000"] }
usermod -a -G docker vgrachev
- Add Storage Class
kubectl create -f rbd_storage.yaml
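A possible rbd_storage.yaml for the command above, assuming the Ceph monitors and the ceph-secret created earlier; the secret namespace and image options are assumptions:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.20.21:6789,192.168.20.22:6789,192.168.20.23:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: rbd
  userId: admin
  userSecretName: ceph-secret
  fsType: ext4
  imageFormat: "2"
  imageFeatures: layering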
- Add the registry to Velum: add http://master.sdh.suse.ru:5000 under Registry in Velum.
- Add a role binding (vsystem-vrep issue):
kubectl create -f clusterrolebinding.yaml
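One possible clusterrolebinding.yaml for the vsystem-vrep workaround, assuming the fix is to allow the service accounts of the SAP Data Hub namespace (here assumed to be sdh) to use the CaaSP privileged PodSecurityPolicy; the role name and namespace are assumptions, adjust them to the actual error message:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: sdh-psp-privileged
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: suse:caasp:psp:privileged
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:serviceaccounts:sdh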
- Verify the installation:
kubectl version
kubectl auth can-i '*' '*'
helm version
ceph status
rbd list
rbd create -s 10 rbd_test
rbd info rbd_test
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: rbd-test
spec:
  containers:
  - name: test-server
    image: nginx
    volumeMounts:
    - mountPath: /mnt/rbdvol
      name: rbdvol
  volumes:
  - name: rbdvol
    rbd:
      monitors:
      - '192.168.20.21:6789'
      - '192.168.20.22:6789'
      - '192.168.20.23:6789'
      pool: rbd
      image: rbd_test
      user: admin
      secretRef:
        name: ceph-secret
      fsType: ext4
      readOnly: false
EOF
kubectl get po
kubectl exec -it rbd-test -- df -h
kubectl delete pod rbd-test
rbd rm rbd_test
Test the local Docker registry with the hello-world image:
docker pull hello-world
docker tag docker.io/hello-world master.sdh.suse.ru:5000/hello-world
docker images master.sdh.suse.ru:5000/hello-world
docker push master.sdh.suse.ru:5000/hello-world
docker pull master.sdh.suse.ru:5000/hello-world
References:
https://www.suse.com/documentation/suse-enterprise-storage-5/
https://www.suse.com/documentation/suse-caasp-3/index.html
List the image IDs of all pods in the sdh namespace:
for i in $(kubectl get pods -n sdh | tail -n +2 | cut -f1 -d" "); do echo "$i"; kubectl describe pod $i -n sdh | grep "Image ID:"; done