> **Important:** You are viewing the `master` branch of this repository. This branch is now stable; all new functionality will be implemented in the `main` branch. Switching between these branches requires you to adjust your playbooks. Detailed instructions are outlined in MIGRATING.md.
>
> If you are not ready to implement these changes just yet, you can continue to consume the `master` branch. However, the default branch of the repository will change from `master` to `main`. To continue to consume the `master` branch, be sure to explicitly reference it when cloning the repository:
>
> ```shell
> git clone -b master https://github.com/IBM/ibm-spectrum-scale-install-infra.git collections
> ```
>
> See Installation Instructions for details.
Ansible project with multiple roles for installing and configuring IBM Spectrum Scale (GPFS).
## Table of Contents

- Features
- Versions
- Prerequisites
- Installation Instructions
- Optional Role Variables
- Available Roles
- Cluster Membership
- Limitations
- Troubleshooting
- Reporting Issues and Feedback
- Contributing Code
- Disclaimer
- Copyright and License

## Features
- Pre-built infrastructure (using a static inventory file)
- Dynamic inventory file
- Support for RHEL 7 on x86_64, PPC64 and PPC64LE
- Support for RHEL 8 on x86_64 and PPC64LE
- Support for Ubuntu 20 on x86_64 and PPC64LE
- Support for SLES 15 on x86_64 and PPC64LE
- Disable SELinux (`scale_prepare_disable_selinux: true`), by default false
- Disable firewall (`scale_prepare_disable_firewall: true`), by default true
- Disable firewall ports
- Install and start NTP
- Create /etc/hosts mappings
- Open firewall ports
- Generate SSH key
- User must set up base OS repositories
- Install yum-utils package
- Install gcc-c++, kernel-devel, make
- Install elfutils, elfutils-devel (RHEL 8 specific)
- Install core Spectrum Scale packages on Linux nodes
- Install Spectrum Scale license packages on Linux nodes
- Compile or install pre-compiled Linux kernel extension (mmbuildgpl)
- Configure client and server license
- Assign default quorum nodes (maximum 7 quorum nodes) if the user has not defined them in the inventory
- Assign default manager nodes (all nodes will act as manager nodes) if the user has not defined them in the inventory
- Create new cluster (`mmcrcluster -N /var/mmfs/tmp/NodeFile -C {{ scale_cluster_clustername }}`)
- Create cluster with profiles
- Create Cluster with daemon and admin network
- Add new node into existing cluster
- Configure node classes
- Define configuration parameters based on node classes
- Configure NSDs and file system
- Configure NSDs without file system
- Extend NSDs and file system
- Add disks to existing file systems
- Install Spectrum Scale management GUI packages on GUI designated nodes
- Maximum of 3 management GUI nodes can be configured
- Install performance monitoring sensor packages on all Linux nodes
- Install performance monitoring packages on all GUI designated nodes
- Configure performance monitoring and collectors
- Configure HA federated mode collectors
- Install Spectrum Scale callhome packages on all cluster nodes
- Configure callhome
- Install Spectrum Scale SMB or NFS on selected cluster nodes
- Install Spectrum Scale OBJECT on selected cluster nodes (5.1.1.0)
- CES IPV4 or IPV6 support
- CES interface mode support
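The default quorum assignment mentioned in the feature list above (pick at most 7 quorum nodes when the user defines none in the inventory) can be sketched roughly as follows. This is an illustrative re-implementation for clarity only; the actual roles implement this logic in Ansible tasks, not Python:

```python
def assign_default_quorum(nodes, max_quorum=7):
    """Mark the first min(len(nodes), max_quorum) nodes as quorum nodes.

    Illustrative sketch only; mirrors the documented default behavior,
    not the project's actual task code.
    """
    quorum = set(nodes[:max_quorum])
    return {node: node in quorum for node in nodes}

# Example: with 9 nodes, only the first 7 become quorum nodes
result = assign_default_quorum([f"scale{i:02d}" for i in range(1, 10)])
print(sum(result.values()))  # → 7
```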
## Versions

The following Ansible versions are tested:
- 2.9 and above
The following IBM Spectrum Scale versions are tested:
- 5.0.4.0
- 5.0.4.1
- 5.0.4.2
- 5.0.5.X
- 5.0.5.2 for CES (SMB and NFS)
- 5.1.0.0
- 5.1.1.0 with Object
Specific OS requirements:
- For CES (SMB/NFS) on SLES15, Python 3 is required.
- For CES (OBJECT), Red Hat Enterprise Linux 8.x is required.
## Prerequisites

Users need to have a basic understanding of Ansible concepts to be able to follow these instructions. Refer to the Ansible User Guide if this is new to you.

1. **Install Ansible on any machine (control node)**

   ```shell
   $ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
   $ python get-pip.py
   $ pip3 install ansible==2.9
   ```

   Refer to the Ansible Installation Guide for detailed installation instructions.

   Note that Python 3 is required for certain functionality of this project to work. Ansible should automatically detect and use Python 3 on managed machines; refer to the Ansible documentation for details and workarounds.
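If the automatic interpreter detection picks the wrong Python on a managed node, you can pin it explicitly with the standard Ansible `ansible_python_interpreter` variable. A minimal sketch (the group name and interpreter path below are assumptions for illustration):

```ini
# hosts — force Python 3 on all nodes of a (hypothetical) group
[cluster01:vars]
ansible_python_interpreter=/usr/bin/python3
```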
2. **Download Spectrum Scale packages**

   - A Developer Edition Free Trial is available at this site: https://www.ibm.com/account/reg/us-en/signup?formid=urx-41728
   - Customers who have previously purchased Spectrum Scale can obtain entitled versions from IBM Fix Central. Visit https://www.ibm.com/support/fixcentral and search for 'IBM Spectrum Scale (Software defined storage)'.
3. **Create password-less SSH keys between all Spectrum Scale nodes in the cluster**

   A prerequisite for installing Spectrum Scale is that password-less SSH must be configured among all nodes in the cluster. Password-less SSH must be configured and verified with the FQDN, hostname, and IP of every node to every node.

   Example:

   ```shell
   $ ssh-keygen
   $ ssh-copy-id -oStrictHostKeyChecking=no node1.gpfs.net
   $ ssh-copy-id -oStrictHostKeyChecking=no node1
   $ ssh-copy-id -oStrictHostKeyChecking=no
   ```

   Repeat this process for all nodes to themselves and to all other nodes.
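Because every node must be reachable by both FQDN and short hostname, distributing keys by hand quickly becomes repetitive. The small helper below prints the required `ssh-copy-id` commands for a list of nodes; it is a dry-run sketch (the node names are placeholders), so remove the `echo` prefix when you want the commands to actually run:

```shell
# Print the ssh-copy-id commands needed for each node (dry run).
# Node names are placeholders; substitute your own.
gen_ssh_copy_cmds() {
  for node in "$@"; do
    short=${node%%.*}   # short hostname without the domain part
    echo "ssh-copy-id -oStrictHostKeyChecking=no $node"
    echo "ssh-copy-id -oStrictHostKeyChecking=no $short"
  done
}

gen_ssh_copy_cmds node1.gpfs.net node2.gpfs.net
```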
## Installation Instructions

1. **Create project directory on Ansible control node**

   The preferred way of accessing the roles provided by this project is by placing them inside the `collections/ansible_collections/ibm/spectrum_scale` directory of your project, adjacent to your Ansible playbook. Simply clone the repository to the correct path:

   ```shell
   $ mkdir my_project
   $ cd my_project
   $ git clone -b master https://github.com/IBM/ibm-spectrum-scale-install-infra.git collections/ansible_collections/ibm/spectrum_scale
   ```

   Be sure to clone the project under the correct subdirectory:

   ```
   my_project/
   ├── collections/
   │   └── ansible_collections/
   │       └── ibm/
   │           └── spectrum_scale/
   │               └── ...
   ├── hosts
   └── playbook.yml
   ```
   **Alternatives (now deprecated!)**

   Alternatively, you can clone the project repository and create your Ansible playbook inside the repository's directory structure:

   ```shell
   $ git clone -b master https://github.com/IBM/ibm-spectrum-scale-install-infra.git
   $ cd ibm-spectrum-scale-install-infra
   ```

   Yet another alternative: you can define an Ansible environment variable to make the roles accessible in any external project directory:

   ```shell
   $ export ANSIBLE_ROLES_PATH=$(pwd)/ibm-spectrum-scale-install-infra/roles/
   ```
2. **Create Ansible inventory**

   Define Spectrum Scale nodes in the Ansible inventory (e.g. `hosts`) in the following format:

   ```ini
   # hosts:
   [cluster01]
   scale01  scale_cluster_quorum=true   scale_cluster_manager=true
   scale02  scale_cluster_quorum=true   scale_cluster_manager=true
   scale03  scale_cluster_quorum=true   scale_cluster_manager=false
   scale04  scale_cluster_quorum=false  scale_cluster_manager=false
   scale05  scale_cluster_quorum=false  scale_cluster_manager=false
   ```

   The above is just a minimal example. It defines Ansible variables directly in the inventory. There are other ways to define variables, such as host variables and group variables.

   Numerous variables are available, which can be defined either way, to customize the behavior of the roles. Refer to VARIABLES.md for a full list of all supported configuration options.
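For instance, the same variables can be moved out of the inventory file into a standard Ansible group variables file. This is a minimal sketch; the file layout follows Ansible conventions, and the cluster name value is a placeholder:

```yaml
# group_vars/cluster01.yml (hypothetical values)
scale_cluster_clustername: gpfs-demo.example.com
scale_cluster_quorum: false   # per-host settings in the inventory still take precedence
```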
3. **Create Ansible playbook**

   The basic Ansible playbook (e.g. `playbook.yml`) looks as follows:

   ```yaml
   # playbook.yml:
   ---
   - hosts: cluster01
     collections:
       - ibm.spectrum_scale
     vars:
       - scale_install_localpkg_path: /path/to/Spectrum_Scale_Standard-5.0.4.0-x86_64-Linux-install
     roles:
       - core/precheck
       - core/node
       - core/cluster
       - core/postcheck
   ```

   Again, this is just a minimal example. There are different installation methods available, each offering a specific set of options:

   - Installation from (existing) YUM repository (see samples/playbook_repository.yml)
   - Installation from remote installation package (see samples/playbook_remotepkg.yml)
   - Installation from local installation package (see samples/playbook_localpkg.yml)
   - Installation from single directory package path (see samples/playbook_directory.yml)

   Note: Sample playbooks now contain Ansible collection syntax, which requires the project to be cloned into the `collections/ansible_collections/ibm/spectrum_scale/` subdirectory. Alternative sample playbooks with prior, non-collection syntax can be found in the samples/legacy folder.

   Refer to VARIABLES.md for a full list of all supported configuration options.
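As an example of switching installation methods, the YUM repository method replaces the local-package variable with a repository URL variable. The sketch below is an assumption based on the sample playbook names; verify the exact variable name against VARIABLES.md and samples/playbook_repository.yml, and note that the URL is a placeholder:

```yaml
# Sketch of a repository-based install (variable name and URL are assumptions)
---
- hosts: cluster01
  collections:
    - ibm.spectrum_scale
  vars:
    - scale_install_repository_url: http://infraserv.example.com/gpfs_rpms/
  roles:
    - core/precheck
    - core/node
    - core/cluster
    - core/postcheck
```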
4. **Run the playbook to install and configure the Spectrum Scale cluster**

   - Using the `ansible-playbook` command:

     ```shell
     $ ansible-playbook -i hosts playbook.yml
     ```

   - Using the automation script:

     ```shell
     $ cd samples/
     $ ./ansible.sh
     ```

     Note: An advantage of using the automation script is that it generates log files, named based on the date and time, in the `/tmp` directory.
5. **Playbook execution screen**

   Playbook execution starts here:

   ```shell
   $ ./ansible.sh
   Running ansible-playbook -i hosts playbook.yml

   PLAY [cluster01] **********************************************************

   TASK [Gathering Facts] ****************************************************
   ok: [scale01]
   ok: [scale02]
   ok: [scale03]
   ok: [scale04]
   ok: [scale05]

   TASK [common : check | Check Spectrum Scale version] **********************
   ok: [scale01]
   ok: [scale02]
   ok: [scale03]
   ok: [scale04]
   ok: [scale05]
   ...
   ```

   Playbook recap:

   ```shell
   PLAY RECAP ****************************************************************
   scale01 : ok=0 changed=65 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
   scale02 : ok=0 changed=59 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
   scale03 : ok=0 changed=59 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
   scale04 : ok=0 changed=59 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
   scale05 : ok=0 changed=59 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
   ```
## Optional Role Variables

Users can define variables to override default values and customize the behavior of the roles. Refer to VARIABLES.md for a full list of all supported configuration options.
Additional functionality can be enabled by defining further variables. Browse the examples in the samples/ directory to learn how to:
- Configure storage and file systems (see samples/playbook_storage.yml)
- Configure node classes and Spectrum Scale configuration attributes (see samples/playbook_nodeclass.yml)
- Deploy Spectrum Scale using JSON inventory (see samples/playbook_json_ces.yml)
## Available Roles

The following roles are available for you to reuse when assembling your own playbook:

Note that Core GPFS is the only mandatory role; all other roles are optional. Each of the optional roles requires additional configuration variables. Browse the examples in the samples/ directory to learn how to:
- Configure Protocol Services (SMB & NFS) (see samples/playbook_ces.yml)
- Configure Protocol Services (HDFS) (see samples/playbook_ces_hdfs.yml)
- Configure Protocol Services (OBJECT) (see samples/playbook_ces_object.yml)
- Configure Call Home (see samples/playbook_callhome.yml)
- Configure File Audit Logging (see samples/playbook_fileauditlogging.yml)
- Configure cluster with daemon and admin network (see samples/daemon_admin_network)
- Configure remotely mounted filesystems (see samples/playbook_remote_mount.yml)
Note: Sample playbooks now contain Ansible collection syntax, which requires the project to be cloned into the
collections/ansible_collections/ibm/spectrum_scale/
subdirectory. Alternative samples of playbooks with prior, non-collection syntax can be found in the samples/legacy folder.
## Cluster Membership

All hosts in the play are configured as nodes in the same Spectrum Scale cluster. If you want to add hosts to an existing cluster, then add at least one node from that existing cluster to the play.
You can create multiple clusters by running multiple plays. Note that you will need to reload the inventory to clear dynamic groups added by the Spectrum Scale roles:
```yaml
- name: Create one cluster
  hosts: cluster01
  roles:
    ...

- name: Refresh inventory to clear dynamic groups
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - meta: refresh_inventory

- name: Create another cluster
  hosts: cluster02
  roles:
    ...
```
## Limitations

The roles in this project can (currently) be used to create new clusters or extend existing clusters. Similarly, new file systems can be created or extended. But these roles do not remove existing nodes, disks, file systems, or node classes. This is done on purpose, and it is also the reason why the roles cannot be used, for example, to change the file system pool of a disk. Changing the pool requires you to remove and then re-add the disk from a file system, which is not currently in the scope of these roles.

Furthermore, upgrades are not currently in scope of these roles. Spectrum Scale supports rolling online upgrades (by taking down one node at a time), but this requires careful planning and monitoring, and might require manual intervention in case of unforeseen problems.
## Troubleshooting

The roles in this project store configuration files in `/var/mmfs/tmp` on the first host in the play. These configuration files are kept to determine if definitions have changed since the previous run, and to decide if it is necessary to run certain Spectrum Scale commands (again). When experiencing problems, one can simply delete these configuration files from `/var/mmfs/tmp` in order to clear the cache; doing so forces re-application of all definitions upon the next run. The cache is then automatically re-generated, but as a downside, that run may take longer than expected because it might re-run otherwise unnecessary Spectrum Scale commands.
## Reporting Issues and Feedback

Please use the issue tracker to ask questions, report bugs, and request features.
## Contributing Code

We welcome contributions to this project; see CONTRIBUTING.md for more details.
## Disclaimer

Please note: all playbooks / modules / resources in this repository are released for use "AS IS" without any warranties of any kind, including, but not limited to, their installation, use, or performance. We are not responsible for any damage, charges, or data loss incurred with their use. You are responsible for reviewing and testing any scripts you run thoroughly before use in any production environment. This content is subject to change without notice.
## Copyright and License

Copyright IBM Corporation 2020, released under the terms of the Apache License 2.0.