OpenStack Best Practices

Cinder

Supported OpenStack Releases

  • NexentaStor 4.x - Juno+

  • NexentaStor 5.x - Juno+

  • NexentaEdge 1.2 - Newton+

Feature List

Feature NexentaStor 4 NexentaStor 5.1 NexentaEdge 1.2 Required
Create volume implemented implemented implemented yes
Delete volume implemented implemented implemented yes
Attach volume implemented implemented implemented yes
Detach volume implemented implemented implemented yes
Extend volume implemented implemented implemented yes
Create snapshot implemented implemented implemented yes
Delete snapshot implemented implemented implemented yes
Create volume from snapshot implemented implemented implemented yes
Create volume from image implemented implemented implemented yes
Clone volume implemented implemented implemented yes
Create image from volume implemented implemented implemented yes
Volume stats implemented implemented implemented yes
Migrate volume implemented implemented (NFS only) not implemented no
Retype volume implemented not implemented not implemented no
Volume replication not implemented not implemented not implemented no
Consistency groups not implemented not implemented not implemented no
iSCSI LUN Mapping not implemented not implemented not implemented no
QoS (rate limit) not implemented not implemented not implemented no
Extend volume while attached implemented? implemented? implemented? no
Snapshot Revert not implemented implemented in Pike and later not implemented no

Cinder Driver Prerequisites

NexentaStor 4.0

  • Storage appliance must be configured and licensed

  • Volume (zpool) must be created

  • HA configured and VIP available

  • (NFS only) NFS share created

  • Storage Network configured between NS Appliance and OpenStack Hypervisors (Recommended 10GBE, MTU 9000)

NexentaStor 5.0

  • Storage appliance must be configured and licensed

  • Pool (zpool) must be created

  • (iSCSI only) - Volume group must be created

  • (NFS only) - File System must be created and shared over NFS

  • Storage Network configured between NS Appliance and OpenStack Hypervisors (Recommended 10GBE, MTU 9000)

NexentaEdge 1.2

  • System must be initialized and licensed

  • Cluster/Tenant/Bucket created

  • NFS or iSCSI Gateway configured

  • Storage Network configured between NexentaEdge Gateway and OpenStack Hypervisors (Recommended 10GBE, MTU 9000)

Where to get Cinder Drivers?

It’s recommended to get the latest driver from Nexenta’s repository: https://github.com/Nexenta/cinder

The branches in the repository correspond to OpenStack releases.

The following command can be used to download the correct version without having to switch branches after cloning:

git clone -b stable/mitaka https://github.com/Nexenta/cinder

Nexenta drivers are located under the following path: https://github.com/Nexenta/cinder/tree/stable/mitaka/cinder/volume/drivers/nexenta

The path includes drivers for NexentaStor 4.x, NexentaStor 5.x, and NexentaEdge. Make sure to copy the whole folder.

Installation Steps

  1. Determine cinder driver location path used in your environment

  2. Clone or download the correct version of the drivers (unzip if downloaded) and copy them to the Cinder location. For example, for the Mitaka release:

$ git clone -b stable/mitaka https://github.com/Nexenta/cinder nexenta-cinder
$ cp -rf nexenta-cinder/cinder/volume/drivers/nexenta /usr/lib/python2.7/dist-packages/cinder/volume/drivers
  3. Configure cinder.conf

  4. Restart the Cinder service

    a. Systemd-based system: $ sudo systemctl restart openstack-cinder-volume.service

    b. Upstart/SysV-based system: $ sudo service cinder-volume restart

NexentaStor 4.x NFS - List of all available options

Parameter name Default Choices Description
nexenta_dataset_compression on [on, off, gzip, gzip-1, gzip-2, gzip-3, gzip-4, gzip-5, gzip-6, gzip-7, gzip-8, gzip-9, lzjb, zle, lz4] Compression value for new ZFS folders.
nexenta_dataset_description Human-readable description for the folder.
nexenta_sparse False Boolean Enables or disables the creation of sparse datasets
nexenta_rrmgr_compression 0 1..9 Enable stream compression, level 1..9. 1 - gives best speed; 9 - gives best compression.
nexenta_rrmgr_tcp_buf_size 4096 TCP buffer size in kilobytes.
nexenta_rrmgr_connections 2 Number of TCP connections.
nexenta_shares_config /etc/cinder/nfs_shares File with the list of available nfs shares (NexentaStor 4 only)
nexenta_mount_point_base $state_path/mnt Base directory that contains NFS share mount points
nexenta_sparsed_volumes True Boolean Enables or disables the creation of volumes as sparsed files that take no space. If disabled (False), volume is created as a regular file, which takes a long time.
nexenta_nms_cache_volroot True Boolean If set True cache NexentaStor appliance volroot option value.

NexentaStor 4.x NFS minimal cinder.conf

[DEFAULT]
driver_ssl_cert_verify = False

[ns_nfs]
volume_driver = cinder.volume.drivers.nexenta.nfs.NexentaNfsDriver
volume_backend_name = ns_nfs
nexenta_shares_config = /etc/cinder/shares.cfg
nfs_shares_config = /etc/cinder/shares.cfg
nas_secure_file_operations = False

Note: For the NexentaStor 4.x NFS driver, a shares config file must be present. The file consists of one or more lines, each with two space-separated columns: the first column is the NFS filesystem path used for the mount command, and the second is the URL for REST calls. Example:

10.0.0.1:/volumes/Vol1/nfs_share http://admin:nexenta@10.0.0.1:8457
10.0.0.100:/volumes/Vol2/cinder-volumes http://admin:secret@10.0.0.100:8457
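A malformed shares file is a common source of startup errors, so a quick format check can save a debugging cycle. The sketch below writes the example entries to a temporary copy and verifies that every non-empty line has exactly two columns; the /tmp path is only for illustration, on a real node point the check at /etc/cinder/shares.cfg.

```shell
# Write the example shares file to a temporary location for illustration.
cat > /tmp/shares.cfg <<'EOF'
10.0.0.1:/volumes/Vol1/nfs_share http://admin:nexenta@10.0.0.1:8457
10.0.0.100:/volumes/Vol2/cinder-volumes http://admin:secret@10.0.0.100:8457
EOF
# Every non-empty line must have exactly two columns: NFS path and REST URL.
awk 'NF && NF != 2 { print "bad line " NR ": " $0; bad = 1 } END { exit bad }' /tmp/shares.cfg \
  && echo "shares file OK"
```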

NexentaStor 4.x iSCSI - List of all available options

Parameter name Default Choices Description
nexenta_host IP address of Nexenta SA
nexenta_rest_port 0 HTTP(S) port to connect to Nexenta REST API server. If set to zero, 8443 is used for HTTPS and 8080 for HTTP
nexenta_rest_protocol auto [http, https, auto] Use http or https for REST connection
nexenta_user admin User name to connect to Nexenta SA.
nexenta_password nexenta Password to connect to Nexenta SA
nexenta_dataset_compression on [on, off, gzip, gzip-1, gzip-2, gzip-3, gzip-4, gzip-5, gzip-6, gzip-7, gzip-8, gzip-9, lzjb, zle, lz4] Compression value for new ZFS folders.
nexenta_dataset_description Human-readable description for the folder.
nexenta_blocksize 4096 Block size for datasets (NStor4)
nexenta_sparse False Boolean Enables or disables the creation of sparse datasets
nexenta_rrmgr_compression 0 1..9 Enable stream compression, level 1..9. 1 - gives best speed; 9 - gives best compression.
nexenta_rrmgr_tcp_buf_size 4096 TCP buffer size in kilobytes.
nexenta_rrmgr_connections 2 Number of TCP connections.
nexenta_iscsi_target_portal_port 3260 Nexenta target portal port
nexenta_volume cinder SA Pool that holds all volumes
nexenta_target_prefix iqn.1986-03.com.sun:02:cinder- IQN prefix for iSCSI targets
nexenta_target_group_prefix cinder Prefix for iSCSI target groups on SA

NexentaStor 4.x iSCSI minimal cinder.conf

[DEFAULT]
driver_ssl_cert_verify = False

[ns_iscsi]
volume_driver = cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver
volume_backend_name = ns_iscsi
nexenta_host = 10.0.0.1
nexenta_rest_port = 8457
nexenta_user = admin
nexenta_password = nexenta
nexenta_volume = tank

NexentaStor 5.x NFS - List of all available options

Parameter name Default Choices Description
nexenta_rest_address IP address of NexentaStor management REST API endpoint, can have multiple comma-separated values
nas_host Data IP address (VIP in case of HA)
nexenta_rest_port 0 HTTP(S) port to connect to Nexenta REST API server. If set to zero, 8443 is used for HTTPS and 8080 for HTTP
nexenta_use_https True Boolean Use secure HTTP for REST connection
nexenta_user admin User name to connect to Nexenta SA.
nexenta_password nexenta Password to connect to Nexenta SA
nexenta_dataset_compression lz4 [on, off, gzip, gzip-1, gzip-2, gzip-3, gzip-4, gzip-5, gzip-6, gzip-7, gzip-8, gzip-9, lzjb, zle, lz4] Compression value for new ZFS datasets.
nexenta_dataset_description Human-readable description for the folder.
nexenta_mount_point_base $state_path/mnt Base directory that contains NFS share mount points
nexenta_sparsed_volumes True Boolean Enables or disables the creation of volumes as sparsed files that take no space. If disabled (False), volume is created as a regular file, which takes a long time.

NexentaStor 5.x NFS minimal cinder.conf

[DEFAULT]
driver_ssl_cert_verify = False

[ns5_nfs]
volume_driver = cinder.volume.drivers.nexenta.ns5.nfs.NexentaNfsDriver
volume_backend_name = ns5_nfs
nas_host = 10.0.0.1
nexenta_rest_address = 10.0.1.1
nexenta_rest_port = 8443
nas_share_path = pool1/nfs_share
nexenta_user = admin
nexenta_password = Nexenta@1
nas_mount_options = vers=4
nas_secure_file_operations = False

NexentaStor 5.x NFS HA cinder.conf

[DEFAULT]
driver_ssl_cert_verify = False

[ns5_nfs]
volume_driver = cinder.volume.drivers.nexenta.ns5.nfs.NexentaNfsDriver
volume_backend_name = ns5_nfs
nas_host = 10.0.0.1
nexenta_rest_address = 10.0.1.1,10.0.1.2
nexenta_rest_port = 8443
nas_share_path = pool1/nfs_share
nexenta_user = admin
nexenta_password = Nexenta@1
nas_mount_options = vers=4
nexenta_sparsed_volumes = True
nas_secure_file_operations = False

NexentaStor 5.x iSCSI - List of all available options

Parameter name Default Choices Description
nexenta_rest_address IP address of NexentaStor management REST API endpoint, can have multiple comma separated values
nexenta_host IP address of Nexenta SA
nexenta_rest_port 0 HTTP(S) port to connect to Nexenta REST API server. If set to zero, 8443 is used for HTTPS and 8080 for HTTP
nexenta_use_https True Boolean Use secure HTTP for REST connection
nexenta_user admin User name to connect to Nexenta SA.
nexenta_password nexenta Password to connect to Nexenta SA
nexenta_dataset_compression lz4 [on, off, gzip, gzip-1, gzip-2, gzip-3, gzip-4, gzip-5, gzip-6, gzip-7, gzip-8, gzip-9, lzjb, zle, lz4] Compression value for new ZFS datasets.
nexenta_dataset_description Human-readable description for the folder.
nexenta_ns5_blocksize 32 (kilobytes) Block size for datasets (Nstor5)
nexenta_sparse False Boolean Enables or disables the creation of sparse datasets
nexenta_iscsi_target_portal_port 3260 Nexenta target portal port
nexenta_volume cinder SA Pool that holds all volumes
nexenta_target_prefix iqn.1986-03.com.sun:02:cinder- IQN prefix for iSCSI targets
nexenta_target_group_prefix cinder Prefix for iSCSI target groups on SA
nexenta_volume_group iscsi Volume group for NStor5
nexenta_iscsi_target_portals Comma-separated list of iSCSI target portals (IP or IP:port)

NexentaStor 5.x iSCSI cinder.conf minimal config

[DEFAULT]
driver_ssl_cert_verify = False

[ns5_iscsi]
volume_driver = cinder.volume.drivers.nexenta.ns5.iscsi.NexentaISCSIDriver
volume_backend_name = ns5_iscsi
nexenta_host = 10.0.0.1
nexenta_rest_address = 10.0.1.1
nexenta_rest_port = 8443
nexenta_user = admin
nexenta_password = Nexenta@1
nexenta_volume = pool1
nexenta_volume_group = iscsi

NexentaStor 5.x iSCSI cinder.conf HA config

[DEFAULT]
driver_ssl_cert_verify = False

[ns5_iscsi]
volume_driver = cinder.volume.drivers.nexenta.ns5.iscsi.NexentaISCSIDriver
volume_backend_name = ns5_iscsi
nexenta_host = 10.0.0.1
nexenta_rest_address = 10.0.1.1,10.1.1.2
nexenta_rest_port = 8443
nexenta_user = admin
nexenta_password = Nexenta@1
nexenta_volume = pool1
nexenta_volume_group = iscsi
nexenta_iscsi_target_portals = 10.0.0.2:3260,10.0.0.3:3261,10.0.0.4
nexenta_target_prefix = iqn.2005-07.com.nexenta:02:cinder
nexenta_target_group_prefix = cinder
nexenta_host_group_prefix = cinder
nexenta_luns_per_target = 128
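In the example above, 10.0.0.4 has no explicit port; entries in nexenta_iscsi_target_portals may omit it, in which case the portal presumably falls back to nexenta_iscsi_target_portal_port (3260). A small shell illustration of that defaulting (this is an illustration, not driver code):

```shell
# Expand a portal list, filling in the default iSCSI port 3260 where omitted.
portals="10.0.0.2:3260,10.0.0.3:3261,10.0.0.4"
echo "$portals" | tr ',' '\n' | awk -F: 'NF == 1 { $0 = $0 ":3260" } { print }'
# Prints:
# 10.0.0.2:3260
# 10.0.0.3:3261
# 10.0.0.4:3260
```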

NexentaEdge 1.2 iSCSI - List of all available options

Parameter name Default Choices Description
nexenta_rest_address IP address of NexentaEdge management REST API endpoint
nexenta_rest_port 0 HTTP(S) port to connect to Nexenta REST API server. If set to zero, 8443 is used for HTTPS and 8080 for HTTP
nexenta_rest_protocol auto [http, https, auto] Use http or https for REST connection
nexenta_blocksize 4096 Block size for datasets (NStor4)
nexenta_nbd_symlinks_dir /dev/disk/by-path NexentaEdge logical path of directory to store symbolic links to NBDs.
nexenta_rest_user admin User name to connect to NexentaEdge
nexenta_rest_password nexenta Password to connect to NexentaEdge
nexenta_replication_count 3 NexentaEdge iSCSI LUN object replication count.
nexenta_encryption False Defines whether NexentaEdge iSCSI LUN object has encryption enabled.
nexenta_lun_container NexentaEdge logical path of bucket for LUNs
nexenta_iscsi_service NexentaEdge iSCSI service name
nexenta_client_address NexentaEdge iSCSI Gateway client address for non-VIP service
nexenta_chunksize 32768 NexentaEdge iSCSI LUN object chunk size

NexentaEdge 1.2 iSCSI cinder.conf minimal config

[nedge_iscsi]
volume_driver=cinder.volume.drivers.nexenta.nexentaedge.iscsi.NexentaEdgeISCSIDriver
volume_backend_name = nedge
nexenta_rest_address = 10.0.0.1
nexenta_rest_port = 8080
nexenta_rest_protocol = http
nexenta_iscsi_target_portal_port = 3260
nexenta_rest_user = admin
nexenta_rest_password = nexenta
nexenta_lun_container = cl/tn/bk
nexenta_iscsi_service = iscsi
nexenta_client_address = 10.0.1.1

After configuring the cinder.conf, restart the cinder-volume service

sudo service cinder-volume restart (may differ depending on OS)

NexentaStor 4.x vs. 5.x Options Conversion Table

4.x param 5.x param Description
nexenta_host (same parameter for REST and data) nexenta_rest_address 4.x does not have a separate value for REST API management
nexenta_rest_protocol nexenta_use_https
nexenta_folder volume_group iSCSI only for 5.x, both protocols for 4.x
nfs_shares_config nas_share_path 5.x does not use shares.cfg
nexenta_iscsi_target_portal_groups nexenta_iscsi_target_portals and nexenta_iscsi_target_portal_port 4.x exposes TPGs, while 5.x creates them from a list of portals (IPs)

iSCSI Multipath

OpenStack Nova provides the ability to use iSCSI multipath. To enable multipath, add the following line to nova.conf in the [libvirt] section:

[libvirt]
iscsi_use_multipath = True

For this change to take effect, restart the nova-compute service: $ sudo service nova-compute restart
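To confirm the flag is actually set before restarting, you can parse the [libvirt] section with awk. The sketch below runs against a sample file in /tmp for illustration; on a compute node, point it at /etc/nova/nova.conf.

```shell
# Sample nova.conf for illustration only.
cat > /tmp/nova.conf <<'EOF'
[libvirt]
iscsi_use_multipath = True
EOF
# Print the value of iscsi_use_multipath from the [libvirt] section.
awk -F' *= *' '/^\[libvirt\]/ { s = 1; next } /^\[/ { s = 0 }
               s && $1 == "iscsi_use_multipath" { print $2 }' /tmp/nova.conf
# Prints: True
```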

Backup

This section describes how to configure the cinder-backup service with the Cinder NFS backup driver on top of a NexentaStor NFS share. Official documentation: NFS backup driver.

Example section for cinder.conf:

[DEFAULT]
backup_driver = cinder.backup.drivers.nfs
backup_share = 10.1.1.1:/pool/nfs/backup
backup_mount_options = vers=4

Note: 10.1.1.1 - IP address of NexentaStor, /pool/nfs/backup - NFS share path.

Steps for NexentaStor 4.x:

nmc@host1:/$ create folder pool/nfs/backup
nmc@host1:/$ share folder pool/nfs/backup nfs                                                                                           
Auth Type            : sys
Anonymous            : false
Read-Write           :
Read-Only            : 
Root                 : 
Extra Options        : uidmap=*:root:@10.1.1.2
Recursive            : true
Modifed NFS share for folder 'pool/nfs/backup'

Note: 10.1.1.2 - IP address of Openstack Cinder host.

Steps for NexentaStor 5.x:

CLI@host> filesystem create -p pool/nfs/backup
CLI@host> nfs share -o uidMap='*:root:@10.1.1.2' pool/nfs/backup

Note: 10.1.1.2 - IP address of Openstack Cinder host.

Cinder and Replication

  • Replication on Consistency group level

  • Replication of clones will result in full filesystems (not efficient from a capacity perspective)

  • Cinder snapshots are omitted in replication in 5.1.x (a fix is expected in 5.2 FP1)

Troubleshooting

Grep for "Traceback" in your OpenStack log folder; the default is /var/log/<project>, for example /var/log/cinder/cinder-volume.log.

Most of the errors related to storage are in Cinder or Nova logs.

If the error is not self-explanatory, enable debug logging, restart the service, and try to reproduce the error. Debug logging traces all calls to Nexenta, which helps narrow down the possible cause of the error.
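For example, a recursive grep with line numbers, demonstrated here against a sample log in /tmp; on a real node, point it at /var/log/cinder/ (or your distribution's log directory):

```shell
# Sample log file standing in for /var/log/cinder/cinder-volume.log.
mkdir -p /tmp/cinder-logs
printf 'INFO starting up\nTraceback (most recent call last):\n  File "nfs.py", line 1\n' \
  > /tmp/cinder-logs/cinder-volume.log
# Recursively search for tracebacks, printing file name and line number.
grep -rn "Traceback" /tmp/cinder-logs/
# Prints: /tmp/cinder-logs/cinder-volume.log:2:Traceback (most recent call last):
```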

To enable debug in cinder, add the following line to cinder.conf:

[DEFAULT]
debug=True

And restart cinder-volume: sudo service cinder-volume restart

Glance

What it is:

How to set it up:

Prerequisites

Steps

Validation

Manila

Overview

ToDo

Supported operations are:

  • Create NFS share.

  • Delete NFS share.

  • Extend NFS share.

  • Allow NFS share access.

  • Only IP access type is supported for NFS.

  • RW and RO access is supported.

  • Deny NFS share access.

  • Create snapshot.

  • Delete snapshot.

  • Create share from snapshot.

  • Thin/thick provisioning.

Requirements

  • NexentaStor appliance provisioned, configured, and licensed

  • OpenStack deployed with the Manila plugin

How to setup Manila Plugin

Deployment

Create DevStack user:

root# useradd -s /bin/bash -d /opt/stack -m stack
root# echo "stack ALL=(ALL) NOPASSWD: ALL" | tee /etc/sudoers.d/stack
root# passwd stack

Deploy DevStack environment:

stack$ git clone https://git.openstack.org/openstack-dev/devstack
stack$ cd devstack
stack$ cat > local.conf <<'EOF'
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
USE_SCREEN=True
RECLONE=True
enable_plugin manila https://github.com/openstack/manila
EOF

stack$ ./stack.sh

manila.conf driver section examples:

NStor4:

[DEFAULT]
enabled_share_backends = ns4_nfs
enabled_share_protocols = NFS

[ns4_nfs]
service_instance_user = manila
service_image_name = manila-service-image
path_to_private_key = /home/ubuntu/.ssh/id_rsa
path_to_public_key = /home/ubuntu/.ssh/id_rsa.pub
share_backend_name = <backend name to be used in share_types>
driver_handles_share_servers = False
share_driver = manila.share.drivers.nexenta.nexenta_nas.NexentaNasDriver
nexenta_host = <Nexenta appliance IP>
nexenta_volume = <volume name on appliance>
nexenta_nfs_share = <nfs_share_name>
nexenta_user = <NexentaStor username>
nexenta_password = <NexentaStor password>
nexenta_thin_provisioning = False/True

NStor5:

[DEFAULT]
enabled_share_backends = ns5_nfs
enabled_share_protocols = NFS

[ns5_nfs]
service_instance_user = manila
service_image_name = manila-service-image
path_to_private_key = /home/ubuntu/.ssh/id_rsa
path_to_public_key = /home/ubuntu/.ssh/id_rsa.pub
share_backend_name = <backend name to be used in share_types>
driver_handles_share_servers = False
share_driver = manila.share.drivers.nexenta.nexenta_nas.NexentaNasDriver
nexenta_host = <Nexenta appliance IP>
nexenta_rest_port = 8443
nexenta_volume = <pool name on appliance>
nexenta_share = <dataset name within the pool>
nexenta_user = <NexentaStor username>
nexenta_password = <NexentaStor password>
nexenta_thin_provisioning = False/True

List of all available options:

Parameter name Default Choices Description
nexenta_host IP address of Nexenta storage appliance.
nexenta_rest_port 8457 Port to connect to Nexenta REST API server.
nexenta_retry_count 6 Number of retries for unsuccessful API calls.
nexenta_rest_protocol auto [http, https, auto] Use http or https for REST connection.
nexenta_user admin User name to connect to Nexenta SA.
nexenta_password Password to connect to Nexenta SA.
nexenta_volume volume1 Volume name on NexentaStor4
nexenta_pool pool1 Pool name on NexentaStor5.
nexenta_mount_point_base $state_path/mnt Base directory that contains NFS share mount points.
nexenta_nfs_share nfs_share Parent folder on NexentaStor that will contain all manila folders.
nexenta_dataset_compression on [on, off, gzip, gzip-1, gzip-2, gzip-3, gzip-4, gzip-5, gzip-6, gzip-7, gzip-8,gzip-9, lzjb, zle, lz4] Compression value for new ZFS folders.
nexenta_dataset_dedupe off [on, off, sha256, verify] Deduplication value for new ZFS folders.
nexenta_thin_provisioning True Boolean If True, shares will not be space guaranteed and overprovisioning will be enabled.

Escalating an Issue to Support

Please provide the following information:

  • NexentaStor/NexentaEdge version
  • OpenStack version (e.g. Icehouse, Juno, Kilo, Liberty, Mitaka)
  • nova-manage version (for reference use https://wiki.openstack.org/wiki/Releases)
  • Cinder driver version: see the comments at the top of <cinder_lib_location>/cinder/volume/drivers/nexenta/nfs.py (or iscsi.py)
  • OpenStack service type (e.g. Cinder, Glance, Manila, Swift)
  • OS version (e.g. Ubuntu 14.04, RHEL 7.0.x, CentOS 7.0.x): cat /etc/system-release
  • HA configuration (HA, active-active); provide cluster status info using nmc -c "show group rsf-cluster"
  • Collector bundle
  • Copy of the Cinder drivers folder (cinder/volume/drivers/)
  • cinder.conf file (/etc/cinder/cinder.conf, default path)
  • Cinder volume log (/var/log/cinder/cinder-volume.log, default path)
  • Cinder scheduler log (/var/log/cinder/cinder-scheduler.log, default path)
  • Steps to reproduce the issue, screenshot of console log, any custom scripts that customer ran, etc
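The driver version requested above lives in a constant near the top of the driver file. A hedged sketch of extracting it; the sample file in /tmp stands in for the real path, and the exact VERSION line format is an assumption, so fall back to reading the file header by hand if it does not match:

```shell
# Sample driver file; on a real node use
# <cinder_lib_location>/cinder/volume/drivers/nexenta/nfs.py (or iscsi.py).
printf '# Nexenta NFS driver\nVERSION = "1.3.1"\n' > /tmp/nfs.py
# Print the first VERSION assignment found in the file.
grep -m1 "VERSION *=" /tmp/nfs.py
# Prints: VERSION = "1.3.1"
```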