Disk quantity mismatch between ARM template and scripts
Closed this issue · 2 comments
themorey commented
In what area(s)?
/area azlustre
Expected Behavior
I selected 2 disks for the MGS/MDS in the ARM template, but the scripts only configured 1 disk; the other is left unformatted and unused.
Similarly, for the OSS I selected 6 disks but only 5 were used.
Actual Behavior
MGS/MDS:
```
sda     2:0:0:0   30G
├─sda1            500M  /boot
├─sda2            29G   /
├─sda14           4M
└─sda15           495M  /boot/efi
sdb     4:0:0:0   1T
sdc     4:0:0:1   1T    /mnt/mgsmds

formatting backing filesystem ldiskfs on /dev/sdc
  target name   LustreFS:MDT0000
  kilobytes     1073741824
  options       -J size=4096 -I 1024 -i 2560 -q -O dirdata,uninit_bg,^extents,dir_nlink,quota,huge_file,flex_bg -E lazy_journal_init -F
mkfs_cmd = mke2fs -j -b 4096 -L LustreFS:MDT0000 -J size=4096 -I 1024 -i 2560 -q -O dirdata,uninit_bg,^extents,dir_nlink,quota,huge_file,flex_bg -E lazy_journal_init -F /dev/sdc 1073741824k
```
OSS:
```
sda     2:0:0:0   30G
├─sda1            500M  /boot
├─sda2            29G   /
├─sda14           4M
└─sda15           495M  /boot/efi
sdb     4:0:0:0   1T
sdc     4:0:0:1   1T
└─sdc1            1024G
sdd     4:0:0:2   1T
└─sdd1            1024G
sde     4:0:0:3   1T
└─sde1            1024G
sdf     4:0:0:4   1T
└─sdf1            1024G
sdg     4:0:0:5   1T
└─sdg1            1024G

creating raid (/dev/md10) from 5 devices : /dev/sd[c-m]
devices= /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
```
Steps to Reproduce the Problem
I believe the issue stems from using a VM size with no ephemeral disk. I used a Das_v5, so the first available data disk is sdb, but the scripts skip sdb because that device name is typically the ephemeral disk.
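A minimal sketch of the fragile pattern described above (hypothetical, not the repo's exact code): if the script assumes sda is the OS disk and sdb is the ephemeral disk, and therefore globs data disks starting at sdc, then on a VM without an ephemeral disk the real first data disk (sdb) is silently missed and only N-1 disks are picked up.

```shell
# Hypothetical reconstruction of the assumption: data disks start at sdc
# because sdb is presumed to be the ephemeral disk. The directory argument
# is only for demonstration; on a real VM this would glob /dev directly.
pick_data_disks() {
    local dir="${1:-/dev}"
    # misses sdb entirely when the VM size has no ephemeral disk
    ls "$dir"/sd[c-m] 2>/dev/null
}
```

With 6 data disks attached to a Das_v5 (sdb through sdg), this glob returns only 5 devices, matching the OSS log above.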
edwardsp commented
Fixed by using the Azure disk symlinks rather than globbing device names from /dev.
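A hedged sketch of the symlink approach (not necessarily the exact fix merged in the repo): on Azure Linux VMs, udev rules create stable per-LUN symlinks under /dev/disk/azure/scsi1/lun&lt;N&gt; for attached data disks, so resolving those avoids any assumption about which sdX name the ephemeral disk occupies.

```shell
# Sketch: enumerate data disks via the Azure udev symlinks instead of
# guessing /dev/sdX names. The directory argument exists only so the
# function can be exercised against a fake tree; it defaults to the
# real Azure symlink directory.
list_data_disks() {
    local dir="${1:-/dev/disk/azure/scsi1}"
    local lun
    for lun in "$dir"/lun*; do
        [ -e "$lun" ] || continue   # no LUN symlinks present
        readlink -f "$lun"          # print the real block device path
    done
}
```

Usage would be something like `devices=$(list_data_disks | sort -V)`, which yields every attached data disk regardless of whether the VM size has an ephemeral disk.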