Configure array of U.2 drives for SSD pool instead of SATA SSDs?
In my quest to max out the 10G link for my editing pool, I would like to investigate using an array of U.2 NVMe drives instead of the Samsung QVO SATA SSDs I'm currently using.
Parts I would need:
- Some sort of U.2 drive (maybe contact Kioxia? I might have a PM6 around...)
- An SFF-8654 74-pin to 2x SFF-8639 68-pin cable (SlimSAS to U.2), or two of them if I do 4 drives
- An SSD Mounting Bracket to hold the drives in the back
If I did this upgrade, I'd likely yank the 4x SATA SSDs so I can keep those drive bays clear for future HDD vdev expansion.
Tasks:
- Install new drives
- Configure zpool for NVMe storage (sketch below)
- Configure samba shares for NVMe storage (ingest, zowiebox; sketch below)
- Get ZowieBox to connect to samba share for direct ingest
- Configure sanoid/syncoid backups so snapshots + backups are present on NAS02 for nvmepool (sketch below)
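For the zpool task, a minimal sketch of the pool creation, using the by-id device names that appear in the `zpool status` output further down; the ashift, compression, and recordsize values here are assumptions, not settled choices:

```bash
# Mirror the two U.2 drives by their stable /dev/disk/by-id names
# (ashift=12 and lz4 compression are assumed, not confirmed, choices):
sudo zpool create -o ashift=12 -O compression=lz4 nvmepool mirror \
  /dev/disk/by-id/nvme-KIOXIA_KCD8XRUG15T3_8240A01KTY97 \
  /dev/disk/by-id/nvme-KIOXIA_KCD8XRUG15T3_8240A01MTY97

# One dataset per planned share; a 1M recordsize suits large video files:
sudo zfs create -o recordsize=1M nvmepool/ingest
sudo zfs create -o recordsize=1M nvmepool/zowiebox
```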
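For the samba task, a sketch of what the share definitions could look like; the paths (assuming the default ZFS mountpoints), the option values, and the append-to-smb.conf approach are all assumptions to adapt:

```bash
# Hypothetical share definitions for the two datasets (options are assumptions):
sudo tee -a /etc/samba/smb.conf > /dev/null <<'EOF'

[ingest]
  path = /nvmepool/ingest
  read only = no
  browseable = yes

[zowiebox]
  path = /nvmepool/zowiebox
  read only = no
  browseable = yes
EOF

sudo systemctl restart smbd
```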
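And for the sanoid/syncoid task, a sketch of a snapshot policy plus replication over to NAS02; the retention numbers and the target dataset path are assumptions:

```bash
# Snapshot policy for the new datasets (retention numbers are assumptions):
sudo tee -a /etc/sanoid/sanoid.conf > /dev/null <<'EOF'

[nvmepool/ingest]
  use_template = production

[nvmepool/zowiebox]
  use_template = production

[template_production]
  frequently = 0
  hourly = 36
  daily = 30
  monthly = 3
  autosnap = yes
  autoprune = yes
EOF

# Replicate to NAS02; the target pool/dataset path here is hypothetical:
syncoid -r nvmepool root@nas02:backups/nvmepool
```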
I did contact Kioxia, and they may send over something a bit more substantial than the PM6! Heh.
```
jgeerling@nas01:~$ zpool status -v nvmepool
  pool: nvmepool
 state: ONLINE
  scan: none requested
config:

    NAME                                       STATE     READ WRITE CKSUM
    nvmepool                                   ONLINE       0     0     0
      mirror-0                                 ONLINE       0     0     0
        nvme-KIOXIA_KCD8XRUG15T3_8240A01KTY97  ONLINE       0     0     0
        nvme-KIOXIA_KCD8XRUG15T3_8240A01MTY97  ONLINE       0     0     0

errors: No known data errors
```
Quick performance baseline, comparing the HDD pool to the NVMe pool:
HDD Pool
| Benchmark | Result |
| -------------------------- | ------ |
| iozone 4K random read | 752.48 MB/s |
| iozone 4K random write | 230.74 MB/s |
| iozone 1M random read | 7763.99 MB/s |
| iozone 1M random write | 1646.86 MB/s |
| iozone 1M sequential read | 7787.13 MB/s |
| iozone 1M sequential write | 1438.63 MB/s |
NVMe Pool
| Benchmark | Result |
| -------------------------- | ------ |
| iozone 4K random read | 736.14 MB/s |
| iozone 4K random write | 269.40 MB/s |
| iozone 1M random read | 7362.35 MB/s |
| iozone 1M random write | 3694.74 MB/s |
| iozone 1M sequential read | 7373.37 MB/s |
| iozone 1M sequential write | 3692.56 MB/s |
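For anyone wanting to reproduce rows like these, a plausible shape for the iozone runs; the exact flags behind the numbers above aren't recorded in this issue, so treat these as assumptions:

```bash
# 4K random read/write (-e includes flush in timing, -I requests O_DIRECT;
# the random test (-i 2) requires the write test (-i 0) to run first):
iozone -e -I -s 1g -r 4k -i 0 -i 2

# 1M sequential read/write plus 1M random read/write:
iozone -e -I -s 1g -r 1m -i 0 -i 1 -i 2
```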
Obviously both sets of scores are impacted by ZFS caching. I'll only really get a feel for it by accessing data on the NVMe pool over the 10G LAN connection from my Mac and seeing how it compares. Hopefully I can just saturate that connection 24x7!
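When it comes time for that over-the-network test, a quick sketch from the Mac side; the share mount point is an assumption:

```bash
# Sanity-check the raw 10G link first; ~9.4 Gbit/s is the practical ceiling:
iperf3 -c nas01

# Then read a big file back through the SMB mount and watch the transfer rate:
fio --name=smbread --rw=read --bs=1M --size=8g --directory=/Volumes/ingest
```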