microsoft/diskspd

Running with -Z above a certain size fails with "Error generating I/O requests"

danieltharp opened this issue · 4 comments

Diskspd fails immediately when I try to run it with -Z500G. The largest -Z size that works seems to depend on some factor I haven't been able to pin down; it changes with the disk topology I set up and tended to be around the 200G mark for a 13TB storage array. Using the -v parameter doesn't give any further information. I'm attaching a failed log; please let me know what I can do to help investigate.

diskspd.txt

danieltharp commented

Hey Dan, thank you for the info. For full disclosure, I'm relaying the issue and the request from DBAs; they indicate the flag did work at 500G on another server but not on a candidate disk topology we were looking to move to, and that was a point of concern for them. My understanding of what you're saying is that the -Z&lt;size&gt; parameter allocates a block of random data to eliminate dedupe and compression as potential skewing factors (or to more accurately imitate the production workload with regard to dedupe and compression), and that beyond that it has no significant benefit or parallel to a production workload. Is that fair to say?

dl2n commented

That is exactly what it does: it allocates a buffer of &lt;size&gt; bytes and generates a random fill across it. That buffer is then indexed randomly to provide the source buffer for the WriteFile API. As for the implications for any given storage system, you'll have to evaluate what that means yourself.
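As a rough sketch of that flow (this is illustrative only, not the actual DISKSPD source; the buffer size, I/O size, and file name are arbitrary placeholders), the mechanism amounts to something like:

```cpp
// Sketch of a -Z-style write source buffer: allocate, random-fill, then pick
// a random offset into it as the source pointer for WriteFile.
#include <windows.h>
#include <cstdlib>
#include <ctime>
#include <vector>

int main()
{
    const size_t bufferSize = 64 * 1024 * 1024;  // stand-in for the -Z<size> value
    const size_t ioSize     = 64 * 1024;         // stand-in for the I/O block size

    std::srand(static_cast<unsigned>(std::time(nullptr)));

    // Allocate the write source buffer and fill it with random bytes so the
    // written data is neither compressible nor dedupable.
    std::vector<unsigned char> source(bufferSize);
    for (size_t i = 0; i < bufferSize; ++i)
        source[i] = static_cast<unsigned char>(std::rand());

    // Open a demonstration target file.
    HANDLE h = CreateFileW(L"target.dat", GENERIC_WRITE, 0, nullptr,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    // For each write, index into the buffer at a random offset and hand that
    // pointer to WriteFile as the source of the write data.
    size_t offset  = std::rand() % (bufferSize - ioSize);
    DWORD  written = 0;
    BOOL ok = WriteFile(h, source.data() + offset,
                        static_cast<DWORD>(ioSize), &written, nullptr);

    CloseHandle(h);
    return ok ? 0 : 1;
}
```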

It is completely independent of disk topology, aside from the fact that the system perhaps (probably) did not have enough pagefile space available to let DISKSPD grow its committed VA by an additional 500GiB.
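If you want to check that theory, something like the following (my own diagnostic sketch, not part of DISKSPD) reads the available commit charge via GlobalMemoryStatusEx and compares it against the requested -Z size:

```cpp
// Sanity-check whether the machine has enough available commit
// (physical RAM + pagefile) to back a -Z buffer of a given size.
#include <windows.h>
#include <cstdio>

int main()
{
    const unsigned long long requestedZ = 500ULL * 1024 * 1024 * 1024; // 500 GiB

    MEMORYSTATUSEX status = {};
    status.dwLength = sizeof(status);
    if (!GlobalMemoryStatusEx(&status))
        return 1;

    std::printf("Available commit: %llu GiB\n",
                status.ullAvailPageFile / (1024ULL * 1024 * 1024));

    if (status.ullAvailPageFile < requestedZ)
        std::printf("A -Z buffer of this size would likely fail to commit.\n");

    return 0;
}
```

If the available commit comes back well under the requested size, that would be consistent with the failure you're seeing.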

danieltharp commented

Great, thank you for the confirmation and for your time. It's a great piece of software.