I have used IOMeter for a long time to test different types of storage. It has a simple, informative GUI, is very flexible in terms of test patterns, and its results are consistent and easy to interpret.
Being a DOS child, I still sometimes prefer doing things with a shell and keyboard instead of clicking. That’s why I was pleasantly surprised by Microsoft’s DiskSpd utility. This command line tool can generate a wide variety of disk request patterns, which makes it very helpful for diagnosing and analyzing I/O performance issues with great flexibility and less effort than other benchmark tools. Furthermore, Microsoft recommends DiskSpd for testing the storage performance of Storage Spaces and Azure, so it should be extremely useful for synthetic storage subsystem testing, and that is exactly what I am going to try today, using StarWind Virtual SAN as the test subject.
In order to do this, I will need to download DiskSpd. To make further usage easier, I will copy the executable file to a short and simple path like C:\DiskSpd. In most cases you will want the 64-bit version of DiskSpd from the amd64fre folder.
Once I have the diskspd.exe executable available, I will open a command prompt with administrative rights (by choosing “Run as Administrator”) and navigate to the C:\DiskSpd directory.
Here are some of the command line parameters that I will start out with:
-b Block size of the I/O, where -b8K means an 8KB block size
-d Test duration in seconds
-h Disable software caching at the operating system level and hardware write caching
-o Outstanding I/Os per target, per worker thread
-t Worker threads per test file target
-r Random I/O flag (sequential if omitted)
-w Write percentage, where -w25 means 25% writes and 75% reads
-L Measure latency statistics
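To keep the flags straight, the parameters above can be sketched as a small helper that assembles a DiskSpd command line. This is a hypothetical convenience function for illustration, not part of DiskSpd itself; the flag meanings follow the list above.

```python
# Hypothetical helper: assemble a DiskSpd command line from the
# parameters described above. For illustration only.
def diskspd_cmd(block="4k", duration=60, outstanding=32, threads=16,
                random=True, write_pct=0, file_size="50G",
                target=r"D:\Test.io"):
    parts = [
        "Diskspd.exe",
        f"-b{block}",        # block size of each I/O
        f"-d{duration}",     # test duration in seconds
        "-h",                # disable software and hardware write caching
        f"-o{outstanding}",  # outstanding I/Os per target, per thread
        f"-t{threads}",      # worker threads per target
    ]
    if random:
        parts.append("-r")   # random I/O (sequential if omitted)
    parts.append("-L")       # collect latency statistics
    parts.append(f"-c{file_size}")  # create a test file of this size
    if write_pct:
        parts.append(f"-w{write_pct}")  # percentage of writes
    parts.append(target)
    return " ".join(parts)

print(diskspd_cmd())  # a 4K random read test, as used below
```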
I have two servers, each armed with 2 x Intel Xeon E5-2620 CPUs @ 2.00GHz and 64GB RAM. For testing purposes, I will use 6 x 600GB 10K RPM 64MB-cache SAS enterprise hard drives in a RAID10 array. Two directly connected 10Gbit network links carry the iSCSI connections and synchronization traffic.
The first step is to test local disk performance on a single host and then compare it with a StarWind Virtual SAN highly available iSCSI-connected device mirrored between both hosts. A minimal set of tests for a virtualized environment should consist mostly of randomized reads and writes at various block sizes. I usually use random 4K, 32K, and 64K blocks to emulate the I/O pattern of a virtualized Hyper-V environment, and I run the read and write tests separately for easier comparison.
First of all, I am going to create a RAID10 array using my 6 x SAS drives.
Then I disable the RAID controller and disk caching policies to test raw storage performance without any cache impact.
Let’s create a new simple NTFS volume and format it, leaving the allocation unit size at its default. The new volume, labeled Test, gets the letter D:.
I am almost ready to start testing, so I open a command prompt with administrative privileges and head to the DiskSpd folder. The first test is a 60-second random read test with 4K blocks, 32 outstanding I/Os, and 16 worker threads against a 50GB test file:
Diskspd.exe -b4k -d60 -h -o32 -t16 -r -L -c50G D:\Test.io > D:\4k-reads.txt
The second test is 4k writes with exactly the same input parameters:
Diskspd.exe -b4k -d60 -h -o32 -t16 -r -L -c50G -w100 D:\Test.io > D:\4k-writes.txt
And the rest of the testing patterns:
Diskspd.exe -b32k -d60 -h -o32 -t16 -r -L -c50G D:\Test.io > D:\32k-reads.txt
Diskspd.exe -b32k -d60 -h -o32 -t16 -r -L -c50G -w100 D:\Test.io > D:\32k-writes.txt
Diskspd.exe -b64k -d60 -h -o32 -t16 -r -L -c50G D:\Test.io > D:\64k-reads.txt
Diskspd.exe -b64k -d60 -h -o32 -t16 -r -L -c50G -w100 D:\Test.io > D:\64k-writes.txt
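The six invocations above follow a single pattern, so they could also be generated from a short script. Here is a sketch that only prints the commands, assuming the same file and output paths; to actually launch DiskSpd, the printed strings would need to be passed to the shell.

```python
# Sketch: reproduce the six test commands above from a loop instead of
# typing them out by hand. Printing only; this does not run DiskSpd.
commands = []
for block in ("4k", "32k", "64k"):
    for write_pct, label in ((None, "reads"), (100, "writes")):
        w_flag = f" -w{write_pct}" if write_pct else ""
        commands.append(
            f"Diskspd.exe -b{block} -d60 -h -o32 -t16 -r -L -c50G"
            f"{w_flag} D:\\Test.io > D:\\{block}-{label}.txt")

for cmd in commands:
    print(cmd)
```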
The resulting text files contain all the information about the performed test, including the input parameters, CPU usage, total IOPS, MB/s, and average latency. The most important part for us is obviously this one:
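When you collect many of these result files, pulling the aggregate numbers out by hand gets tedious. Here is a sketch of a parser for the "total:" summary row; the regex assumes the pipe-separated table layout DiskSpd prints, and the sample text below is illustrative data in that shape, not real measurements.

```python
import re

# Illustrative sample in the shape of DiskSpd's summary table,
# NOT real results (4K blocks over 60 seconds, made-up numbers).
SAMPLE = """\
Total IO
thread |       bytes     |     I/Os     |    MiB/s   |  I/O per s |  AvgLat(ms) | file
--------------------------------------------------------------------------------------
total:        3355443200 |       819200 |      53.31 |   13653.33 |       2.343 | D:\\Test.io
"""

def parse_total(text):
    """Extract the aggregate row from a DiskSpd results file."""
    m = re.search(
        r"^total:\s*(\d+)\s*\|\s*(\d+)\s*\|\s*([\d.]+)\s*\|\s*([\d.]+)\s*\|\s*([\d.]+)",
        text, re.MULTILINE)
    if not m:
        raise ValueError("no total row found")
    bytes_, ios, mbps, iops, avg_lat = m.groups()
    return {"bytes": int(bytes_), "ios": int(ios), "mbps": float(mbps),
            "iops": float(iops), "avg_lat_ms": float(avg_lat)}

print(parse_total(SAMPLE))
```

In practice you would read each `*.txt` file produced by the tests and feed its contents to `parse_total` to build a comparison table.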
As the next step, I recommend running the same set of tests on the second host to make sure that both storage arrays deliver the same performance.
To go ahead with the next step, we need to configure local storage on the second server exactly the same way as before. I already have StarWind Virtual SAN installed on both servers. After starting the StarWind Management Console, I add both hosts to the list and create a 100GB HA device without caching that resides on the same D: partition.
After connecting all the paths in the iSCSI Initiator (I did it twice to avoid any bottlenecks in the iSCSI sessions and targets) and setting the MPIO policy to Round Robin, I open the Disk Management console once again and initialize the newly created HA disk as GPT. Now I create a new simple NTFS volume and format it, leaving the allocation unit size at its default. The new volume, labeled StarWind, gets the letter F:.
Everything is ready to start the next set of tests.
Diskspd.exe -b4k -d60 -h -o32 -t16 -r -L -c50G F:\Test.io > F:\4k-reads.txt
Diskspd.exe -b4k -d60 -h -o32 -t16 -r -L -c50G -w100 F:\Test.io > F:\4k-writes.txt
Diskspd.exe -b32k -d60 -h -o32 -t16 -r -L -c50G F:\Test.io > F:\32k-reads.txt
Diskspd.exe -b32k -d60 -h -o32 -t16 -r -L -c50G -w100 F:\Test.io > F:\32k-writes.txt
Diskspd.exe -b64k -d60 -h -o32 -t16 -r -L -c50G F:\Test.io > F:\64k-reads.txt
Diskspd.exe -b64k -d60 -h -o32 -t16 -r -L -c50G -w100 F:\Test.io > F:\64k-writes.txt
We have finished running the storage performance tests, so let’s compare the numbers.
The obtained values are predictable and strictly reproducible with other tools such as IOMeter.
StarWind Virtual SAN gains up to 50% more read performance thanks to striping read traffic across the network, and loses only about 15% of write performance in the worst test scenarios due to Ethernet and iSCSI stack overhead.
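The read gain and write penalty quoted above are simple percentage changes between the local-array and HA-device runs. With hypothetical IOPS numbers plugged in, the arithmetic looks like this (the figures below are made-up placeholders, not my measured results):

```python
# Percentage change between local-array and HA-device results.
# The IOPS numbers here are hypothetical placeholders for illustration.
def pct_change(local, ha):
    return (ha - local) / local * 100

local_read_iops, ha_read_iops = 10000, 15000    # placeholder values
local_write_iops, ha_write_iops = 8000, 6800    # placeholder values

print(f"read change:  {pct_change(local_read_iops, ha_read_iops):+.0f}%")
print(f"write change: {pct_change(local_write_iops, ha_write_iops):+.0f}%")
```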
The direct correlation between throughput and latency, along with reproducible numbers, allows me to highly recommend the DiskSpd tool for storage subsystem testing.