Wednesday, August 17, 2011

Quantifying Spindle:VM throughput relationship

Last week we took delivery of an additional DS4243 disk shelf (24 x 300GB 15K RPM disks).
This morning we connected it non-disruptively to our NetApp 3270.
These 24 disks were slated to be assigned to our existing aggregate consisting of 2 x DS4243 shelves, effectively becoming one third of the spindles (IOPS and storage) of the newly expanded aggregate.
Before adding the disks to expand the aggregate, we wanted to take a benchmark from a VM's perspective BEFORE and then compare it to the same VM's performance AFTER the one-third IOPS upgrade.

Config:

NetApp 3270 running Data ONTAP 7.3.5.1P2
10GbE connection to the ESXi 4.1 host
HD Tune was used for the disk I/O benchmark
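
As an aside, a rough stand-in for HD Tune's sequential read test can be scripted from inside the guest. The sketch below is only a hypothetical sanity check, not the tool that produced the numbers that follow; the test file path, block size, and total read size are placeholders, and the test file needs to be much larger than the guest and controller caches for the result to mean anything.

```python
import time

TEST_FILE = r"D:\bench\testfile.bin"   # placeholder path on the NetApp-backed virtual disk
BLOCK_SIZE = 1024 * 1024               # 1 MiB sequential reads
TOTAL_BYTES = 4 * 1024 ** 3            # read up to 4 GiB of the file

def sequential_read_mbs(path, block_size=BLOCK_SIZE, total_bytes=TOTAL_BYTES):
    """Read the file sequentially and return the average throughput in MB/s."""
    bytes_read = 0
    start = time.time()
    with open(path, "rb", buffering=0) as f:   # unbuffered, to avoid Python-level caching
        while bytes_read < total_bytes:
            chunk = f.read(block_size)
            if not chunk:                      # end of file
                break
            bytes_read += len(chunk)
    elapsed = time.time() - start
    return (bytes_read / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    print("Average read rate: %.1f MB/s" % sequential_read_mbs(TEST_FILE))
```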

BEFORE (46-disk aggregate):

Read transfer rate
Transfer Rate Minimum : 21.5 MB/s
Transfer Rate Maximum : 95.9 MB/s
Transfer Rate Average : 65.8 MB/s
Access Time : 10.3 ms

[HD Tune read transfer graph: 46-disk aggregate]

AFTER (67-disk aggregate):

Read transfer rate
Transfer Rate Minimum : 0.6 MB/s
Transfer Rate Maximum : 96.7 MB/s
Transfer Rate Average : 82.9 MB/s
Access Time : 6.67 ms

[HD Tune read transfer graph: 67-disk aggregate]

Conclusion:

Throughput: the average transfer rate went from 65.8 MB/s to 82.9 MB/s (roughly 26% better)
Latency: the access time went from 10.3 ms to 6.67 ms (a 35% improvement)
You can also clearly see that the deviation from the average is much smaller: the second throughput graph stays in a much tighter 80-90 MB/sec band than the first, smaller aggregate's. This translates into a more deterministic performance profile for our virtual infrastructure, since the larger aggregate can "soak up" the short IOPS demand spikes that would otherwise have slowed the smaller aggregate down.
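
As a quick sanity check of those percentages, using the averages reported above:

```python
# Quick check of the before/after deltas from the HD Tune runs.
before_mbs, after_mbs = 65.8, 82.9     # average transfer rate, MB/s
before_ms, after_ms = 10.3, 6.67       # average access time, ms

throughput_gain = (after_mbs / before_mbs - 1) * 100   # higher transfer rate is better
latency_gain = (1 - after_ms / before_ms) * 100        # lower access time is better

print("Throughput improvement: %.1f%%" % throughput_gain)   # ~26.0%
print("Access time improvement: %.1f%%" % latency_gain)     # ~35.2%
```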


Note: it was necessary to clone the "Before" VM to force WAFL to stripe the VM's data across the newly added disks. (Will WAFL do this automatically over time for existing data, or will only NEW VMs realize the full 67-spindle benefit?)
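
If existing data does need to be laid out across the new spindles by hand, Data ONTAP 7-Mode's reallocate command is the usual tool (see the comments below). Here is only a rough sketch of driving it from an admin host over SSH; the filer name, volume path, and flags are assumptions to be verified against the documentation for your ONTAP release.

```python
# Hypothetical sketch: running the 7-Mode "reallocate" command over SSH.
import subprocess

FILER = "filer01"          # placeholder: the controller's management address
VOLUME = "/vol/vm_vol"     # placeholder: the volume holding the VM datastore

def filer_cmd(command):
    """Run a single command on the filer via SSH and return its output."""
    result = subprocess.run(
        ["ssh", "root@" + FILER, command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(filer_cmd("reallocate on"))                     # enable reallocation scans
    print(filer_cmd("reallocate start -f -p " + VOLUME))  # -f: full reallocation, -p: physical (snapshot-friendly)
    print(filer_cmd("reallocate status -v"))              # check job progress
```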

2 comments:

udubplate said...

Great post. Love the benchmarks. Regarding your question at the end about whether the VM will automatically leverage the full 67 spindles: it will, but there are some caveats to be aware of. Depending on the scenario, a reallocate should be run. Given the number of spindles you added it may be unnecessary, but it's all about how evenly you want I/O to be distributed. A larger share of I/O will go to the new spindles, since they are in effect empty from day 1, unless a reallocate is run. There are differing opinions on this, but the general consensus is: if you are adding a very small number of spindles to an aggregate, you need to reallocate; if you have workloads that are very sensitive to I/O changes, you need to reallocate. Otherwise, if you are adding a large number of spindles and/or the scenario is latency-insensitive, you may not need to.

There is a lot to read up on regarding reallocates, though, to understand the different options. Here is one post, but there are tons of them out there; I encourage reading a few. There is also a lot on this topic on the NetApp Communities forum:

http://www.wafl.co.uk/tag/reallocate/

smitty said...

udubplate is correct. Unless you ran a physical reallocate, all you were testing was the performance of the new shelf you just added.

Is the disk transfer a read or a write? You'll always see great write performance with NetApp (unless the disks are 100% busy or full), as writes are collected in very fast NVRAM and flushed to disk as wide-stripe sequential writes.