This morning we connected the new 24-disk shelf non-disruptively to our NetApp 3270.
These 24 disks were slated to be added to our existing aggregate of 2 x DS4243 shelves, effectively becoming one third of the spindles (IOPS and storage) of the newly expanded aggregate.
Before adding the disks to expand the aggregate, we wanted to take a benchmark from a VM's perspective BEFORE the change and then compare it to the same VM's performance AFTER the roughly one-third IOPS upgrade.
Config:
NetApp 3270 running Data ONTAP 7.3.5.1P2
10Gb connection to ESXi 4.1 host
HD Tune used for the disk I/O benchmark
BEFORE (46-disk aggregate):
Read transfer rate
Transfer Rate Minimum : 21.5 MB/s
Transfer Rate Maximum : 95.9 MB/s
Transfer Rate Average : 65.8 MB/s
Access Time : 10.3 ms

AFTER (67-disk aggregate):
Read transfer rate
Transfer Rate Minimum : 0.6 MB/s
Transfer Rate Maximum : 96.7 MB/s
Transfer Rate Average : 82.9 MB/s
Access Time : 6.67 ms

Conclusion:
Throughput: average transfer rate went from 65.8 to 82.9 MB/s (a 26% improvement)
Latency: access time went from 10.3 ms to 6.67 ms (a 35% improvement)
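The percentage improvements above can be sanity-checked with a couple of lines of Python (the input values are the measured HD Tune averages from the runs above):

```python
# Measured averages from the HD Tune runs above.
before_mbps, after_mbps = 65.8, 82.9   # average read transfer rate, MB/s
before_ms, after_ms = 10.3, 6.67       # average access time, ms

# Throughput improves when the rate goes up; latency improves when it goes down.
throughput_gain = (after_mbps - before_mbps) / before_mbps * 100
latency_gain = (before_ms - after_ms) / before_ms * 100

print(f"Throughput: +{throughput_gain:.0f}%")  # ~26%
print(f"Latency:    -{latency_gain:.0f}%")     # ~35%
```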
You can also clearly see that deviation from the average is much reduced: the second throughput graph stays in a much tighter 80-90 MB/s band than the first, smaller aggregate's. This translates into a more deterministic performance profile for our virtual infrastructure, since the larger aggregate can "soak up" the short spikes in IOPS demand that would otherwise have slowed down the smaller one.
Note: it was necessary to clone the "Before" VM to force WAFL to stripe the VM's data across the newly added disks. (Will WAFL do this automatically over time for existing data, or will only NEW VMs realize the full 67-spindle benefit?)
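On the question in the note: to the best of my understanding, 7-Mode Data ONTAP does not proactively re-stripe existing cold data; new writes simply favor the emptiest disks. Instead of cloning, the `reallocate` command can redistribute an existing volume's blocks across all spindles after an aggregate expansion. A sketch of the commands (syntax as I recall it from Data ONTAP 7.x; verify against your version's man pages before running):

```sh
# Measure how fragmented/unevenly laid out the volume currently is
reallocate measure /vol/vm_datastore

# Force a one-time physical reallocation of existing data across all disks
# (-f forces a full pass; -p reallocates physical blocks, useful with snapshots)
reallocate start -f -p /vol/vm_datastore

# Check progress
reallocate status /vol/vm_datastore
```

The volume name `vm_datastore` is a placeholder for whichever volume backs the VM datastore.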