(to be published on 1/24/2013)
A brief update on iSCSI vs. NFS in a VMware environment, based on 2012-2013 technologies.
Time for an update on storage protocols. I say protocols because the transport layer is what's in flux right now: NFS, iSCSI, Fibre Channel, and FCoE (Fibre Channel over Ethernet) are all storage PROTOCOLS. That is important base knowledge when discussing storage connectivity in a VMware environment. So what does this mean for you?
A recently published white paper from VMware shows, in little detail, the testing they did with these protocols. However, it assumes that ALL storage and storage solutions are equal, which we all know is NOT the case. The white paper can be found here. So let me first explore what they tested, and then we can look at some changes to the components.
So I hate it when a software company does things like this: they take a "storage server" and emulate different storage protocols in order to test them. This configuration used 9 disks; how it was configured, what type of "storage server" it was, and how the LUNs were emulated were not disclosed. So I can't really make too many judgments, because they would end up being assumptions, and we all know what you get from those. I can say, however, that this was not "purpose built" and seems to have been thrown together for this test, most likely on some Linux distribution. The problem with this approach is that it doesn't account for the massive development a storage vendor has put into its hardware or software to get the most out of a protocol.
Looking just at the basic configuration, comparing GigE (with any protocol) to 4Gb FC doesn't seem like a fair test. Looking closely, VMware was really trying to prove scalability regardless of storage protocol. They also pointed out clearly that not using VMFS might have a significant impact on performance, but they excluded it in order not to give a block-level device an advantage over NFS-mounted storage. They also clearly noted that an update to this white paper is needed to consider 8Gb FC and 10GigE, which I think will be a much closer comparison of technologies. If you look at the performance differences, 4Gb FC delivered almost exactly 4x the performance of any of the 1Gb Ethernet configurations in the test, which I would expect. The results also showed that packet size/block size was a significant differentiator, because of the extra processing required by GigE's 1500-byte MTU limitation.
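To see why the 1500-byte MTU matters, here is a back-of-envelope sketch (my own illustration, not numbers from the white paper) comparing standard and jumbo frames on a GigE link carrying TCP traffic such as iSCSI or NFS. The header sizes assume plain IPv4/TCP with no options:

```python
# Illustrative math only: wire efficiency and per-frame processing load
# for standard (1500) vs jumbo (9000) MTU on a 1GigE link.

ETH_OVERHEAD = 14 + 4 + 8 + 12   # Ethernet header + FCS + preamble + inter-frame gap
IP_TCP_HEADERS = 20 + 20         # IPv4 + TCP headers, no options (assumption)
LINK_BPS = 1_000_000_000         # 1GigE line rate

def frame_stats(mtu):
    payload = mtu - IP_TCP_HEADERS        # usable data bytes per frame
    wire_bytes = mtu + ETH_OVERHEAD       # bytes actually on the wire per frame
    efficiency = payload / wire_bytes     # fraction of the link moving real data
    pps = LINK_BPS / 8 / wire_bytes       # frames/sec the host must process at line rate
    return payload, efficiency, pps

for mtu in (1500, 9000):
    payload, eff, pps = frame_stats(mtu)
    print(f"MTU {mtu}: {payload} payload bytes/frame, "
          f"{eff:.1%} efficient, ~{pps:,.0f} frames/sec")
```

The interesting number is frames per second: at a 1500-byte MTU the host handles roughly six times as many frames (and interrupts) as at 9000 bytes for the same throughput, which is where the extra CPU overhead on GigE comes from.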
The point is simple: why even bother putting out a "white paper" that really proves nothing? Well, it does prove something. When using basic volumes, iSCSI and NFS have roughly equal performance over a single GigE connection. NFS has slightly more CPU overhead than software-initiator iSCSI, and if you are counting clock cycles, then you really need to go with iSCSI or FC HBAs. I would rather have seen NFS compared to VMFS over the iSCSI connections; that would at least have compared similar access functionality.
So, the changes I would like to see: compare 1Gb FC to GigE for these protocols, so the link speeds actually match. Use best-of-breed NAS, iSCSI, and FC arrays for the tests (oh, that would create a field day for us out here in the stoblogosphere). Compare like functionality: NFS offers shared access, and VMFS offers shared access. In any case, someone over there has to write something; I would have named this white paper "VMware 4.0 Scales Regardless of Storage Protocol".
Comments always welcome.