Which is faster, iSCSI or NFS, when used as an ESXi datastore?
I have a QNAP NAS (which is an excellent bit of kit, by the way), and I have 2 servers running ESXi 4 and 5 respectively. I had some free time tonight, so I finally got around to setting up both an NFS and an iSCSI datastore on the NAS. I was curious as to which would be faster, so here are my very unscientific results.
My test setup is a QNAP TS-459 Pro+ Turbo NAS with 4x 2TB 7200rpm disks in RAID 5 and teamed Gigabit NICs. My network is Zyxel-based Gigabit throughout.
I created an NFS share and an iSCSI target + LUN following QNAP's own VMware instructions.
I then attached both to the same ESXi 5 server, shut down all other VMs on that host, and built 2 identical Ubuntu 12.04 servers, one on the NFS datastore and the other on the iSCSI datastore.
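If you'd rather script the attach step than click through the vSphere Client, roughly the equivalent from the ESXi 5 shell is below. This is only a sketch: the NAS address (192.168.1.50), share path (/VMware), datastore name, and vmhba number are placeholders for my setup, so substitute your own.

# mount the NFS share as a datastore
esxcli storage nfs add --host=192.168.1.50 --share=/VMware --volume-name=nfs-datastore
# enable the software iSCSI initiator and point it at the NAS target portal
esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50:3260
# rescan so the new LUN shows up
esxcli storage core adapter rescan --adapter=vmhba33

(The ESXi 4 box uses the older esxcfg-nas and esxcfg-swiscsi tools for the same job.)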
Build time was almost exactly the same, so nothing in it there. Similarly, boot-up time was pretty much identical.
Once the machines were up and running I ran a few disk tests to see which had the higher throughput. Here are the commands and the results (I ran the commands on each machine one at a time so there would be no contention between the 2 boxes):
Test | NFS | iSCSI |
---|---|---|
hdparm test 1 (cached reads) | 3014.48 MB/s | 3544.86 MB/s |
hdparm test 2 (buffered disk reads) | 89.90 MB/s | 119.93 MB/s |
dd caching test | 643 MB/s | 731 MB/s |
dd no caching test | 194 MB/s | 107 MB/s |
hdparm tests 1 and 2 come from a single run of this command (hdparm -tT prints two figures: "Timing cached reads" is test 1, "Timing buffered disk reads" is test 2):
hdparm -tT /dev/sda5
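The hdparm figures do bounce around a bit between runs, so if I were being more scientific I'd repeat each run a few times and eyeball the spread, e.g.:

# repeat the timing run a few times to get a feel for the variance
for i in 1 2 3; do hdparm -tT /dev/sda5; done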
DD test with caching enabled (the ~100 MB file fits comfortably in the page cache, so this largely measures memory rather than disk or network speed):
time sh -c "dd if=/dev/zero of=/tmp/output bs=100k count=1k; rm -f /tmp/output"
DD test with write sync (the timed run includes the sync, so the data has to reach the NAS before the clock stops):
time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync"
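One caveat with the dd tests: dd's own MB/s figure never includes the trailing sync, only the time output does. GNU dd can flush as part of the run, which gives a cleaner uncached number; here's a sketch of how I'd tighten this up (conv=fdatasync and oflag=direct are standard GNU coreutils flags):

# conv=fdatasync flushes before dd reports, so its MB/s figure includes the physical write
dd if=/dev/zero of=testfile bs=100k count=1k conv=fdatasync
# oflag=direct bypasses the page cache altogether (O_DIRECT)
dd if=/dev/zero of=testfile bs=100k count=1k oflag=direct
rm -f testfile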
So the conclusion here seems to be that iSCSI is roughly 15-35% quicker in my particular case, except when disk caching is not allowed, in which case NFS was consistently faster in my tests. However, as write caching is enabled in the guest OS, I'm not too worried about the NFS result and will stick with iSCSI.
As an aside, NFS was easier to set up and has a few other benefits over an iSCSI LUN.