On 11/28/2011 03:39 PM, Mark Wagner wrote:
Hi Yaniv
Sorry for the delay in responding; I wanted to run this by our team to make sure that you could benefit from their experiences as well. I have included some responses from Ben inline (b>>>>).
-mark
Thanks, we've done (and continue to do) some of our own tests. No concrete conclusions yet. See additional notes below.
On 11/20/2011 08:21 AM, Yaniv Kaul wrote:
As we are getting some newer gear, I was wondering what the better setup is for network-based storage in our lab (all our hosts will have >1 interface, and our storage arrays have either 4x1Gb or, in some cases, 10Gb interfaces).
NFS: Am I correct to assume that NFS, from a single host, will not benefit much from bonding when connected to a single storage domain (= a single mount)? Isn't it using a single TCP connection? (Which, BTW, implies we might get better performance with multiple mounts?)
b>>>> My own experience is that NFS, like any IP application, can benefit greatly from bonding. The advantage of bonding is that you don't have to manually balance application network traffic across NICs, and it provides high availability in the event of a port or cable failure. I've seen up to 300 MB/s with a 4-way 1-Gbps bond in mode balance-alb (6), trunking (4), or balance-xor (2). Each mode has its limitations. I have no experience bonding with 10-GbE, but I see no reason why it wouldn't work in principle. Modes 2 (balance by TCP port + IP addr) and 6 work best when a node is communicating with multiple other nodes in parallel. Mode 4 requires that the network switch be configured for trunking, but is the least restrictive. Mode 6 uses ARP to load-balance peer MAC addresses across NICs, but usually works only within a single LAN.
- I haven't tested balance-tlb/alb, but I don't see how it would help with a single connection. I think the only viable solution in such case(s) is going 10Gb.
- Mode 4 with xmit_hash_policy set to layer3+4 is nice, but 'is not fully 802.3ad compliant', so I'll try it out for fun, but am not happy with it (see the config sketch after this list).
- Mode 2 with a layer3+4 xmit_hash_policy sounds interesting for iSCSI - if I manage to get multiple connections (see below).
- (IIRC, mode 6 is not supported over bridges or something.)
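For reference, here is a minimal sketch of the mode 4 + layer3+4 variant discussed above, using RHEL-style ifcfg files. The device names, addresses and paths are examples only, and the switch ports have to be configured for LACP:

    # /etc/sysconfig/network-scripts/ifcfg-bond0  (example names/addresses)
    DEVICE=bond0
    ONBOOT=yes
    BOOTPROTO=none
    IPADDR=192.168.50.10        # storage VLAN address, adjust as needed
    NETMASK=255.255.255.0
    BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"

    # /etc/sysconfig/network-scripts/ifcfg-eth2  (repeat for each slave NIC)
    DEVICE=eth2
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

With layer3+4 hashing, different TCP connections (different source/destination ports) can land on different slaves; a single NFS mount still won't spread across links, but multiple mounts or multiple iSCSI sessions can.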
b>>>> Yes, NFS uses a single connection from the KVM host to the NFS server for a given block device, but in my experience with NetApp at least, performance was pretty good even so, and it is possible to tune TCP for higher-network-latency environments.
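As a rough illustration of the TCP tuning Ben mentions, larger socket buffers let a single NFS/TCP stream keep a fatter or higher-latency pipe full. The values below are illustrative, not recommendations; they would go in /etc/sysctl.conf and be applied with sysctl -p:

    # Example socket buffer limits (bytes); size them to your bandwidth-delay product
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216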
b>>>> On the other hand, if you want to reserve network bandwidth for particular block devices, not using bonding, or using two pairs of bonded NICs, could help.
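One way to act on that (and on the multiple-mounts idea above) is to expose two IPs on the storage side, each reachable through a different host NIC or bond, so that each mount gets its own TCP connection and its own physical path. The addresses, export names and mount points below are made up:

    # Each mount rides a separate connection/path (rsize/wsize shown only as typical options)
    mount -t nfs -o rsize=65536,wsize=65536 192.168.50.100:/export/domain1 /mnt/nfs-a
    mount -t nfs -o rsize=65536,wsize=65536 192.168.60.100:/export/domain2 /mnt/nfs-b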
iSCSI: Am I better off configuring multipathing rather than bonding for iSCSI, for the same reason? Configure two IPs on two interfaces on the host, and the same on the storage? (Which, BTW, hints at why iSCSI at the QEMU level is an interesting feature.) Should I 'cheat' and configure multiple IPs on the bonded interfaces on the storage side?
For both, I'm sure jumbo frames would be nice, and we will be using them partially in our labs.
b>>>> For iSCSI/TCP or NFS/TCP, jumbo frames are essential for bulk-transfer workloads. Just be careful when using non-TCP protocols (e.g. NFS/UDP) with jumbo frames.
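For completeness, a sketch of enabling jumbo frames on the host side; every hop in the path (NICs, bond/bridge, switch ports, storage ports) has to agree on the MTU, and the names and addresses here are examples:

    # Temporary, for testing:
    ip link set dev eth2 mtu 9000
    # Persistent (RHEL-style): add to the relevant ifcfg-* file(s):
    MTU=9000
    # Verify the path end-to-end without fragmentation (8972 = 9000 minus 28 bytes of IP+ICMP headers):
    ping -M do -s 8972 192.168.50.100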
Jumbo frames are not a good (enough) solution, for two reasons:
1. They require the whole switch to be configured with support for them (at least on our Cisco switches), which is a bit limiting. We have done it on a dedicated switch.
2. They only give ... +5% or so improvement?
That's perhaps all I'll be able to do for NFS, but I'm hoping to do more for iSCSI:
Our current idea is to create the storage domain's VG out of two LUNs - on two different targets. We have to have two connections - one to each target - and have those connections run in parallel (and NOT share a single physical link). We have partially succeeded in doing so, but are still working on it.
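A sketch of one way to pin the two sessions to different NICs with open-iscsi interface bindings, so they cannot share a physical link. The target portals, iface names and NIC names are placeholders:

    # Create two iSCSI ifaces and bind each one to a physical NIC
    iscsiadm -m iface -I isw0 --op new
    iscsiadm -m iface -I isw0 --op update -n iface.net_ifacename -v eth2
    iscsiadm -m iface -I isw1 --op new
    iscsiadm -m iface -I isw1 --op update -n iface.net_ifacename -v eth3
    # Discover and log in to each target through its own iface/portal
    iscsiadm -m discovery -t sendtargets -p 192.168.50.100 -I isw0
    iscsiadm -m discovery -t sendtargets -p 192.168.60.100 -I isw1
    iscsiadm -m node -p 192.168.50.100 -I isw0 --login
    iscsiadm -m node -p 192.168.60.100 -I isw1 --login
    # Confirm the sessions and the resulting devices
    iscsiadm -m session
    multipath -ll

The two LUNs can then be combined into the storage domain's VG as usual, with traffic to each target staying on its own interface.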
BTW, our biggest performance improvement so far came from changing the block size of the backing storage to 32K (instead of the default 8K)...
Y.
TIA, Y.