
We are running a number of Debian Wheezy VMs on top of Ubuntu Server 12.04.4 hosts / Libvirt 0.9.8-2ubuntu17.17.

Each host is connected to the network through a trunk. The VLANs are then split out and a bridge is created for each of them, with the following configuration:

    auto eth4.2 kvmbrtrunk.2

    iface eth4.2 inet manual
        up ifconfig eth4.2 up
        down ifconfig eth4.2 down

    iface kvmbrtrunk.2 inet manual
        bridge-ports eth4.2
        bridge-stp no
        bridge-fd 0
        bridge-maxwait 0

The VMs are configured as follows:

    <interface type='bridge'>
      <mac address='54:52:00:02:10:70'/>
      <source bridge='kvmbrtrunk.2'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </interface>

And they use VirtIO:

    00:09.0 Ethernet controller: Red Hat, Inc Virtio network device

Protagonists (all on same VLAN):

    A: 1st Ubuntu 12.04 desktop
    B: 2nd Ubuntu 12.04 desktop
    C: 1st VM, 1st host
    D: 2nd VM, 1st host
    E: 3rd VM, 2nd host

When we run a series of 60 pings ("rtt min/avg/max/mdev"):

    A -> B = 0.093/0.132/0.158/0.015 ms
    A -> C = 0.272/0.434/1.074/0.113 ms
    A -> D = 0.294/0.460/0.832/0.091 ms
    A -> E = 0.324/0.505/0.831/0.069 ms
    C -> D = 0.348/0.607/0.863/0.124 ms
    C -> E = 0.541/0.792/0.972/0.101 ms

So these results seem to indicate that libvirt's virtual switch/filtering not only adds some latency, as one might expect, but roughly triples it (0.132 ms vs 0.460 ms average).

Question

 Is there anything that can be done to attenuate this extra latency? 

Thanks in advance for any tips.

Comments:

  • I just have to ask: what are you doing that makes 0.3 ms of latency significant enough to worry about?
  • Remote Desktop | remote X applications | NFS servers

1 Answer


What kind of features are you willing to sacrifice for this reduction in latency?

To start with, try disabling iptables/ebtables processing on the bridge interfaces. You can set /proc/sys/net/bridge/bridge-nf-call-iptables to 0 to accomplish this. The downside is that you can no longer do any sort of filtering on the guest traffic.
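For example, something along these lines (a sketch only; the ip6tables/arptables knobs are extra assumptions beyond the single sysctl mentioned above, and the /etc/sysctl.conf path may differ on your distribution):

    # Stop netfilter from processing bridged traffic (takes effect immediately)
    sysctl -w net.bridge.bridge-nf-call-iptables=0
    sysctl -w net.bridge.bridge-nf-call-ip6tables=0
    sysctl -w net.bridge.bridge-nf-call-arptables=0

    # Make it persistent across reboots (path is an assumption; adjust for your setup)
    echo 'net.bridge.bridge-nf-call-iptables = 0'  >> /etc/sysctl.conf
    echo 'net.bridge.bridge-nf-call-ip6tables = 0' >> /etc/sysctl.conf
    echo 'net.bridge.bridge-nf-call-arptables = 0' >> /etc/sysctl.conf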

The 'better' option here is to switch to virtualized network cards using SR-IOV. This requires that your motherboard and network controller support it. You're also limited to 7 guests per network card (for gigabit Ethernet; I believe 10GbE allows more). This works by giving each guest direct access to the networking hardware: the host OS is not involved in the packet flow, and the VM just talks directly to the NIC.

SR-IOV will give you the best performance (CPU usage was around 10% lower in our tests, for high levels of network traffic) and the lowest latency (as there are far fewer layers of software interacting with the packets). I believe you can configure VLAN tagging with this, but the setup would likely be a bit difficult (SR-IOV is basically undocumented magic, and you'll be doing a lot of fiddling with settings).
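For reference, a guest interface definition using an SR-IOV virtual function might look roughly like this. This is only a sketch: the VF's PCI address below is hypothetical, and the <vlan> element likely needs a libvirt newer than the 0.9.8 you are running, so it is not a drop-in config:

    <!-- Pass an SR-IOV virtual function straight to the guest;
         the host bridge and netfilter are bypassed entirely.
         The PCI address of the VF is hypothetical. -->
    <interface type='hostdev' managed='yes'>
      <mac address='54:52:00:02:10:70'/>
      <source>
        <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
      </source>
      <!-- Tag the VF's traffic for VLAN 2; may require a newer libvirt -->
      <vlan>
        <tag id='2'/>
      </vlan>
    </interface>

With something like this, the ping path no longer traverses kvmbrtrunk.2 or the bridge netfilter hooks at all, which is where most of the extra latency measured above appears to come from.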
