Hyper-V / VMM 2012 R2 and VMQ, Part 1

Microsoft has been gaining ground in the virtualization sphere one step at a time since Hyper-V first premiered. While some increments were negligible (or merely painfully obvious), Microsoft achieved significant breakthroughs in late 2013 with the release of all things “2012 R2”. The puzzle piece we’ll focus on here is VMQ (specifically dynamic VMQ, or dVMQ).

VMQ gives Hyper-V and System Center Virtual Machine Manager (VMM) Logical Switches what Receive Side Scaling (RSS) provides to physical servers: it spreads inbound network processing across multiple CPU cores and their interrupts instead of funneling everything through a single core. The network teaming (Load Balancing/Failover, or LBFO) configuration is important here, because it determines how VMQ maps queues to processors. The full table of possibilities is given halfway down the page of TechNet’s VMQ Deep Dive, Part 2. In a nutshell, “Min Queues” configurations require the team members’ NIC queues to overlap on the same set of processors (so that all queues are everywhere), while “Sum of Queues” configurations require segregation (so every queue gets its own unique core).
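Which category a host falls into can be confirmed from the team itself. A quick sketch (the team name “Team1” is a placeholder, not from our environment):

```powershell
# Inspect the LBFO team: switch-independent teaming combined with the
# Dynamic (or Hyper-V Port) load-balancing algorithm calls for
# "Sum of Queues"; switch-dependent or address-hash modes call for
# "Min Queues". Substitute your own team name for "Team1".
Get-NetLbfoTeam -Name "Team1" |
    Format-List Name, TeamingMode, LoadBalancingAlgorithm
```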

In our environment, we have a switch independent team with dynamic hashing (new to Windows Server 2012 R2), so “Sum of Queues” is how we should be set. Given that our Hyper-V hosts have two QLogic QLE8262 10Gbps CNAs with one port per card in use and four CPU sockets with ten cores each, we can allocate up to 16 queues per active CNA port but will stick to 8 in the examples below (the card determines the maximum number of queues, but that many CPU cores may not exist in the system). Take note: hyper-threading makes a difference, too. Since it is enabled in our environment, VMQ can only use the even-numbered logical processors, which map to physical cores (i.e. 0, 2, 4, 6, 8…). The other key here is the exclusion of the first core, zero, because the system uses it for default network processing and other primary functions that are best left uncontested.
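Before picking base processor numbers on your own hosts, it’s worth verifying the core counts and hyper-threading state; something like this works (a generic sketch, not specific to the hardware above):

```powershell
# Compare physical cores to logical processors per socket. If the
# logical count is double the core count, hyper-threading is enabled
# and the VMQ-eligible processor numbers are the even ones.
Get-CimInstance Win32_Processor |
    Select-Object DeviceID, NumberOfCores, NumberOfLogicalProcessors
```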

To implement proper, non-overlapping queues for VMQ in our setup, we use the following PowerShell commands:

List our network adapters and current settings:
Get-NetAdapterVMQ
Configure queues on the first interface:
Set-NetAdapterVMQ -Name "SLOT 2 Port 1" -BaseProcessorNumber 2 -MaxProcessors 8
Configure queues on the second interface (the adapter name below is a stand-in; use your second port’s name as reported by Get-NetAdapterVMQ):
Set-NetAdapterVMQ -Name "SLOT 3 Port 1" -BaseProcessorNumber 18 -MaxProcessors 8
Verify the new configuration:
Get-NetAdapterVMQ
At this point, queues should begin to be assigned to virtual machines on this host, assuming they are connected to a Logical Switch in VMM and have VMQ enabled in the port profile. Check with the command Get-NetAdapterVMQQueue. If you see VM names listed against the queues, you’re in business.
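To narrow the check to a single team member, the same cmdlet accepts an adapter name (the name here matches the first interface configured above):

```powershell
# Show the queues allocated on one adapter, including which processor
# each queue landed on and which VM network adapter it is serving.
Get-NetAdapterVMQQueue -Name "SLOT 2 Port 1" |
    Format-Table Name, QueueID, Processor, VmFriendlyName
```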

In the next part, we’ll unpack the situation we and a few others in the global community are facing.
