Optimal Network Adaptor Settings for VMXNET3 and Windows 2008 R2

There is an ongoing debate among admins about the best settings for the VMXNET3 driver on Windows 2008 R2, and I suppose there will be many more. In this post I will attempt to point out some of the options and recommended settings for the VMXNET3 adaptor.

Global Settings

Receive Side Scaling (RSS)

Receive-Side Scaling (RSS) resolves the single-processor bottleneck by allowing the receive side network load from a network adapter to be shared across multiple processors. RSS enables packet receive-processing to scale with the number of available processors. This allows the Windows Networking subsystem to take advantage of multi-core and many core processor architectures.

By default, RSS is enabled. To disable RSS, open an elevated command prompt and type:

netsh int tcp set global rss=disabled

There is also a second RSS setting in the VMXNET3 adaptor properties under the Advanced tab, which is disabled by default. Enable it from the dropdown.

This is a beneficial setting if the server has multiple vCPUs; on a single-vCPU server you will see no benefit.

If you have multiple vCPUs, it is recommended to have RSS enabled.

netsh int tcp set global rss=enabled
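
As a quick sanity check, here is a minimal sketch (assuming an elevated command prompt on Windows 2008 R2) that enables global RSS and then confirms the state; the exact label in the netsh output may vary slightly by build:

rem Enable Receive-Side Scaling at the OS level (elevated prompt required)
netsh int tcp set global rss=enabled

rem Verify the change; look for the "Receive-Side Scaling State" line
netsh int tcp show global | findstr /i "Receive-Side"

Remember that the per-adaptor RSS option on the Advanced tab of the VMXNET3 properties is separate from this global switch and must be enabled on its own.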

References

http://technet.microsoft.com/en-us/network/dd277646.aspx

TCP Chimney Offload

TCP Chimney Offload is a networking technology that helps transfer the workload from the CPU to a network adapter during network data transfer. In Windows Server 2008, TCP Chimney Offload enables the Windows networking subsystem to offload the processing of a TCP/IP connection to a network adapter that includes special support for TCP/IP offload processing.

For VMXNET3 on ESXi 4.x, 5.0 and 5.1, TCP Chimney Offload is not supported; turning it off or on has no effect. This is discussed in several places.

References

http://www-01.ibm.com/support/docview.wss?uid=isg3T1012648

http://support.microsoft.com/kb/951037

The Microsoft KB951037 article is of interest because it includes a table that shows how TCP Chimney interacts with programs and services and gives insight into where you can gain the most from this feature. By default this setting is enabled.

The recommendation for TCP Chimney Offload is to disable it, since it is not supported by the VMXNET3 adaptor. To disable it, do the following.

Open a command prompt with administrative credentials.

At the command prompt, type the following command, and then press ENTER:

netsh int tcp set global chimney=disabled

To validate or view the TCP Chimney setting:

netsh int tcp show global

Recommended setting: disabled
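
For reference, a minimal sketch (elevated prompt assumed) that applies the recommendation and confirms the resulting state from the netsh global output:

rem Disable TCP Chimney Offload
netsh int tcp set global chimney=disabled

rem Confirm; look for the "Chimney Offload State" line
netsh int tcp show global | findstr /i "Chimney"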

NetDMA State

NetDMA provides operating system support for direct memory access (DMA) offload. TCP/IP uses NetDMA to relieve the CPU from copying received data into application buffers, reducing CPU load.

Requirements for NetDMA

  • NetDMA must be enabled in BIOS
  • CPU must support Intel I/O Acceleration Technology (I/OAT)

You cannot use TCP Chimney Offload and NetDMA together.

Recommended setting: disabled
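
NetDMA can be turned off as well; on Windows Server 2008 R2 it can usually be toggled with the netsh netdma keyword (removed in later Windows versions) or through the EnableTCPA registry value. Check which applies to your build before relying on either. A rough sketch, assuming an elevated prompt:

rem Disable NetDMA via netsh (Windows Server 2008 R2)
netsh int tcp set global netdma=disabled

rem Alternative: disable NetDMA through the registry (EnableTCPA = 0), then reboot
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableTCPA /t REG_DWORD /d 0 /f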

TCP Receive Windows Auto-Tuning Level

This feature determines the optimal receive window size by measuring the bandwidth-delay product (BDP) and the application retrieve rate, and by adapting the window size to ongoing transmission path and application conditions.

Receive Window Auto-Tuning enables TCP window scaling by default, allowing up to a 16MB maximum receive window size. As the data flows over the connection, it monitors the connection, measures its current BDP and application retrieve rate, and adjusts the receive window size to optimize throughput. This replaces the TCPWindowSize registry value.

Receive Window Auto-Tuning has a number of benefits. It automatically determines the optimal receive window size on a per-connection basis. In Windows XP, the TCPWindowSize registry value applies to all connections. Applications no longer need to specify TCP window sizes through Windows Sockets options. And IT administrators no longer need to manually configure a TCP receive window size for specific computers.

By default this setting is enabled. To disable it, open a command prompt with administrative permissions and type:

netsh int tcp set global autotuninglevel=disabled

Recommended setting: disabled
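
If you later need to revert, autotuninglevel accepts more values than just disabled; a short sketch of the common ones, as I understand them on 2008 R2 (elevated prompt assumed):

rem Disable Receive Window Auto-Tuning (the recommendation above)
netsh int tcp set global autotuninglevel=disabled

rem Other accepted values: highlyrestricted, restricted, normal (the OS default), experimental
rem To restore the default behavior:
netsh int tcp set global autotuninglevel=normal

rem Verify the current level
netsh int tcp show global | findstr /i "Auto-Tuning"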

References

http://technet.microsoft.com/en-us/magazine/2007.01.cableguy.aspx

Add-On Congestion Control Provider

The traditional slow-start and congestion avoidance algorithms in TCP help avoid network congestion by gradually increasing the TCP window at the beginning of transfers until the TCP Receive Window boundary is reached, or packet loss occurs. For broadband internet connections that combine high TCP Window with higher latency (high BDP), these algorithms do not increase the TCP windows fast enough to fully utilize the bandwidth of the connection.

Compound TCP (CTCP) increases the TCP send window more aggressively for broadband connections (with large RWIN and BDP). CTCP attempts to maximize throughput by monitoring delay variations and packet loss. It also ensures that its behavior does not impact other TCP connections negatively.

CTCP is on by default under Server 2008. Turning this option on can significantly improve throughput and packet-loss recovery.

To enable CTCP, in elevated command prompt type:

netsh int tcp set global congestionprovider=ctcp

To disable CTCP:

netsh int tcp set global congestionprovider=none

Possible options are: ctcp, none, default (restores the system default value).

Recommended setting: ctcp
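
A short sketch to apply and confirm the congestion provider (elevated prompt assumed):

rem Set Compound TCP as the add-on congestion control provider
netsh int tcp set global congestionprovider=ctcp

rem Confirm; look for the "Add-On Congestion Control Provider" line
netsh int tcp show global | findstr /i "Congestion"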

ECN Capability

ECN (Explicit Congestion Notification) is a mechanism that provides routers with an alternate method of communicating network congestion. It is aimed at decreasing retransmissions. In essence, ECN assumes that the cause of any packet loss is router congestion. It allows routers experiencing congestion to mark packets, letting clients automatically lower their transfer rate to prevent further packet loss. Traditionally, TCP/IP networks signal congestion by dropping packets. When ECN is successfully negotiated, an ECN-aware router may set a bit in the IP header (in the DiffServ field) instead of dropping a packet in order to signal congestion. The receiver echoes the congestion indication to the sender, which must react as though a packet drop were detected.

ECN is disabled by default, as it may cause problems with some outdated routers that drop packets with the ECN bit set rather than ignoring the bit.

To change ECN, in elevated command prompt type:

netsh int tcp set global ecncapability=default

Possible settings are: enabled, disabled, default (restores the state to the system default).

The default state is: disabled

ECN is only effective in combination with AQM (Active Queue Management) router policy. It has more noticeable effect on performance with interactive connections and HTTP requests, in the presence of router congestion/packet loss. Its effect on bulk throughput with large TCP Window is less clear.

Currently, enabling this setting is not recommended, as it can have a negative impact on throughput.

Recommended setting: disabled

netsh int tcp set global ecncapability=disabled

Direct Cache Access (DCA)

Direct Cache Access (DCA) allows a capable I/O device, such as a network controller, to deliver data directly into a CPU cache. The objective of DCA is to reduce memory latency and the memory bandwidth requirement in high bandwidth (Gigabit) environments. DCA requires support from the I/O device, system chipset, and CPUs.

To enable DCA:

netsh int tcp set global dca=enabled

Available states are: enabled, disabled.

Default state: disabled

Recommended setting: disabled

To disable DCA:

netsh int tcp set global dca=disabled

These are simply settings that I have used successfully in a VMware environment and that work well. You can pick and choose the settings that work best for your environment.
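
To pull the recommendations together, here is a rough batch sketch; save it as, say, vmxnet3-tuning.cmd (the name is just an example) and run it from an elevated prompt. Treat it as a starting point under the assumptions in this post rather than a one-size-fits-all script; the netdma keyword in particular only applies to 2008 R2 era builds.

@echo off
rem Apply the settings recommended in this post (Windows 2008 R2 with VMXNET3)
rem Enable RSS only if the VM has more than one vCPU; otherwise leave it disabled
netsh int tcp set global rss=enabled
netsh int tcp set global chimney=disabled
netsh int tcp set global netdma=disabled
netsh int tcp set global autotuninglevel=disabled
netsh int tcp set global congestionprovider=ctcp
netsh int tcp set global ecncapability=disabled
netsh int tcp set global dca=disabled

rem Review the resulting global TCP parameters
netsh int tcp show global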

13 Responses to “Optimal Network Adaptor Settings for VMXNET3 and Windows 2008 R2”

  1. Mick Putley says:

    Thanks for this invaluable article. Do you have any recommendations for Server 2012 R2 using this NIC? We recently moved a file system from physical 2008 R2 to virtual 2012 R2 and our Marketing folks are seeing the performance from the Macs drop through the floor. (No complaints from the 16,000 Windows users, just the 16 Mac users.) Thanks for any thoughts.

    • newlife007 says:

      If you are running Server 2012 R2 on VMware, I would be using this NIC instead of the E1000 or any other NIC; you are missing out on all the advantages of VMware without it. I am using this NIC on Server 2012 R2 Datacenter and have had no issues at all.

    • Kwame says:

      Just checking if there is an update on this:

      Is there an update available for vSphere 5.5 with regard to the settings for Windows 2008 R2 and Windows 2012 R2?

      I know that with vSphere 5.5, Large Receive Offload (LRO), Receive-Side Scaling (RSS), and TCP Segmentation Offload (TSO) are now supported.

      Thanks

  2. Michael van says:

    Great article. Is there an update available for vSphere 5.5 with regard to the settings for Windows 2008 R2 and Windows 2012 R2?

    I know that with vSphere 5.5, Large Receive Offload (LRO), Receive-Side Scaling (RSS), and TCP Segmentation Offload (TSO) are now supported.

  3. graeme says:

    Very good read; was wondering if your recommendations suit 2008 Std. Looking to replace this server mid next year. Currently having random session freezes for 20 – 60 secs, up to 4 to 5 times a day, on a 2008 Terminal Server with about 6 users on it. All sessions freeze at once and come back with no loss of data during the freeze. Can ping the whole time.

  4. VAdmin says:

    Hi man,

    So how did you go with the testing on ESXi 5.1 or 5.5?
    We have some Win2k8 R2 VMs that are having problems with TCP retransmissions and also TCP resets, so I wonder if you've seen this case before?

    • newlife007 says:

      So far we have only a handful of guests running on 5.1 / 5.5 and they are using the same settings with no issues. We will see more once we get some of the heavy hitters on there.

  5. Jay says:

    There are a couple of inconsistencies that I find confusing:

    Regarding TCP Chimney Offload: First you say, "For VMXNET3 on ESXi 4.x, 5.0 and 5.1, TCP Chimney Offload is not supported; turning it off or on has no effect." But then later you advise disabling it. If it has no effect either way, then why bother?

    More significantly, regarding TCP Receive Windows Auto-Tuning Level: Everything you say in your description of this is positive and beneficial to performance. But then at the end you say, “Recommended setting: disabled.” Huh?


    Jay

  6. Tony says:

    I would really be interested in an updated article for Server 2012 R2 with VMXNET3 on ESXi 6.0.

    • newlife007 says:

      Give me a bit and will test it out, just built an ESXi 6 system.

    • Vadmin says:

      Tony, let us know whether the netsh batch script below must be executed on all Windows Server VMs using VMXNET3 or not:

      netsh int ip set global taskoffload=disabled
      netsh int tcp set global autotuning=disabled
      netsh int tcp set global chimney=disabled
      netsh int tcp set global congestion=none
      netsh int tcp set global rss=disabled
      netsh interface tcp set global autotuninglevel=disabled

    • newlife007 says:

      Tony,
      I just got around to testing VMXNET3 settings on Windows Server 2012 R2 and have created an updated post that addresses the newer recommended settings. See if that helps you out.
