Optimal Network Adaptor Settings for VMXNET3 and Windows 2008 R2

There is an ongoing debate among admins about the best settings for the VMXNET3 driver on Windows 2008 R2, and I suppose there will be many more. In this post I will attempt to point out some of the options and recommended settings for the VMXNET3 adaptor.

Global Settings

Receive Side Scaling (RSS)

Receive-Side Scaling (RSS) resolves the single-processor bottleneck by allowing the receive side network load from a network adapter to be shared across multiple processors. RSS enables packet receive-processing to scale with the number of available processors. This allows the Windows Networking subsystem to take advantage of multi-core and many core processor architectures.

By default, RSS is enabled. To disable RSS, open a command prompt and type:

netsh int tcp set global rss=disabled

There is also a second RSS setting in the VMXNET3 adaptor properties under the Advanced tab, which is disabled by default. Enable it by selecting Enabled from the dropdown.

This setting is beneficial if the server has multiple vCPUs; a single-vCPU VM will see no benefit.

If you have multiple vCPUs, it is recommended to enable RSS:

netsh int tcp set global rss=enabled

References

http://technet.microsoft.com/en-us/network/dd277646.aspx

TCP Chimney Offload

TCP Chimney Offload is a networking technology that helps transfer the workload from the CPU to a network adapter during network data transfer. In Windows Server 2008, TCP Chimney Offload enables the Windows networking subsystem to offload the processing of a TCP/IP connection to a network adapter that includes special support for TCP/IP offload processing.

For VMXNET3 on ESXi 4.x, 5.0 and 5.1, TCP Chimney Offload is not supported; turning it off or on has no effect. This is discussed in several places.

References

http://www-01.ibm.com/support/docview.wss?uid=isg3T1012648

http://support.microsoft.com/kb/951037

The Microsoft KB951037 article is of interest because it includes a table that shows how TCP Chimney interacts with programs and services and gives insight into where you can gain the most from this feature. By default this setting is enabled.

The recommendation for TCP Chimney Offload is to disable it, as it is not recognized by VMXNET3. To disable it, do the following.

Open a command prompt with administrative credentials.

At the command prompt, type the following command, and then press ENTER:

netsh int tcp set global chimney=disabled

To validate or view the TCP Chimney state:

netsh int tcp show global

Recommended setting: disabled

NetDMA State

NetDMA provides operating system support for direct memory access (DMA) offload. TCP/IP uses NetDMA to relieve the CPU from copying received data into application buffers, reducing CPU load.

Requirements for NetDMA

  • NetDMA must be enabled in BIOS
  • CPU must support Intel I/O Acceleration Technology (I/OAT)

You cannot use TCP Chimney Offload and NetDMA together.

Recommended setting: disabled
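The post gives no command for this one; on Server 2008 R2, NetDMA is also controlled through the same netsh global settings. A sketch, assuming an elevated command prompt on the guest:

```shell
:: Disable NetDMA (run in an elevated command prompt)
netsh int tcp set global netdma=disabled

:: Verify -- the "NetDMA State" line should now read "disabled"
netsh int tcp show global
```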

TCP Receive Windows Auto-Tuning Level

This feature determines the optimal receive window size by measuring the bandwidth-delay product (BDP) and the application retrieve rate, and adapting the window size to ongoing transmission path and application conditions.

Receive Window Auto-Tuning enables TCP window scaling by default, allowing up to a 16MB maximum receive window size. As the data flows over the connection, it monitors the connection, measures its current BDP and application retrieve rate, and adjusts the receive window size to optimize throughput. This replaces the TCPWindowSize registry value.

Receive Window Auto-Tuning has a number of benefits. It automatically determines the optimal receive window size on a per-connection basis. In Windows XP, the TCPWindowSize registry value applies to all connections. Applications no longer need to specify TCP window sizes through Windows Sockets options. And IT administrators no longer need to manually configure a TCP receive window size for specific computers.
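To make the BDP concrete: the receive window must cover at least bandwidth times round-trip time, or the sender stalls waiting for ACKs. A quick sketch with hypothetical numbers (a 100 Mbit/s link with 50 ms RTT):

```shell
# Bandwidth-delay product for a hypothetical 100 Mbit/s link with 50 ms RTT.
bandwidth_bps=100000000   # link speed in bits per second
rtt_ms=50                 # round-trip time in milliseconds

# BDP in bytes = (bits/sec / 8) * (RTT in seconds)
bdp_bytes=$(( bandwidth_bps / 8 * rtt_ms / 1000 ))
echo "$bdp_bytes"   # 625000
```

Roughly 610 KB must be in flight to keep this link full, well above the pre-Vista 64 KB ceiling without window scaling, and comfortably inside the 16 MB maximum that auto-tuning can scale to.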

By default this setting is enabled. To disable it, open a command prompt with administrative permissions and type:

netsh int tcp set global autotuninglevel=disabled

Recommended setting: disabled

References

http://technet.microsoft.com/en-us/magazine/2007.01.cableguy.aspx

Add-On Congestion Control Provider

The traditional slow-start and congestion avoidance algorithms in TCP help avoid network congestion by gradually increasing the TCP window at the beginning of transfers until the TCP Receive Window boundary is reached, or packet loss occurs. For broadband internet connections that combine high TCP Window with higher latency (high BDP), these algorithms do not increase the TCP windows fast enough to fully utilize the bandwidth of the connection.

Compound TCP (CTCP) increases the TCP send window more aggressively for broadband connections (with large RWIN and BDP). CTCP attempts to maximize throughput by monitoring delay variations and packet loss. It also ensures that its behavior does not impact other TCP connections negatively.

CTCP is on by default under Server 2008. Enabling it can significantly improve throughput and packet loss recovery.

To enable CTCP, in an elevated command prompt type:

netsh int tcp set global congestionprovider=ctcp

To disable CTCP:

netsh int tcp set global congestionprovider=none

Possible options are: ctcp, none, default (restores the system default value).

Recommended setting: ctcp

ECN Capability

ECN (Explicit Congestion Notification) is a mechanism that provides routers with an alternate method of communicating network congestion. It is aimed to decrease retransmissions. In essence, ECN assumes that the cause of any packet loss is router congestion. It allows routers experiencing congestion to mark packets and allow clients to automatically lower their transfer rate to prevent further packet loss. Traditionally, TCP/IP networks signal congestion by dropping packets. When ECN is successfully negotiated, an ECN-aware router may set a bit in the IP header (in the DiffServ field) instead of dropping a packet in order to signal congestion. The receiver echoes the congestion indication to the sender, which must react as though a packet drop were detected.

ECN is disabled by default, as it is possible that it may cause problems with some outdated routers that drop packets with the ECN bit set, rather than ignoring the bit.

To change ECN, in an elevated command prompt type:

netsh int tcp set global ecncapability=default

Possible settings are: enabled, disabled, default (restores the state to the system default).

The default state is: disabled

ECN is only effective in combination with AQM (Active Queue Management) router policy. It has more noticeable effect on performance with interactive connections and HTTP requests, in the presence of router congestion/packet loss. Its effect on bulk throughput with large TCP Window is less clear.

Currently, enabling this setting is not recommended, as it can have a negative impact on throughput.

Recommended setting: disabled

netsh int tcp set global ecncapability=disabled

Direct Cache Access (DCA)

Direct Cache Access (DCA) allows a capable I/O device, such as a network controller, to deliver data directly into a CPU cache. The objective of DCA is to reduce memory latency and the memory bandwidth requirement in high bandwidth (Gigabit) environments. DCA requires support from the I/O device, system chipset, and CPUs.

To enable DCA:

netsh int tcp set global dca=enabled

Available states are: enabled, disabled.

Default state: disabled

Recommended setting: disabled

To disable DCA:

netsh int tcp set global dca=disabled

These are just settings that I have used successfully in VMware environments and that work well. You can pick and choose the settings that work best for your environment.
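For convenience, the recommended settings above can be applied in one pass from an elevated command prompt. A sketch, assuming a multi-vCPU VM (skip the rss line on single-vCPU guests):

```shell
:: Apply the recommended VMXNET3/TCP global settings (elevated prompt)
netsh int tcp set global rss=enabled
netsh int tcp set global chimney=disabled
netsh int tcp set global netdma=disabled
netsh int tcp set global autotuninglevel=disabled
netsh int tcp set global congestionprovider=ctcp
netsh int tcp set global ecncapability=disabled
netsh int tcp set global dca=disabled

:: Review the result
netsh int tcp show global
```

Remember that the adapter-side RSS option under the VMXNET3 Advanced tab must still be enabled separately in the adaptor properties.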
