Monthly Archives: April 2013

Disabling Large Send Offload – Windows

In an earlier post, I described the Large Send Offload (LSO) feature of modern Ethernet adapters and why it can cause havoc with network performance.  And since this is enabled by default, you have to manually disable it.  For Windows, this can be done in the Ethernet adapter properties (which I prefer) or in the TCP/IP network stack.  I'll start with disabling LSO in the TCP/IP network stack since Microsoft uses some confusing terms that you'll want to be familiar with.

Microsoft refers to LSO as "TCP Chimney Offload".  For Windows Server 2008 and later, it's described in MS Support Article 951037.  LSO was first supported in Windows Server 2003 with the release of the Scalable Networking Pack, which was integrated into Service Pack 2, and is described in MS Support Article 912222.

Disabling LSO on Windows Server 2003

This has to be done by editing the registry.  Open regedit and locate this key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

Then right-click on EnableTCPChimney and select Modify from the pop-up menu.  Change the value to 0 and click on OK.  You can also use this REG command (the command is one line but shown here on two lines for clarity):

reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableTCPChimney
    /t REG_DWORD /d 0


Windows must be restarted before the change will take effect.
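If you want to double-check the value after the reboot, you can query the same key (a quick sanity check using REG QUERY):

reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v EnableTCPChimney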

Disabling LSO on Windows Server 2008 and higher

This is easily done using a NETSH command:

netsh interface tcp set global chimney=disabled
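To verify the change, you can display the global TCP settings and confirm that the chimney offload state now shows as disabled:

netsh interface tcp show global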


Disabling LSO on the Ethernet adapter

This works in all versions of Windows Server since it's done at the driver level.  Go to where the network adapters are located in the Control Panel.  For Windows Server 2003, this will be under Network Connections.  For Windows Server 2008, this will be under Network and Sharing Center –> Change Adapter Settings.

Now right-click on the network adapter and choose Properties from the pop-up menu.  At the top of this window will be a "Connect using" text field with the vendor and model of the network adapter.  For my example, I'm using an Intel 52575 Gigabit adapter.  Just below this text field, click on the Configure button.

Now click on the Advanced tab, which shows the configurable properties for the adapter.  Find the entry for Large Send Offload.  This is how it's labeled on Intel adapters, but the name will vary (sometimes wildly) for adapters from other vendors.  If it's a modern adapter like this one, there will be a setting for both IPv4 and IPv6.  For older adapters, there will only be a setting for IPv4.  Change the value for Large Send Offload from "Enabled" (or "On") to "Disabled" (or "Off") and click on OK.

Intel NIC Properties

WARNING:  Changing any of the adapter properties causes the driver to be restarted.  There will be a brief (1-2 minute) loss of network connectivity.
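As a side note, on Windows Server 2012 and later the same driver-level setting can usually be toggled from PowerShell instead of the GUI.  This is just a sketch; "Ethernet" is a placeholder for whatever your adapter is named, and the same warning about the driver restart applies:

Get-NetAdapterLso -Name "Ethernet"
Disable-NetAdapterLso -Name "Ethernet" -IPv4 -IPv6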

Posted in Networking, Windows.

What’s Block Size Got To Do With It?

(With apologies to Tina Turner.)

On VMware ESXi/vSphere versions through 4.x, the maximum size of a file on a datastore is determined entirely by the block size specified when the datastore was formatted.  This in turn drives the maximum size of any virtual hard disk.  For many virtual machines, this might not be an issue.  But if you want to create a virtual hard disk larger than 256 GB, then you need to pay attention!

VMFS-2 and VMFS-3

By default, the Virtual Machine File System (VMFS) uses a 1 MB block size, which supports a maximum file size of 256 GB on VMFS-3 and 456 GB on VMFS-2.  (No, you read that right.  The maximum file size actually went down between VMFS-2 and VMFS-3.)  To support larger files, you need to use a larger block size when the datastore is formatted.

Block Size    VMFS-2 Max File Size    VMFS-3 Max File Size
1 MB          456 GB                  256 GB*
2 MB          912 GB                  512 GB*
4 MB          1.78 TB                 1 TB*
8 MB          2 TB                    2 TB – 512 bytes
16 MB         2 TB                    ** Not Valid **
32 MB         2 TB                    ** Not Valid **
64 MB         2 TB                    ** Not Valid **

* On ESXi 4.0, 512 bytes is subtracted from the maximum file size for any block size on a VMFS-3 datastore.  On ESXi 4.1, this only occurs when the block size is 8 MB.

Once the block size is set, the only way to change it is to reformat the datastore.  That means moving the existing data elsewhere first, because formatting a datastore, like formatting any disk, destroys the existing data.
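For reference, this is roughly what formatting a VMFS-3 datastore with a larger block size looks like from the ESXi command line.  The label and device path below are made-up examples, so substitute your own (and make very sure the target holds nothing you need):

vmkfstools -C vmfs3 -b 8m -S BigDatastore /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1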

VMFS-5

In ESXi/vSphere 5.0, VMware introduced VMFS-5 (there is no VMFS-4), which uses a unified 1 MB block size that cannot be changed.  But the maximum file size is now 2 TB – 512 bytes regardless, so block size no longer matters.  As is the case with any new feature, it's only available on ESXi/vSphere 5.0 and higher.

If you upgrade an older ESXi/vSphere host to 5.0 or higher and the existing datastores use VMFS-3, you can upgrade the datastores to VMFS-5.  This is a non-destructive process, meaning the upgrade can be done with live data on the datastore.  (You should always have backups in case anything goes wrong!)  Upgrading from VMFS-3 to VMFS-5 will not give you all of the VMFS-5 features, but you will get most of them.
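If you prefer the command line to the vSphere Client, the in-place upgrade can typically be run with esxcli on the host; "datastore1" below is just a placeholder for the datastore label:

esxcli storage vmfs upgrade -l datastore1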

Further Reading

See VMware Knowledge Base Article 1003565 for details on block size and how it affects the maximum file size.

See VMFS-5 Upgrade Considerations for information about upgrading an existing VMFS-3 datastore to VMFS-5.

Posted in VMware.

Large Send Offload and Network Performance

One issue that I continually see reported by customers is slow network performance.  Although there are a ton of issues that can affect how fast data moves to and from a server, there is one fix I've found that resolves this 99% of the time: disable Large Send Offload on the Ethernet adapter.

So what is Large Send Offload (also known as Large Segmentation Offload, or LSO for short)?  It's a feature on modern Ethernet adapters that allows the TCP/IP network stack to build a large TCP message of up to 64 KB in length before sending it to the Ethernet adapter.  The hardware on the Ethernet adapter (what I'll call the LSO engine) then segments it into smaller data packets (known as "frames" in Ethernet terminology) that can be sent over the wire: up to 1500 bytes for standard Ethernet frames and up to 9000 bytes for jumbo Ethernet frames.  In return, this frees the server's CPU from having to segment large TCP messages into smaller packets that fit inside the supported frame size, which means better overall server performance.  Sounds like a good deal.  What could possibly go wrong?

Quite a lot, as it turns out.  For this to work, the other network devices (the Ethernet switches through which all traffic flows) have to agree on the frame size.  The server cannot send frames that are larger than the Maximum Transmission Unit (MTU) supported by the switches.  And this is where everything can, and often does, fall apart.

The server can discover the MTU by asking the switch for the frame size, but there is no way for the server to pass this along to the Ethernet adapter.  The LSO engine doesn't have the ability to use a dynamic frame size.  It simply uses the standard default value of 1500 bytes, or, if jumbo frames are enabled, the size of the jumbo frame configured for the adapter.  (Because the maximum size of a jumbo frame can vary between different switches, most adapters allow you to set or select a value.)  So what happens if the LSO engine sends a frame larger than the switch supports?  The switch silently drops the frame.  And this is where a performance enhancement feature becomes a performance degradation nightmare.
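A quick way to see what frame size a path will actually carry is to send a ping with the "don't fragment" flag set.  On Windows that looks something like this, where 1472 is 1500 bytes minus the 28 bytes of IP and ICMP headers, and the host name is just an example; if anything along the path can't handle the frame, the ping fails instead of being fragmented:

ping -f -l 1472 server.example.com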

To understand why this hits network performance so hard, let's follow a typical large TCP message as it traverses the network between two hosts.

  1. With LSO enabled, the TCP/IP network stack on the server builds a large TCP message.
  2. The server sends the large TCP message to the Ethernet adapter to be segmented by its LSO engine for the network.  Because the LSO engine cannot discover the MTU supported by the switch, it uses a standard default value.
  3. The LSO engine sends each of the frame segments that make up the large TCP message to the switch.
  4. The switch receives the frame segments, but because LSO sent frames larger than the MTU, they are silently discarded.
  5. On the server that is waiting to receive the TCP message, the timeout clock reaches zero when no data arrives, and it sends back a request to retransmit the data.  Although the timeout is very short in human terms, it's rather long in computer terms.
  6. The sending server receives the retransmission request and rebuilds the TCP message.  But because this is a retransmission request, the server does not send the TCP message to the Ethernet adapter to be segmented.  Instead, it handles the segmentation process itself.  This appears to be designed to overcome failures caused by the offloading hardware on the adapter.
  7. The switch receives the retransmission frames from the server, which are the proper size because the server is able to discover the MTU, and forwards them on toward the destination.
  8. The other server finally receives the TCP message intact.

This can basically be summed up as: offload data, segment data, discard data, wait for timeout, request retransmission, segment retransmission data, resend data.  The big delay is waiting for the timeout clock on the receiving server to reach zero.  And the whole process is repeated the very next time a large TCP message is sent.  So is it any wonder that this can cause severe network performance issues?

This is by no means an issue that affects only Peer 1.  Google is littered with articles by major vendors of both hardware and software telling their customers to turn off Large Send Offload.  Nor is it specific to one operating system.  It affects both Linux and Windows.

I've found that Intel adapters are by far the worst offenders with Large Send Offload, but Broadcom adapters have problems with this as well.  And, naturally, this is a feature that is enabled by default, meaning that you have to explicitly turn it off in the Ethernet driver (preferred) or in the server's TCP/IP network stack.
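On Linux, for example, the driver-level setting can usually be checked and turned off with ethtool; eth0 here is just a placeholder interface name:

ethtool -k eth0
ethtool -K eth0 tso off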

In the next article, I'll describe how to turn off Large Send Offload on both Linux and Windows systems.

Posted in Linux, Networking, Windows.
All information in this blog is provided "AS IS" with no warranties and confers no rights.
The opinions expressed in this blog are mine alone and do not represent those of my employer.