
Jumbo Frames | Dell Compellent | CT-SCv3000 | VMware ESXi | Windows Server | Cisco Nexus 3000 , 3172

They say jumbo frames are faster: fewer, larger frames mean less per-packet overhead on a storage network.

A normal frame MTU is 1500.

A jumbo frame MTU is 9000.

All devices in the path (server NICs, iSCSI HBAs, switches, and SAN NICs) need to support jumbo frames and be configured for them end to end.

Windows Physical Server MTU Settings

For each network adapter or iSCSI HBA on the Windows Physical Server:

via command line:
netsh interface ipv4 show subinterfaces
netsh interface ipv4 set subinterface [index number] mtu=9000 store=persistent

via GUI:
open Network & Internet settings.
click Change adapter options.
right-click the adapter name and choose Properties, then Configure.
on the Advanced tab, find the Jumbo Packet property.
set it to 9014 (many drivers count the 14-byte Ethernet header in this value).

VMware ESXi Host MTU Settings (VMkernel NIC)

In the vSphere Client:
select the host, then Configure > Networking > VMkernel adapters.
select the VMkernel adapter (vmk) used for iSCSI and edit its MTU to 9000.
also set the MTU to 9000 on the vSwitch (or distributed switch) that carries it.
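The same change can be made from the ESXi shell with esxcli; vSwitch1 and vmk1 below are placeholders for your iSCSI vSwitch and VMkernel adapter:

```
# set the MTU on the standard vSwitch carrying the iSCSI VMkernel port
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# set the MTU on the iSCSI VMkernel adapter itself
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# verify
esxcli network ip interface list
```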

Dell Compellent SC Series Storage MTU Settings

Dell Compellent | SC 280184 | CT-SCv3000 | SC-Series are all the same line of storage. Commonly referred to as Compellent, the line is being phased out as Dell simplifies its storage offerings.

The jumbo frame MTU needs to be set on the fault domains. This can be done via the web GUI, but most Dell EMC ProSupport techs I talk to prefer the Dell Storage Manager Client (DSMC).

  • find ADVANCED > MTU (towards the bottom)
  • set to 9000 (JUMBO)

Setting this will automatically set the MTU on the physical ports.



Also note that the Compellent has data tiers: fast, medium, and slow. The idea is to put SSDs in the fast tier, 15K/10K drives in the medium tier, and 7K drives in the slow tier.

By default it will steer you toward automatic tiering.

But if all of your drives are the same type, put the volume on Tier 1, as it gives better performance.


Cisco Nexus 3172T | Nexus 3000 Series

This is a tough one.

In large corporate networks, different teams handle different areas. For the sake of discussion, let's say there is a virtual/VMware team and a networking team.

From what I can tell, VMware sysadmins have trouble explaining to network admins what is needed. I find this is mostly a lack of networking understanding on the VMware side. I don't criticize them; it is confusing, especially when converged systems add a layer of abstraction.

On the other side, good networking teams are hard to find. Anyone can type the commands from a work instruction, but people who actually understand the concepts and can diagnose the situation at hand are fewer and farther between than you might imagine.

A good networking team will want a proper datacenter setup with top-of-rack (ToR) and aggregation switches using Cisco Nexus switches, set up as a vPC (virtual port channel) pair for failover. Note that this is not a stack. Like a stack, the two switches communicate with each other. But unlike a stack, they are independent, so if one fails, the other takes over. The communication is simply for knowing what the other is doing, not for carrying traffic.
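A minimal NX-OS vPC sketch, for orientation only; the domain ID, keepalive addresses, and port-channel numbers below are hypothetical placeholders:

```
feature vpc
feature lacp

! vPC domain; peer-keepalive runs over a separate link (mgmt is common)
vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1

! peer link between the two Nexus switches (state sync)
interface port-channel1
  switchport mode trunk
  vpc peer-link

! downstream port channel toward a host or switch, one leg on each peer
interface port-channel20
  switchport mode trunk
  vpc 20
```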

As a result, VMware sysadmins who don't understand the vPC concept gravitate toward stacking with Cisco Catalyst switches. Plug two 48-port switches together in the back and they show up as a single 96-port switch. Simply plug everything in and boom, done. On the con side, of course, the stack is a single logical switch and a single point of failure: if the stack fails, there is an outage.

The trade-off here is that they don't have to involve the network team as much. It gives VMware Sysadmins more control and they like that.

We try to do things properly around here. We are using a Cisco Nexus 3172T - Nexus 3000 series. 

While the higher-end Nexus 7000-series and Nexus 9000-series units have per-port MTU settings, the Cisco Nexus 3172T does not. The MTU needs to be set in the QoS policy.

Getting this to work properly is a bear.
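A minimal sketch of the system-wide network-qos approach on the Nexus 3000 (the policy name "jumbo" is arbitrary):

```
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216

system qos
  service-policy type network-qos jumbo
```

9216 is the typical switch-side maximum; the switch just needs an MTU at least as large as the 9000 configured on the hosts and the SAN.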

Dell has some docs on the setup here:





sh queuing interf e1/7

sh int prio

sh int flowcontrol

sh int e1/7 | i i mtu
(yes, double "i": the first is "include" and the second is "ignore case")

sh int e1/7 | grep ig mtu

sh class-map

sh policy-map

Test Jumbo Frames

You can test jumbo frames using ping (on Windows, -f sets the don't-fragment bit and -l sets the payload size):

ping -f -l 1472 <target IP>
ping -f -l 1473 <target IP>

ping -f -l 8972 <target IP>
ping -f -l 8973 <target IP>

The 1472 tests an MTU of 1500. The overhead is 28 bytes: a 20-byte IP header plus an 8-byte ICMP header.
The 1473 should fail.

The 8972 tests an MTU of 9000, with the same 28 bytes of overhead.
The 8973 should fail.
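The payload arithmetic above can be sketched in shell:

```shell
# max ICMP ping payload = MTU - 20 (IP header) - 8 (ICMP header)
for mtu in 1500 9000; do
  echo "MTU $mtu -> max ping payload $((mtu - 28))"
done
```

For a 1500-byte MTU this prints 1472, and for 9000 it prints 8972; anything larger must fragment, which -f forbids.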

Another tool to use is mturoute; just search for it and download it:


