There is a NEWER version of this article!
Please see v2 of the 10GbE recommendations and troubleshooting document.
This article includes basic information about 10 Gigabit Ethernet (10GbE), as well as configuration recommendations, expected throughput examples, and troubleshooting steps that can help our users achieve optimum results with their 10GbE-enabled EVO shared storage system.
10-gigabit Ethernet (10GbE) has brought the storage and networking world rates of data transfer that were, until recently, virtually unprecedented. For all that it offers, however, correctly deploying and maintaining a 10GbE network is still a subject of debate and occasional difficulty for users. While 10GbE is a radical improvement over gigabit Ethernet, it is a common misconception that it will always deliver ten times the speed of a typical gigabit connection. That figure is the theoretical limit of the protocol; normal network environments only rarely achieve full bandwidth saturation. However, we take every precaution in the design and testing of our EVOs prior to shipment to enable our customers to reach the maximum speeds possible in a conventional network environment.
Please keep in mind that throughput numbers throughout this article only reflect the results of a single workstation writing to or reading from the EVO system, and are not indicative of the total throughput and speed an EVO is capable of delivering to multiple users. (Refer to the Latest Metrics section for more details on single workstation throughput results using various protocols.)
Checklist and basic suggestions
Setting the MTU
To ensure the best performance from an EVO using a 10GbE connection, a few settings should first be verified on both EVO and the host workstation. First, the maximum transmission unit (MTU) of both EVO and the workstation's 10GbE network card must be set to the same value. Although we generally recommend the 10GbE port's default MTU of 9000, there is no significant penalty in keeping the older value of 1500, provided EVO, the workstation, and any switch in between are all set accordingly. To check the MTU currently set for your EVO's Ethernet interface, navigate to the EVO UI's Connectivity pane and scroll to the appropriate 10GbE card. After expanding the 10GbE port's drop-down menu, you should see something like the following.
As you can see, this EVO currently has an MTU of 9000.
Next, we must verify that the workstation's 10GbE card is using the same MTU value. In OS X, this can be done by navigating to the Network pane of System Preferences, clicking the appropriate network connection, then navigating to Advanced and Hardware. There you should find the MTU currently set for your 10GbE card. As of September 2014, a bug in recent versions of OS X requires the Speed option to be set to "autoselect" to achieve an MTU of 9000. If you find that System Preferences only presents a maximum MTU of 1500, your 10GbE card's drivers may not have been installed properly, or you may have selected the wrong network device.
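The same check can also be made from the Terminal using the built-in networksetup utility. As a minimal example (en2 is only a placeholder device name; substitute the BSD name of your own 10GbE interface as reported by the first command):

networksetup -listallhardwareports
networksetup -getMTU en2
sudo networksetup -setMTU en2 9000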
In Windows, the host configuration's MTU can be verified by finding the appropriate network card in Device Manager, and then clicking Properties. (Some adapters may have their own management utilities/applications.) From this view you should see an option for "Jumbo MTUs." Be sure that if this is set to 9000 on EVO, it is also set in the field indicated here.
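The MTU currently in effect for each Windows interface can also be confirmed from an elevated Command Prompt, which is a convenient way to double-check the adapter property after applying it:

netsh interface ipv4 show subinterfaces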
No matter which operating system your workstation uses, it is very important that this value match across EVO, all workstations, and all network switches. In the course of troubleshooting 10GbE technical support incidents, we have found that mismatched MTU values almost invariably lead to an erratic or even inoperative connection.
Changes made to the MTU in EVO are applied immediately upon saving the page, while changes made to the network interface in OS X or Windows will require reconnecting to any previously mounted iSCSI or NAS shares.
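Once the MTU has been set everywhere, one quick way to confirm that jumbo frames are actually passing end to end (including through any switch) is a non-fragmenting ping sized for a 9000-byte MTU. In the example below, 10.10.10.1 is only a placeholder for your EVO's 10GbE address, and the payload of 8972 bytes accounts for the 28 bytes of IP and ICMP headers:

ping -f -l 8972 10.10.10.1     (Windows)
ping -D -s 8972 10.10.10.1     (OS X)

If this ping fails while an ordinary ping succeeds, some device in the path is still limited to an MTU of 1500.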
Kernel tuning
Given the differences in how various hardware companies implement 10GbE, your card may also require some special "tuning" before it can reach the speeds expected for a mature multimedia workflow. In most cases, these options must be set manually. Some 10GbE cards, such as the Solarflare series, offer to include these optimizations with the driver install; others, like those produced by Myricom, require editing a text file. While we provide some basic recommendations for these options as they affect 10GbE performance, please keep in mind that manufacturers' recommendations change often, and that the best place to look for hardware-specific performance tweaks remains the release notes for the latest version of your 10GbE card's driver. It is also important to note that optimizing your network card for one protocol, such as iSCSI, may adversely affect the performance of another, such as AFP. This is why some manufacturers provide a selection of drivers to choose from, each with a different intended use case and configuration.
Mac OS X
In Mac OS X, the primary way of applying 10GbE tunings is through the editing of the /etc/sysctl.conf file. All options written to this file are relayed to the kernel, the heart of the operating system, and then applied to the appropriate hardware interface – in this case the 10GbE card. As these settings directly change how the kernel "talks" to the workstation hardware and the local network, it is imperative that they be applied carefully and with the permission of your network administrator.
ATTO 10Gb installers include an option to automatically optimize settings during install, so additional tuning may not be required.
Taking into consideration manufacturers' recommendations and our own testing, the following values have provided well-balanced results on multiple network cards under OS X 10.8 and later. Bear in mind that, as with any manual kernel tuning, these values may become outdated in future versions of OS X and of your network card's firmware.
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendspace=4194304
net.inet.tcp.recvspace=4194304
net.inet.tcp.maxseg_unacked=32
net.inet.tcp.delayed_ack=2
Please also note that for networks relying heavily on SMB sharing, and especially on SMB1, setting net.inet.tcp.delayed_ack to 2 or 0 is strongly recommended.
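As a rough sketch of how one of these keys can be changed on the fly and then verified, the sysctl command can be run from the Terminal with administrator privileges; note that a value set this way does not persist across a reboot unless it is also present in /etc/sysctl.conf:

sudo sysctl -w net.inet.tcp.delayed_ack=2
sysctl net.inet.tcp.delayed_ack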
Windows 8.1
As tuning specific settings in the Windows kernel can be somewhat difficult compared to the more streamlined approach of OS X, many 10GbE manufacturers bundle certain optimizations for their cards in the driver install or as part of a special GUI manager. Given the difficulty of altering certain kernel settings in Windows manually, a more in-depth explanation of 10GbE performance tuning is outside of the scope of this article. If you are interested in further configuring your 10GbE interface in Windows, you may find this documentation from Microsoft helpful.
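For reference only, the current Windows TCP global parameters (including receive window auto-tuning, which some vendor tuning guides mention) can be viewed from an elevated Command Prompt, and the second line below shows the general syntax for changing one of them ("normal" is the Windows default). Treat any such change as experimental and confirm it against your card vendor's documentation:

netsh interface tcp show global
netsh interface tcp set global autotuninglevel=normal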
As an added note, we have found that in certain cases disabling Windows Firewall can improve the latency and overall speed of a direct 10GbE-10GbE link. This is due to the increased strain put upon the workstation by having a software filter monitor all incoming and outgoing connections. As with all important network settings, however, you should check with your local system administrator before experimenting with it.
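If you do decide to test with Windows Firewall temporarily disabled, the change can be made from an elevated Command Prompt and reversed just as easily once the test is complete:

netsh advfirewall set allprofiles state off
netsh advfirewall set allprofiles state on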
Cabling and transceivers
In addition to making sure your network card is properly configured and that your MTU is set to the same value across machines, it is important to verify that your Ethernet cabling can support 10-gigabit speeds. Currently, we recommend Category 6A (CAT6A) or duplex fiber optic cabling for all 10GbE network connections.
More detailed recommendations on cabling and transceivers can be found in our Getting started with SNS products article.
EVO-side options
Beyond the workstation and the local network themselves, there are two more EVO-side options which should be checked to ensure good 10GbE performance over NAS.
If your workflow makes heavy use of SMB and/or AFP, navigate to the EVO web interface's System pane. From there click Advanced, and scroll down to the box reading "NAS Tuning." Please verify that the "Enable SMB2" and "Optimize concurrent NAS operations" (* see note) options are both checked.
* Please note that the "Optimize concurrent NAS operations" option was introduced in v.5.6 but superseded in v.5.7, and is now unnecessary in most cases. The option has been removed from the GUI in later versions.
Network switches
In order to more easily determine and consistently achieve maximum throughput over 10GbE, we recommend that each of your 10GbE workstations be directly connected to one of EVO's 10GbE ports without a network switch in between. While we understand that network switches are often unavoidable, especially in complex networking environments, the use of a switch has the potential both to cause issues and to make issues more difficult to diagnose.
If you have followed all the recommendations in this article and are still experiencing suboptimal 10GbE performance in a switched environment, we strongly recommend temporarily bypassing any switches and attempting a direct connection from the 10GbE card in your workstation to EVO to verify that the problem is not switch-related.
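When isolating a switch or another network component, it can also help to measure raw TCP throughput independently of any storage traffic. The third-party iperf3 utility is one common way to do this between two 10GbE workstations; it is not part of EVO, and 10.10.10.2 below is only a placeholder for the address of the workstation running the server side:

iperf3 -s                        (run on the first workstation)
iperf3 -c 10.10.10.2 -t 30       (run on the second workstation)

If iperf3 reports close to line rate but file transfers remain slow, the bottleneck is more likely in the storage protocol or host configuration than in the network path itself.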
Latest metrics
Summary: A single workstation directly connected to an EVO via 10GbE, using a RAID-5 on a typical eight-disk pool, can generally be expected to see write speeds of ~600MB/s and read speeds of ~500MB/s over iSCSI, with NAS rates falling somewhere below this depending on factors such as the file-sharing protocol (e.g. SMB1, SMB2, AFP), the OS and version, the NIC and driver, and the benchmark test or individual application being used.
Keep in mind that in some cases your 10GbE speeds will decrease if the adapter sits in an external Thunderbolt or USB chassis rather than in a PCIe slot on the workstation's motherboard. If at all possible, we recommend installing the card directly in a PCIe slot on the host workstation.
For more details: Current EVO customers can view our most up-to-date metrics of 10-gigabit Ethernet performance with various protocols and benchmark tests in our guide to expected 10GbE throughput.
As we regularly test 10GbE performance with new versions of EVO, OS X and Windows, and different hardware revisions, these numbers will always reflect the most recent performance rates we have seen over 10-gigabit Ethernet.
Helping us diagnose a 10GbE performance issue
The occasional speed issues associated with 10GbE have contributed to the proliferation of bandwidth metrics, benchmark utilities, and testing tools for measuring 10GbE throughput. Unfortunately, none of these has emerged as a simple and reliable way to determine what a given workstation and network configuration's 10GbE transfer rate ought to be. Because there is no universal standard for testing 10GbE throughput, speeds cited by card manufacturers can differ greatly from those seen once the card is deployed on-site.
The results of benchmarking utilities can also vary widely across different 10GbE cards, operating systems, minor versions of operating systems, and in certain cases even between different versions of the same testing utility. To ensure that our customers receive the best possible 10GbE performance from EVO, our products undergo a rigorous suite of tests under different network and host configurations, from which we develop a recommended environment for optimum 10GbE performance.
If you are still experiencing 10GbE performance issues after reading through our suggestions, please consider sending us the results of the AJA System Test, a user-friendly tool we have found to provide relatively consistent results across different file-sharing protocols and operating systems. To exercise the highest level of throughput in AJA System Test, we recommend setting the video frame size to "4096x2160 10-bit RGB" and using a file size of 16 GB. If your share does not initially appear in the volume list on OS X, be sure that "Enable network volumes" is selected in the Preferences pane.
A typical setup and test session of AJA should look something like this:
For reference, these were our speeds writing via iSCSI on an unconfigured Mac Pro 6,1 using OS X 10.9.5 with a Myricom Myri10GbE 1.3.3.
The expected throughput between any given workstation and share is also affected by the network protocol being used. Even when using 10GbE, one can expect a direct iSCSI connection to almost always outperform one occurring over an AFP or SMB NAS share.
Contacting us
Studio Network Solutions is committed to providing a successful and reliable platform for our customers' projects. If you find that, even after testing, your 10GbE speeds are not comparable to ours, either in these screenshots or in our latest metrics, please consider opening a support ticket with a description of the issue and your network configuration.
As always, we can be reached through the Support Center page on our website.