This article includes basic information about 10 Gigabit Ethernet (10GbE), as well as configuration recommendations, expected throughput, and troubleshooting steps that can help our users achieve optimum results with their 10GbE-enabled EVO shared storage system.
While 10GbE has certainly improved upon the performance of gigabit Ethernet (GbE), it is a common misconception that it will always provide ten times the speed of a typical gigabit connection. Though 10x is indeed the theoretical limit of the protocol, normal network environments and real-world workloads rarely achieve consistent full bandwidth saturation. However, we have taken every precaution in both the design and testing of our EVOs prior to shipment to enable our customers to reach the maximum speeds possible in a conventional network environment.
Throughput numbers throughout this article only reflect the results of a single workstation writing to or reading from the EVO system, and are not indicative of the total throughput and speed an EVO is capable of delivering to multiple users.
It is also important to keep in mind that EVO is shared storage! Thus, it is tuned to provide optimal results for a number of concurrent users/workstations working at the same time. It is not necessarily tuned by default to appease a synthetic benchmark test running on a single computer.
Checklist and basic suggestions
Setting the MTU
To ensure the best performance from an EVO utilizing a 10GbE connection, there are a few settings that should first be enabled on both EVO and the host workstation. First, it is imperative that the maximum transmission unit (MTU) of all ports in the connection path (EVO, workstation, switch) be set to the same value. Though it is generally recommended that the 10GbE port's default MTU of 9000 be used, there is no significant difference in keeping this at the older value of 1500 as long as EVO, the workstation, and any switch in-between are set accordingly. To check the MTU currently set for your EVO's Ethernet interface, navigate to the EVO UI's Connectivity pane and scroll to the appropriate 10GbE port. After expanding the 10GbE port's drop-down menu, you should notice something like the following.
As you can see, this EVO currently has an MTU of 9000.
Now, we must verify that the workstation's 10GbE card is using the same MTU value. In macOS, this can be done by navigating to the Network pane of System Preferences, selecting the appropriate network connection, then clicking Advanced and opening the Hardware tab. There you will find the MTU currently set for your 10GbE card. A bug in some versions of OS X requires the Speed option to be set to "autoselect" before an MTU of 9000 can be chosen. If System Preferences only offers a maximum MTU of 1500, your 10GbE card's drivers may not have been installed properly, or you may have selected the wrong network device.
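If you prefer the command line, macOS's built-in networksetup utility can report and change the MTU directly. The interface name en0 below is a placeholder; substitute the device name of your 10GbE adapter.

```shell
# List hardware ports to find the device name of your 10GbE adapter
# (en0 below is only an example; yours may differ):
networksetup -listallhardwareports

# Report the MTU currently set on that interface:
networksetup -getMTU en0

# Set it to 9000 to match EVO (requires an administrator password):
sudo networksetup -setMTU en0 9000
```

These are configuration commands; run them only on the workstation whose MTU you intend to change.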
In Windows, the host configuration's MTU can be verified by finding the appropriate network card in Device Manager, and then clicking Properties. (Some adapters may have their own management utilities/applications.) From this view you should see an option for "Jumbo MTUs." Be sure that if this is set to 9000 on EVO, it is also set in the field indicated here.
No matter which operating system your workstation uses, it is very important that this value match across EVO, all workstations, and all network switches. Mismatched values will require extra translation, resulting in slower and sometimes altogether broken communication between ports.
Changes made to the MTU in EVO are applied immediately upon saving the page, while changes made to the network interface in macOS or Windows require reconnecting to any previously mounted iSCSI or NAS shares.
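Once every hop is set to the same MTU, you can verify that jumbo frames actually survive the path end-to-end with a non-fragmenting ping. The payload must be the MTU minus the 28 bytes of IP and ICMP headers; the EVO address is a placeholder for your own.

```shell
# Maximum ICMP payload that fits in a 9000-byte MTU:
MTU=9000
PAYLOAD=$((MTU - 20 - 8))   # subtract the 20-byte IP header and 8-byte ICMP header
echo "$PAYLOAD"             # prints 8972

# macOS:   ping -D -s "$PAYLOAD" <evo-ip>   (-D sets the Don't Fragment bit)
# Windows: ping -f -l 8972 <evo-ip>         (-f forbids fragmentation)
```

If these pings fail while a plain ping succeeds, some device in the path is still using a smaller MTU.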
SMB with Mac
Some versions of macOS enforce SMB packet signing by default, which can significantly reduce SMB throughput. To disable it, open Terminal and enter the following command:
printf "[default]\nsigning_required=no\n" | sudo tee /etc/nsmb.conf >/dev/null
Press Enter, then type the workstation's administrator password when prompted (the cursor will not move, but the input is being accepted).
Reboot the machine for the change to take effect.
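After the reboot, you can confirm the setting from Terminal. The exact attribute names reported vary by macOS version, so treat the output labels below as approximate.

```shell
# Confirm the configuration file contains the signing setting:
cat /etc/nsmb.conf

# With an SMB share mounted, list the negotiated session attributes and look
# for the signing-related entries (names vary by macOS version):
smbutil statshares -a
```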
Keep in mind that in some cases your 10GbE speeds will decrease if the adapter is contained in an external Thunderbolt or USB chassis instead of being installed directly into a PCIe slot on the workstation's motherboard. If at all possible, we recommend plugging the card directly into the PCIe slot on the host workstation. Some hardware does not allow for PCIe expansion, in which case an external chassis will be required.
Note that while there may be several Thunderbolt ports available, more than one port may share a Thunderbolt bus. A high speed device such as a 10GbE network adapter should not share a bus with any other device. Refer to your computer's documentation to see the bus layout. For example, here is the bus configuration for a Mac Pro 6,1:
Many 10GbE adapter manufacturers offer included optimizations with the driver install. Some do not, and may require editing of a text file. While we will be providing basic recommendations for these options as they affect 10GbE performance, keep in mind that manufacturers' recommendations change often, and the best place to look first for any hardware-specific performance tweaks is the release notes to the latest version of your 10GbE card's driver. It is also important to note that optimizing your network card for one protocol, such as iSCSI, may adversely affect the performance of another protocol like AFP. This is why some manufacturers provide their users a selection of drivers to choose from, each with a different intended use case and configuration.
In OS X and macOS, the primary way of applying 10GbE tunings is by editing the /etc/sysctl.conf file. All options written to this file are relayed to the kernel, the heart of the operating system, and then applied to the appropriate hardware interface (in this case, the 10GbE card). As these settings directly change how the kernel "talks" to the workstation hardware and the local network, it is imperative that they be applied carefully and with the permission of your network administrator.
Taking into consideration manufacturers' recommendations and our own testing, some values that we have found to provide well-balanced results on multiple network cards using OS X versions 10.8+ are the following. Bear in mind that as with any manual kernel tuning, there is the possibility of it becoming outdated in future versions of macOS and your network card's firmware.
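As a hedged illustration only (these are example values, not official recommendations for your specific card), a /etc/sysctl.conf tuned for 10GbE typically adjusts keys like the following. Always confirm current values against your adapter vendor's release notes before applying anything.

```shell
# /etc/sysctl.conf -- illustrative 10GbE tunings (example values only)
kern.ipc.maxsockbuf=16777216        # allow larger socket buffers
net.inet.tcp.sendspace=8388608      # default TCP send buffer size
net.inet.tcp.recvspace=8388608      # default TCP receive buffer size
net.inet.tcp.delayed_ack=0          # disable delayed ACKs (often suggested for iSCSI)
net.inet.tcp.win_scale_factor=8     # allow aggressive TCP window scaling
```

Settings in this file take effect at boot; individual keys can also be tested at runtime with `sudo sysctl -w key=value` before committing them to the file.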
As tuning kernel settings in Windows can be somewhat more involved than macOS's streamlined approach, many 10GbE adapter manufacturers bundle optimizations for their cards in the driver installer or in a dedicated GUI manager. Given that difficulty, a more in-depth explanation of Windows 10GbE performance tuning is outside the scope of this article. If you are interested in further configuring your 10GbE interface in Windows, you may find this documentation from Microsoft helpful.
You should also confirm that the power plan in Control Panel is set to "High Performance"; we strongly recommend this, and it is generally recommended by NLE manufacturers as well.
As an added note, we have found that in certain cases disabling Windows Firewall can improve the latency and overall throughput of a direct 10GbE<->10GbE link. This is due to the increased strain put upon the workstation by having a software filter monitor all incoming and outgoing connections. As with all important network settings, however, you should check with your system administrator before experimenting with it.
Lastly, it is very possible that you will see comparatively higher 10GbE throughput using a Server edition of Windows.
Cabling and transceivers
In addition to making sure your network card is properly configured and that your MTU is set to the same value across machines, it is important to make sure that your Ethernet cabling is able to support 10 gigabits of bandwidth over Ethernet. Currently, we recommend Category 6A (CAT6A) or duplex fiber optic cabling be used for all 10GbE network purposes.
More detailed recommendations on cabling and transceivers can be found in our Getting started with SNS products article.
In order to more easily determine and consistently achieve maximum throughput over 10GbE, it is recommended that all of your 10GbE workstations be directly connected to one of EVO's 10GbE ports without a network switch in between. While we understand that network switches are often unavoidable, especially for larger or more complex networking environments, the use of a switch has the potential both to cause issues and to make issues more difficult to diagnose. EVO v.5.8 introduces a virtual switch mode, which can sometimes eliminate the need for an external switch altogether.
If you have followed all the recommendations in this article and are still experiencing suboptimal 10GbE performance in a switched environment, we strongly recommend temporarily bypassing any switches and attempting a direct connection from the 10GbE card in your workstation to EVO to verify that the problem is not switch-related.
Expected throughput, summarized
It can generally be expected that a single workstation, directly connected to an EVO via 10GbE, using a RAID-5 on a typical eight-disk pool, and running a reliable disk benchmarking utility, will yield speeds of ~500MB/s. You may see significantly more (or less) than this average depending on a number of factors, such as the Ethernet-based protocol you are using (e.g. SMB1, SMB2, SMB3, AFP, iSCSI), your workstation hardware, workstation operating system and OS version, Ethernet adapter model, NIC driver, and the benchmark test or individual application used to measure throughput. Of course, your EVO model, disks, EVO OS version, and the concurrent loads placed on the EVO system by other users will also affect the possible throughput for any given workstation.
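For context on where the ~500MB/s figure sits relative to the protocol's ceiling, the raw line-rate arithmetic is simple:

```shell
# 10GbE carries 10 gigabits per second; divide by 8 bits per byte for the raw
# ceiling in megabytes per second, before any Ethernet/IP/TCP/protocol overhead:
echo $((10000 / 8))    # prints 1250 (MB/s theoretical maximum)
```

Real-world single-client results always fall well below this raw figure: frame and protocol overhead, disk and RAID behavior, and client-side limits each take their share.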
Helping us diagnose a 10GbE performance issue
The occasional speed issues associated with 10GbE have in part led to the proliferation of a variety of bandwidth metrics, benchmark utilities, and testing tools for measuring 10GbE throughput. Unfortunately, none of these has emerged as a simple and reliable way to determine what a given workstation and network configuration's 10GbE transfer rate ought to be. Due to the lack of a universal standard for testing 10GbE throughput, speeds cited by card manufacturers can differ greatly from those seen after deployment.
Not only this, but the results of benchmarking utilities can also vary widely across different 10GbE cards, operating systems, minor versions of operating systems, and in certain cases even between different versions of the same testing utility. In order to ensure that our customers receive the best possible 10GbE performance from EVO, our products undergo a rigorous suite of tests under different network and host configurations in order to develop a recommended environment for optimum and appropriately balanced shared performance.
EVO Self Test tools (EVO-side testing)
As of EVO OS v.5.8, EVO offers a method for testing its built-in Ethernet ports, to ensure the hardware is performing as expected. You'll find the Self Test Tools page in the Troubleshooting section of the EVO interface.
In addition to displaying the real-time in/out metrics for each port, it also provides a loopback test utility.
To use the tool, connect each end of a cable to two EVO Ethernet ports, select the connected ports and the duration of the test, and then click Start. (Interfaces cannot be part of a link aggregation if they are to be tested by this tool.)
Once the test completes, ensure pop-ups are allowed for EVO in your browser, and click the "Last test result" link to view the stats. If you are using a network switch we advise running this tool in different configurations to A/B test the differences between using a switch (i.e. looping through it) and bypassing the switch.
Workstation benchmark tool (Client-side testing)
If you are experiencing 10GbE performance issues after following these recommendations, please consider sending us the results of the AJA System Test, a user-friendly tool we have found to provide relatively consistent results across different file-sharing protocols and operating systems. To use AJA System Test we recommend setting the video frame size to "4096x2160 10-bit RGB" and using a minimum file size of 16 GB — these settings in the utility will not necessarily push the EVO or your workstations as much as possible, but we recommend them as a standard for comparison purposes.
A typical setup for an AJA System Test session should look like this:
Studio Network Solutions is committed to providing a successful and reliable platform for our customers' projects. If you are still experiencing slow performance after following the recommendations in this article, please open a support ticket with a description of the issue and your network configuration.
As always, we can be reached through the Support Center page on our website.