Aquantia AQtion AQC107 vs Intel X550-T2 - comparing 10G interfaces

The young company Aquantia was unknown to anyone two or three years ago, yet today its network controllers have become a ticket into the world of 10-Gigabit networking for thousands of gamers around the world. The brand came to market with an interesting pitch: “not everyone has Category 6 cabling that can carry 10 Gigabits per second, so where expensive network cards run at either 10G or 1G, ours also connect at 2.5G and 5G, and they are cheap and barely get warm. True, we won't mention that for a 2.5G/5G link the chip on the other end of the wire must also support the NBASE-T standard, but who cares? We have a 10G network card for $150 retail!”


Of course, this product is aimed primarily at gaming enthusiasts, but thanks to its low cost Aquantia chips have spread widely into desktop NAS boxes in the SMB segment and into workstations such as the Dell OptiPlex 7050; Nvidia, Asus, Gigabyte, ASRock and many others have chosen them.

On its website the company proudly writes that its network controllers are designed for Enterprise-segment client computers, that is, precisely for those workstations that need fast file exchange with a server. We recently tested the ASRock X470 Taichi Ultimate gaming motherboard, which has the Aquantia AQC107 network controller on board, and somehow skipped over its network performance. In this article I correct that omission and show you whether Intel's and Mellanox's positions in the 10G segment have been shaken, or whether these giants can go on sleeping peacefully.

Test bench configuration:

Server:

  • Intel Xeon E5-2603 v4
  • 32 GB ECC DDR4-2400 RDIMM
  • Motherboard ASRock Rack EPC612D4U-2T8R
  • VMware ESXi 6.7
  • NIC1 – Intel X540-T2 in PCIe Passthrough mode
  • HDDs – 3 x 10K SAS Seagate Savvio 10K.6; the bench is configured so that the drives do not participate in testing and do not affect the speed
  • OS:
    • FreeNAS 11.2 for Samba (CIFS) access (16 GB RAM for the ARC), RAID-Z1, LZ4 off, dedupe off
    • Windows Server 2016 (8 GB iSCSI LUN on a RAM disk)

Client:

  • AMD Ryzen 5 1600
  • 16 GB DDR4-3000 RAM
  • ASRock X470 Taichi Ultimate
  • NIC1: Aquantia AQtion AQC107
  • NIC2: Intel X550-T2
  • Software:
    • Windows 10 x64 Enterprise
    • Iometer for iSCSI
    • CrystalDiskMark for CIFS

The server is fitted with a 10-Gigabit Intel X540-T2 card, presented to the guest operating system in PCIe Passthrough mode under its native drivers. The client has two network cards: the integrated Aquantia AQC107 and a discrete Intel X550-T2. They were connected in turn, with the same cable, to the same server port, and the server software was configured to serve everything from memory without touching the disk subsystem.


The Intel network card is designed for virtualization: it supports SR-IOV hardware resource partitioning, has algorithms for reducing CPU load in virtual environments, and can be configured at the driver level to aggregate ports in 802.3ad mode. The Aquantia controller has none of this; there isn't even the fashionable game-acceleration software that some makers of premium motherboards flaunt. Its Windows drivers don't even offer hardware monitoring to see how the controller is doing, while Intel's do.

To tell the truth, I expected us to hit gigabytes per second and count the crumbs, peering at hundredths of a percent in the speed difference, but I was wrong. With modern 10GbE server network cards few people pay attention to the built-in offload engines, believing that the processor will cope with any iSCSI or NFS traffic, yet the basic principle of offloading TCP packets is used in every board, even the cheapest integrated Intel i350. Neither the Intel X550-T2 nor the AQtion AQC107 claims any iSCSI or NFS offload, and the chips' specs list the following offload features (a way to check what is actually enabled on the Windows client is sketched after the list):

  • Aquantia AQC107: MSI, MSI-X, LSO, RSS, IPv4/IPv6 checksums
  • Intel X550-T2: MSI, MSI-X, Tx/Rx checksums, IPsec, LSO, IPv4/IPv6 checksums
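
Since a datasheet does not tell you what the driver actually enables, here is a minimal sketch of how one might query the offload state on the Windows 10 client. It shells out to the stock NetAdapter PowerShell cmdlets from Python; the adapter name is a hypothetical placeholder, and the script is my addition rather than anything from the original test.

```python
# Sketch: query which offloads Windows currently has enabled on a NIC.
# Uses the stock NetAdapter PowerShell cmdlets via subprocess.
# "Ethernet 2" is a hypothetical adapter name -- substitute your own.
import subprocess

ADAPTER = "Ethernet 2"  # hypothetical adapter name

for cmdlet in ("Get-NetAdapterLso", "Get-NetAdapterRss", "Get-NetAdapterChecksumOffload"):
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", f'{cmdlet} -Name "{ADAPTER}"'],
        capture_output=True, text=True,
    )
    print(result.stdout)
```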

But of course no one talks about the performance of the network card's own processor, and here everything is very simple: at speeds above 1 Gbps, the number of network packets that make up file traffic starts to matter. The more packets transmitted per second, the heavier the load on the network card and the more powerful its processor has to be. The capacity problem is partly solved by support for large Jumbo Frames, which is disabled by default on any network equipment for the sake of compatibility. By increasing the packet size we reduce the number of packets carrying the same amount of traffic, easing the load on the network interface and increasing its speed; the sketch below puts numbers on this. If you do not configure the packet size, it defaults to 1500 bytes in Windows, Linux and FreeBSD, and in file-server mode that is a huge load on the network card.
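
A rough back-of-the-envelope calculation (my own arithmetic, not a figure from these tests) of how many full-size frames a NIC has to process per second at 10GbE line rate:

```python
# Why jumbo frames ease the load on a NIC: at the same line rate,
# larger frames mean far fewer packets per second to process.

LINE_RATE = 10_000_000_000             # 10GbE line rate, bits per second
PER_FRAME_OVERHEAD = 14 + 4 + 8 + 12   # Ethernet header, FCS, preamble, inter-frame gap (bytes)

def packets_per_second(mtu: int) -> float:
    """Maximum full-size frames per second at line rate for a given MTU."""
    wire_bytes = mtu + PER_FRAME_OVERHEAD
    return LINE_RATE / (wire_bytes * 8)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {packets_per_second(mtu):,.0f} packets/s")

# MTU 1500: ~812,744 packets/s
# MTU 9000: ~138,305 packets/s -- roughly six times less work per second
```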

Test results

The first test runs with default operating system settings (MTU=1500). We will use the simple and intuitive CrystalDiskMark with the smallest test file size, 50 MB, which fits entirely into the FreeNAS cache.

[Charts: Samba (CIFS) at MTU 1500, Aquantia vs Intel]

[Charts: iSCSI at MTU 1500, Aquantia vs Intel]

The difference in read speed is enormous. Thanks to its faster processor and good drivers, the Intel X550-T2 leaves the little-known integrated chip no chance.

The iSCSI tests confirm what you just saw: if the MTU on your network cannot be changed for compatibility reasons, no Aquantia will replace the Intel cards proven over the years. The expensive server-grade X550-T2 gives almost a threefold advantage in speed, though frankly, for 10G these figures are not speed but a disgrace.

Increasing MTU size

So first we raise the MTU on all three network cards to 9014 bytes. Incidentally, this is the maximum value the Intel drivers for Windows 10 allow you to set.
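
Before re-running the benchmarks it is worth checking that jumbo frames actually survive the whole path, since a single switch or NIC left at MTU 1500 will fragment or drop them. A minimal sketch using a do-not-fragment ping, assuming Windows ping syntax and a hypothetical server address (my addition, not part of the original test):

```python
# Verify jumbo frames end-to-end with a "don't fragment" ping (Windows syntax).
import subprocess

SERVER = "192.168.1.10"   # hypothetical address of the test server
# The driver's "9014" setting includes the 14-byte Ethernet header, so the
# IP MTU is 9000; subtract 20 (IPv4) + 8 (ICMP) to get the ping payload.
PAYLOAD = 9000 - 20 - 8   # = 8972

result = subprocess.run(
    ["ping", "-f", "-l", str(PAYLOAD), "-n", "1", SERVER],
    capture_output=True, text=True,
)
if result.returncode == 0:
    print(f"Jumbo frames OK: {PAYLOAD}-byte payload passed unfragmented")
else:
    print("Jumbo frames are NOT working end-to-end; check every device on the path")
```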

[Charts: benchmark at MTU 9014, Aquantia vs Intel]

The situation doesn't just level out: the dark horse pulls ahead on read operations at both MTU values.

[Charts: CPU load and 4K/1024K random access at MTU 9014]

In the iSCSI test we could only set the MTU to 9014 because of the limitations of the Windows driver, and in this test the Aquantia chip won again.

As for CPU load, in both cases neither its absolute values nor the difference between the chips is significant. Let me remind you that the client used a desktop AMD Ryzen 5 1600 with 12 threads.

Price issue

On average, a single-port network card based on the Aquantia AQC107 chip costs $155; look for the gamer-oriented cards made by Asus or Gigabyte. An Intel X550-T1 with a single network port runs about $300, and the dual-port version around $330-350. On eBay you can buy an X550 for half that, but the risk of running into a fake is high; read our article on how to tell an original Intel network card from one re-soldered by Chinese craftsmen.

Support from operating systems

Intel is in much better shape with drivers. X550-T2 cards are so expensive partly because they are supported out of the box by vSphere 6.7 hypervisors, all Linux distributions and virtually any other software. Aquantia requires manual installation of drivers downloaded from the manufacturer's website, and ESXi 6.7 support is out of the question.

Conclusions

We have looked at a typical situation in which the storage runs in a virtual environment under FreeBSD or Windows and provides block or file access to a workstation. On the client side the Aquantia AQC107 wins on both speed and price, and all that is required of the system administrator is to set the MTU to 9K or even more.

At the same time, on the server side, network cards such as the Intel X550/X540/X710 retain two significant advantages that Aquantia lacks: support in every operating system and SR-IOV technology, which, in general, sees little demand anyway.

In any case, looking at the results of disk operations with a deep command queue, all doubts disappear: the AQtion AQC107 is the winner of our comparison.

P.S. I can't say why our tests never reached the maximum bandwidth of the 10-Gigabit channel. In theory the “memory-network-memory” test should have exceeded 1 GB/s, yet we barely reached 700 megabytes per second. Perhaps this has something to do with hypervisor overhead.
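
For reference, a rough estimate (my own arithmetic, not a measurement from these tests) of the protocol-level ceiling shows how far 700 MB/s falls short of what the wire allows:

```python
# TCP payload ceiling on 10GbE: line rate minus Ethernet/IP/TCP overhead.

LINE_RATE = 10_000_000_000  # bits per second

def tcp_payload_ceiling_mb_s(mtu: int) -> float:
    """Maximum TCP payload throughput (MB/s) at line rate for a given MTU."""
    payload = mtu - 20 - 20          # minus IPv4 and TCP headers (no options)
    wire = mtu + 14 + 4 + 8 + 12     # Ethernet header, FCS, preamble, inter-frame gap
    return LINE_RATE * (payload / wire) / 8 / 1_000_000

print(f"MTU 1500: ~{tcp_payload_ceiling_mb_s(1500):.0f} MB/s")  # ~1187 MB/s
print(f"MTU 9000: ~{tcp_payload_ceiling_mb_s(9000):.0f} MB/s")  # ~1239 MB/s
```

Even at MTU 1500 the protocol itself allows roughly 1.18 GB/s of payload, so the gap down to 700 MB/s has to come from software, which makes the hypervisor a plausible suspect.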