
IPerf Throughput on Banana PRO

Suman  
Hi,

I ran a comparative test between a FreeNAS server (PC hardware) and Bananian (Banana PRO) for Gigabit Ethernet throughput in the following topology:


                             GbE                                   GbE
IPerf Server  -----------------> Router ----------------------> IPerf Client
                             Cat 6                                 Cat 5e

The iPerf server was either the FreeNAS PC (Core i3-2100T, 8 GB DDR3-1600 MHz, 1 x GbE, Asus H77-i motherboard) or the Banana PRO.
The iPerf client was a Linux PC (Core i7-2600K, 1 x GbE, 8 GB DDR3, Asus P8Z68-V PRO/Gen 3).
The router is an Asus RT-AC68U.
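
For reference, a typical iperf 2 invocation for a test like this might look as follows (the exact flags I used are not recorded here, so treat this as a sketch; <server-ip> is a placeholder):

  # on the machine under test (FreeNAS box or Banana PRO):
  iperf -s
  # on the client, a single 10-second TCP run reporting in MBytes/sec:
  iperf -c <server-ip> -t 10 -f M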

The theoretical maximum is 125 MB/s (1000 Mbit/s).

For FreeNAS, I got a very stable 117 MB/s throughput (variation of only 2 Mbit/s across 10 runs).
For the Banana PRO, my results vary from 94 MB/s to 106 MB/s (100 MB/s average), i.e. around 98 Mbit/s of variation across the 10 runs.

I also noticed some variation when changing the client PC: with a 2007 MacBook I got around 90 MB/s on average with the B-Pro.

I am not dissatisfied in any way, but has anyone been able to get higher iPerf figures (closer to the FreeNAS figures)?

Are there any tuning options for GbE on Bananian for the B-Pro?

Some people say that the GbE on these small ARM-based SBCs cannot operate at full gigabit speed. Is that true?


Regards
Suman

tkaiser  
Edited by tkaiser at Mon Apr 6, 2015 07:48

You should test the other direction as well, since results most probably differ. I've never seen iperf measurements exceed 941-943 Mbit/s in the wild on Gbit Ethernet (it's part of my job to measure LAN topologies), and I was able to achieve that in one direction with a Banana Pi as well: http://forum.lemaker.org/forum.php?mod=viewthread&tid=12167
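
With iperf 2 you can check the reverse direction from the same client without swapping roles (standard iperf 2 client flags; <server-ip> is a placeholder):

  iperf -c <server-ip> -r    # client->server, then server->client sequentially
  iperf -c <server-ip> -d    # both directions at the same time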

In the 'early days' of the Banana Pi the maximum network speeds reported were way slower (compare with e.g. http://hardware-libre.fr/2014/06/raspberry-vs-banana-hardware-duel/). This was caused by inappropriate cpufreq settings, and maybe driver hassles or wrong GMAC initialisation as well (networking in Allwinner devices is totally different from x86 PCs, where a dedicated NIC is attached via PCIe). Please compare with http://linux-sunxi.org/Ethernet#GMAC
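
To rule out the cpufreq issue, you can check and pin the governor through sysfs before benchmarking (these are the standard Linux cpufreq paths; they might differ depending on the kernel in use):

  # show the current governor and frequency limits
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
  # pin to the performance governor while testing
  echo performance >/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor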

There are ARM SoCs that can saturate a Gbit link easily (Marvell Kirkwood/Armada for example -- but they're normally more expensive than a Banana), and there are SoCs that also have Gbit Ethernet capabilities but whose internal Gbit MAC/PHY implementation is limited to ~470 Mbit/s: Freescale's i.MX6. But since they also feature PCIe you can add a PCIe NIC, which is the case with e.g. the Utilite Pro (it comes with 2 Gbit NICs, but one suffers from the ~470 Mbit/s limitation, so you will never be able to use this device as a full-speed Gbit router).

When I ran iperf tests on Allwinner systems a few months ago (back when I had exactly no clue how they differ from the network hardware we normally use) I realized that you sometimes get fantastic iperf throughput values (especially with a kernel config where the second CPU core can jump in during the test) that have no correlation to real-world performance: if maximum throughput is only reached when both CPU cores are 100% utilized, performance will drop in real-world situations (or even in combined benchmarks that rely on both network and I/O throughput).

Another fact I had to learn the hard way: since these SoCs are so slow and all networking is handled inside the SoC, even the slightest CPU-intensive background activity can influence benchmarks. So if you don't take care of that, you will get results that are hard to interpret.
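
A simple way to spot such background activity is to watch per-core utilisation in a second shell while the benchmark runs (mpstat comes with the sysstat package; plain top works too):

  # one-second per-core samples; watch for a core pegged in %sys/%soft
  mpstat -P ALL 1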

Suman  
Thanks for pointing this out. There is indeed a difference in throughput between directions:

If I reverse the iPerf client and server nodes in the above test topology, the FreeNAS throughput falls from 117 MB/s to a consistent 95 MB/s (a 19% drop).
For the Banana PRO, the same trend is observed and the throughput falls from 100 MB/s to 64 MB/s (a 36% drop).

What is the reason for such an asymmetric result?

The drop on Linux/ARM is almost twice as large as on the FreeBSD-based FreeNAS. Is the differing networking stack the reason?

tkaiser  
Edited by tkaiser at Tue Apr 7, 2015 04:33
Suman replied at Mon Apr 6, 2015 10:07
What is the reason for such an asymmetric result?


If the hardware itself is capable, then most of the time it's something like window scaling options and TCP/IP stack (tuning). On slow ARM boards the problem seems to be the board itself or the current driver situation. You should have a look at CPU utilisation while running your tests. And compare what you get if you assign all eth0 IRQs to CPU 1 using
  echo 2 >/proc/irq/$(awk -F":" '/eth0/ {print $1}' </proc/interrupts)/smp_affinity
and using different window sizes on the machine initiating the iperf test.
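
A quick window-size sweep from the client could look like this (the sizes are arbitrary starting points, not recommendations):

  for w in 64K 128K 256K; do iperf -c <server-ip> -w $w -t 10; done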

Unfortunately you won't detect every potential performance showstopper even with iperf. For example, OS X's TCP/IP stack has problems with the network stacks of many other operating systems (e.g. recent Linux kernels, Solaris 10 and above). While you might get iperf values that look promising (935 Mbit/s or above), performance on higher layers (AFP, comparable to SMB/CIFS in the Windows world) is way too slow. The reason is a TCP/IP parameter on this client that still defaults to 3: net.inet.tcp.delayed_ack

So unless you set
  net.inet.tcp.delayed_ack=2
in /etc/sysctl.conf you will get nice iperf results but 'real world' performance sucks. That's the reason why I always test three things when it comes to measuring real client/server throughput or nailing down performance problems: local I/O performance, 'raw' network performance using iperf/netperf, and both combined using HELIOS' LanTest.
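
For a quick check the setting can also be applied on the fly on the OS X client before touching /etc/sysctl.conf (it reverts at reboot):

  sudo sysctl -w net.inet.tcp.delayed_ack=2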

Suman  
Has any attempt been made to use jumbo frame support (MTU >> the usual 1500 bytes) on Bananian, and can it give any better iPerf throughput?

I tried 4000+ and 9000+ settings to match my Windows PC client, and after restarting, the B-Pro's network interfaces did not come up at all in both cases.

tkaiser  
Suman replied at Sat Apr 11, 2015 21:56
Has any attempt been made to use jumbo frame support (MTU >> the usual 1500 bytes) on Bananian and  ...

3838 seems to be the maximum. But while it helped with iperf throughput, no 'real world' performance increase happened. LanTest results were even slower.
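
For anyone who wants to reproduce this: the MTU can be changed temporarily before persisting it in the interface configuration (eth0 assumed):

  ip link set dev eth0 mtu 3838
  # or the older way: ifconfig eth0 mtu 3838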

pauljp  
Hi there, I am trying to use iperf on the Banana Pro for my Wi-Fi research. Could you advise which OS provides the highest throughput?
How were your results with the Banana Pro?
Thanks
PaulJP
