Bonding is a sensible solution if the throughput of a high-performance network only needs to be doubled or tripled.
An upgrade to the next level of Ethernet often requires upgrading the network components as well, which bonding avoids to a certain degree.
Especially in high-performance computing, existing hardware can continue to be used if the network interconnects between nodes are the limiting factor.
In addition to increasing bandwidth, bonding provides a simple link-failover solution. In connection-oriented scenarios with a large number of clients, a bond of 10 x 1 Gbit/s interfaces offers a good alternative to a single 10 Gbit/s card. With bonding mode 5 or 6, the throughput is identical to that of a single 10 Gbit/s interface.
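As an illustration, such a bond in mode 5 (balance-tlb) can be created with the iproute2 tools; the slave interface names eth0 and eth1 are placeholders and depend on the actual hardware, and the miimon value of 100 ms is a common but assumed choice for link monitoring:

```shell
# Load the bonding driver and create a bond in balance-tlb (mode 5)
modprobe bonding
ip link add bond0 type bond mode balance-tlb miimon 100

# Slave interfaces must be down before they can be enslaved
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0

# Bring up the virtual bond interface
ip link set bond0 up
```

Further slaves are added the same way, up to the ten or twelve 1 Gbit/s ports discussed below.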
Nevertheless, if the ten server connections require the installation of a new switch, an upgrade to 10 Gbit/s is worth considering.
This holds especially for a smaller number of aggregated links, where the required ports are often already available on the installed network equipment.
A larger number of connections also consumes more energy, which has to be considered along with the cost of the network equipment.
The cost of the server equipment used in the test setup is nearly the same for one 10 Gbit/s port and for 12 x 1 Gbit/s ports. Of course, the number of required PCIe slots is higher for 10 x 1 Gbit/s than for a single 10 Gbit/s card; the test setup introduced here needed three PCIe x4 slots.
As bonding creates its own virtual interface that processes the Ethernet packets passing through it, it adds extra CPU load depending on the configured mode.
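The state of this virtual interface and its slaves can be inspected through the standard interfaces exposed by the Linux bonding driver (bond0 is an assumed interface name):

```shell
# Show the active mode, MII status, and per-slave details
cat /proc/net/bonding/bond0

# The mode alone can also be read from sysfs
cat /sys/class/net/bond0/bonding/mode
```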
For example, with mode 0 (balance-rr) the additional CPU load is relatively high compared to the other modes.
In all other modes the additional CPU load is measurable but is not a limiting factor.
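The comparatively high load of mode 0 stems from its per-packet striping: successive packets of a single stream are sent over alternating slaves and may therefore arrive out of order, and handling that reordering costs CPU time. A minimal Python sketch of this simplified, assumed distribution model:

```python
from itertools import cycle

def balance_rr(packets, slaves):
    """Assign each packet to the next slave in round-robin order,
    as the balance-rr policy does per packet (simplified model)."""
    rr = cycle(slaves)
    return [(pkt, next(rr)) for pkt in packets]

# Six packets of one stream striped across three slave interfaces:
assignment = balance_rr(range(6), ["eth0", "eth1", "eth2"])
```

Every slave carries an equal share of the stream, which yields the bandwidth gain but also splits one TCP flow across physical links.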