Link Aggregation between Proxmox and a Synology NAS

I’ve been using Synology DSM as my NAS operating system of choice for some time, hosted on an HP N54L MicroServer with 4 x 3TB drives and a 128GB SSD. It performs well, and I’ve been leveraging its iSCSI and NFS functionality in my home lab for SQL Server database storage and Windows Server Failover Clusters.

With Proxmox and the Synology hooked up by a single gigabit connection, real-world disk performance was around 100MB/s, near enough maxing out the link. That wouldn’t cut it if the Synology was to have enough throughput to act as the storage backend for virtual machines, so I installed an Intel PRO/1000 PT Quad in each machine, giving an additional four gigabit network ports.

Proxmox itself supports most network bonding modes, including the most interesting one here: balance-rr (mode 0), which uses multiple network connections to increase available bandwidth rather than just provide fault tolerance or load balancing.
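As a rough sketch, a balance-rr bond over the quad card on the Proxmox side looks something like this in /etc/network/interfaces (the eth1–eth4 interface names, the address on the bond and the /24 netmask are assumptions for the example; use whatever your quad ports enumerate as):

auto bond0
iface bond0 inet static
        address 10.75.60.1
        netmask 255.255.255.0
        bond-slaves eth1 eth2 eth3 eth4
        bond-miimon 100
        bond-mode balance-rr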

I could easily create an 802.3ad link-aggregated connection between the two, which worked perfectly, but in a directly connected environment it serves no purpose beyond redundancy: the hashing algorithms used for load balancing will route all traffic between one MAC address and another via the same network port. So I set out to investigate whether the Synology could support balance-rr (mode 0) bonding, which sends packets out across all available interfaces in succession, increasing throughput.

Note: You’ll need to have already set up a network bond on both the Synology and Proxmox for this to work. I won’t cover that here as it’s simple on both platforms; what I’ll be talking about is how to enable the mode required for the highest performance.

The simple answer is no: Synology will not let you configure this through the web interface. It wants to set up an 802.3ad LACP connection or an active-passive bond (with failover in mind rather than performance). I found, however, that provided you’re not scared of a bit of config file hacking (well, you probably wouldn’t be using Proxmox if you didn’t know your way around a Linux shell, and DSM is based on Linux too), you can enable this mode and achieve the holy grail that is a high-performance aggregated link.

Simply edit /etc/sysconfig/network-scripts/ifcfg-bond0 and change the following line:

BONDING_OPTS="mode=4 use_carrier=1 miimon=100 updelay=100 lacp_rate=fast"

to

BONDING_OPTS="mode=0 use_carrier=1 miimon=100 updelay=100"

Now, reboot your Synology NAS and enjoy the additional performance this brings.
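If you want to confirm the change took effect after the reboot, the kernel reports the active mode for the bond; something along these lines should appear on both ends:

grep "Bonding Mode" /proc/net/bonding/bond0
Bonding Mode: load balancing (round-robin)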

For reference, here’s the output from ‘iperf’ performing a single transfer:

root@DiskStation:/# iperf -c 10.75.60.1 -N -P 1 -M 9000
WARNING: attempt to set TCP maximum segment size to 9000, but got 536
------------------------------------------------------------
Client connecting to 10.75.60.1, TCP port 5001
TCP window size: 96.4 KByte (default)
------------------------------------------------------------
[  3] local 10.75.60.2 port 37463 connected with 10.75.60.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  3.40 GBytes  2.92 Gbits/sec

Not bad?!?
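For completeness, the receiving end was nothing more exotic than iperf running in server mode on the Proxmox host, listening on the default port 5001 seen above (the prompt and hostname here are just illustrative):

root@proxmox:~# iperf -s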

3 comments

  • Roger June 3, 2015

Hi, can I know the model number of your Synology DiskStation? I am doing some research to see if I could increase the network throughput of my DS1511+ and DS1515+. Currently, with link aggregation of 2 x 1Gb ports, I am only getting about 350 Mb/sec and I am really puzzled by my iperf results.

  • Roger Wang June 3, 2015

I am sorry. I read it one more time and realized you had a custom-built NAS using DSM software. (Maybe I should have gone that route too; it is more flexible.) Thank you.

    • James Coleman-Powell August 16, 2015

      Sorry for the delayed reply. I’ve not used any custom packages or tweaks to achieve this, so as much as I’ve used a Custom Build (hardware), the principles should be the same.


Copyright © James Coleman-Powell, 2016