PROXMOX & OpenMediaVault 802.3ad against CISCO 3560g

Hi again people!

These peaceful summer days are ideal for carefully reworking parked labs and tests, and one of the things I wanted to solve once and for all (since I live with similar production scenarios) is setting up interface bonding in 'Debianish' environments, such as the PROXMOX hypervisor and OpenMediaVault iSCSI NAS appliances, against CISCO Catalyst 3560g switches.

While there is literature on the matter, the fact is that with the advent of 10Gb copper Ethernet, interest in bonding interfaces seems to be losing ground, so the recipes found on the Internet are becoming not only scarce but also outdated.
To make things worse, the changes in ifenslave are both subtle and misleading between major Debian releases, there is little information on how the whole thing works, and in the end you may find yourself with a somehow-working setup that neither works well nor complains about what is going wrong.

So, here's a guide on how to set up a CISCO Catalyst 3560g to bond four gigabit Ethernet links against an 'old', Debian-based PROXMOX 3.4 hypervisor, and four more links (for a total of 8 links in 2 channels) against a Debian-based OpenMediaVault NAS install.

The idea is to get decent bandwidth and fault tolerance between the hypervisor itself, its VMs, and the NAS.
This makes an iSCSI target for VM images, together with NFS/CIFS services for the VMs, possible in small scenarios at a fraction of the cost of a 10Gbps hardware deployment.

Catalyst first

I'm going to start with the switch configuration.
I'll copy/paste from a currently working, tested running config, but I'll also show alternate options you may try to get optimal performance.

First, consider the load-balancing algorithm that best suits your needs.
To choose correctly, read the CISCO documentation on how each option handles traffic.
Although old, the CISCO IOS 12.2 EtherChannel documentation may help, and googling around will turn up plenty more information on the matter.

My switch firmware offers six options to choose from:

c3560g(config)#port-channel load-balance ?
  dst-ip       Dst IP Addr
  dst-mac      Dst Mac Addr
  src-dst-ip   Src XOR Dst IP Addr
  src-dst-mac  Src XOR Dst Mac Addr
  src-ip       Src IP Addr
  src-mac      Src Mac Addr

 
By default, if nothing is configured, every EtherChannel uses 'src-mac'.
In my scenario, I got better usage of the links with the 'dst-ip' setting.
Note the command is global, so it applies to all EtherChannels in the switch.
So I issued the following command in global configuration mode:

port-channel load-balance dst-ip
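You can then verify which method is active with show etherchannel load-balance. On my firmware the output looks roughly like this (the exact wording varies between IOS releases, so take this as an orientative sample, not a verbatim capture):

```
c3560g#show etherchannel load-balance
EtherChannel Load-Balancing Configuration:
        dst-ip

EtherChannel Load-Balancing Addresses Used Per-Protocol:
Non-IP: Destination MAC address
  IPv4: Destination IP address
```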

 
This should be taken into account when configuring the 802.3ad settings on the Debian side, although my channels work regardless.
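For context on the Linux side of that decision: the bond-xmit-hash-policy layer3+4 setting that appears later in the interfaces files picks the outgoing NIC per flow. Here's a minimal Python sketch of the formula given in the classic kernel bonding documentation (newer kernels refine it, and the addresses and ports below are just illustrative examples) showing how different TCP connections between the same two hosts spread over four links:

```python
# Sketch of the "layer3+4" transmit hash from the classic Linux bonding
# docs: ((src_port XOR dst_port) XOR ((src_ip XOR dst_ip) AND 0xffff))
# modulo slave count. All example values below are illustrative only.
import ipaddress

def layer3_4_slave(src_ip, dst_ip, src_port, dst_port, slave_count=4):
    """Return the slave index this flow would be transmitted on."""
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    h = (src_port ^ dst_port) ^ ((s ^ d) & 0xFFFF)
    return h % slave_count

# Eight TCP flows from the hypervisor to the NAS (port 3260 = iSCSI):
flows = [layer3_4_slave("192.168.1.2", "192.168.1.3", p, 3260)
         for p in range(40000, 40008)]
# All four slave indices show up, so these flows use all four links.
```

Note this is why a single TCP stream never exceeds one link's speed: the hash is constant per flow, so aggregation only shows up with multiple concurrent connections.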

Now for the interesting part of the configuration: the gigabit interfaces assigned to the channel groups, and the configuration of the channel groups themselves.
I'm creating two Port-channels, numbered Po2 and Po3, where Po2 is made from interfaces Gi0/21 to Gi0/24, and Po3 from Gi0/17 to Gi0/20:

First, the excerpt related to the gigabit physical interfaces:

...

!         
interface GigabitEthernet0/17
 description port-channel3 assigned interface
 switchport access vlan 100
 switchport mode access
 no cdp enable
 channel-group 3 mode active
 spanning-tree portfast
!         
interface GigabitEthernet0/18
 description port-channel3 assigned interface
 switchport access vlan 100
 switchport mode access
 no cdp enable
 channel-group 3 mode active
 spanning-tree portfast
!         
interface GigabitEthernet0/19
 description port-channel3 assigned interface
 switchport access vlan 100
 switchport mode access
 no cdp enable
 channel-group 3 mode active
 spanning-tree portfast
!         
interface GigabitEthernet0/20
 description port-channel3 assigned interface
 switchport access vlan 100
 switchport mode access
 no cdp enable
 channel-group 3 mode active
 spanning-tree portfast
!         
interface GigabitEthernet0/21
 description port-channel2 assigned interface
 switchport access vlan 100
 switchport mode access
 no cdp enable
 channel-group 2 mode active
 spanning-tree portfast
!         
interface GigabitEthernet0/22
 description port-channel2 assigned interface
 switchport access vlan 100
 switchport mode access
 no cdp enable
 channel-group 2 mode active
 spanning-tree portfast
!         
interface GigabitEthernet0/23
 description port-channel2 assigned interface
 switchport access vlan 100
 switchport mode access
 no cdp enable
 channel-group 2 mode active
 spanning-tree portfast
!         
interface GigabitEthernet0/24
 description port-channel2 assigned interface
 switchport access vlan 100
 switchport mode access
 no cdp enable
 channel-group 2 mode active
 spanning-tree portfast
!

...
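As a side note, typing the same five lines into each of the four ports gets old. On my IOS release the same thing can be done in one shot with interface range (shown here for Po3's members; adjust the numbers for Po2):

```
c3560g(config)#interface range GigabitEthernet0/17 - 20
c3560g(config-if-range)#switchport access vlan 100
c3560g(config-if-range)#switchport mode access
c3560g(config-if-range)#no cdp enable
c3560g(config-if-range)#channel-group 3 mode active
c3560g(config-if-range)#spanning-tree portfast
```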

 
And now let's see what the configuration of the virtual Port-channels looks like:

...

interface Port-channel2
 description Link to OpenMediaVault NAS
 switchport access vlan 100
 switchport mode access
 ip arp inspection limit rate 20 burst interval 2
 spanning-tree portfast
!         
interface Port-channel3
 description Link to PROXMOX Hypervisor
 switchport access vlan 100
 switchport mode access
 ip arp inspection limit rate 20 burst interval 2
 spanning-tree portfast
!

...

 

PROXMOX setup

I know PROXMOX network setup is easy to do from the GUI; in fact, that is the recommended way rather than manual edits of the network configuration file.
Moreover, PROXMOX includes Open vSwitch and can manage those devices from the GUI instead of the classic kernel ones.
So sticking to the GUI is a very good idea, especially from recent versions onwards... but sometimes, for whatever reason, you may need, or even want, to do things the old-school way.

Here's a stripped down excerpt of my /etc/network/interfaces file.

  • The four bonded interfaces are eth5 to eth8, which together form the bond0 interface.

  • The bond0 interface is, in turn, bridged to the main vmbr0 interface.

  • There is an additional Ethernet interface, eth0, for management purposes.

  • Note that everything here is legacy Linux kernel bridging and bonding, no Open vSwitch at all.

    ...

    # MANAGEMENT INTERFACE

    auto eth0
    iface eth0 inet static
    address 192.168.2.2
    netmask 255.255.255.0
    broadcast 192.168.2.255
    network 192.168.2.0

    # BONDED INTERFACES

    auto eth5
    allow-hotplug eth5
    iface eth5 inet manual
    bond-master bond0

    auto eth6
    allow-hotplug eth6
    iface eth6 inet manual
    bond-master bond0

    auto eth7
    allow-hotplug eth7
    iface eth7 inet manual
    bond-master bond0

    auto eth8
    allow-hotplug eth8
    iface eth8 inet manual
    bond-master bond0

    # 802.3ad BONDING INTERFACE

    auto bond0
    iface bond0 inet manual
    bond_mode 802.3ad
    bond_miimon 100
    bond-downdelay 200
    bond-updelay 200
    bond-lacp-rate 1
    bond-xmit-hash-policy layer3+4
    txqueuelen 10000
    bond-slaves eth5 eth6 eth7 eth8

    # THE MAIN BRIDGE

    auto vmbr0
    iface vmbr0 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    gateway 192.168.1.1
    broadcast 192.168.1.255
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
    network 192.168.1.0

    ...

 
This works for me and has the important core things in place, but keep an eye on the ifenslave documentation, since it can be tricky and hard to tweak, and behaviour may change depending on NIC cards, switch, kernel, etc.

Now, although I have read many times that it is no longer necessary, here is the content of my /etc/modprobe.d/bonding.conf file, which surely doesn't hurt to have.
You may try without it first, and if something goes wrong, give it a try:

alias bond0 bonding
options bonding mode=4 miimon=100 lacp_rate=1

 
Also, my /etc/modules file explicitly declares the loading of the kernel modules 'bonding' and 'mii'.
Again, this is what I have, and it is probably no longer needed.

 

OpenMediaVault setup

This one is very similar to its PROXMOX PVE companion, being Debian-based as well.
Basically, the interface numbers vary, and the resulting bond0 interface is not bridged, so it carries the layer-3 configuration itself.
Otherwise it is the same.

...

# MANAGEMENT INTERFACE
allow-hotplug eth0
auto eth0
iface eth0 inet static
	address 192.168.2.3
	netmask 255.255.255.0
	network 192.168.2.0
	broadcast 192.168.2.255

# BONDED INTERFACES
auto eth3
allow-hotplug eth3
iface eth3 inet manual
	bond-master bond0

auto eth4
allow-hotplug eth4
iface eth4 inet manual
	bond-master bond0

auto eth7
allow-hotplug eth7
iface eth7 inet manual
	bond-master bond0

auto eth8
allow-hotplug eth8
iface eth8 inet manual
	bond-master bond0

# 802.3ad BONDING INTERFACE
auto bond0
iface bond0 inet static
	address 192.168.1.3
	netmask 255.255.255.0
	gateway 192.168.1.1
	bond-mode 802.3ad
	bond-miimon 100
	bond-downdelay 200
	bond-updelay 200
	bond-lacp-rate 1
	bond-xmit-hash-policy layer3+4
	txqueuelen 10000
	dns-nameservers 8.8.8.8
	dns-search example.com
	bond-slaves eth3 eth4 eth7 eth8

...

 
In this OpenMediaVault installation, bonding works without adding anything to /etc/modules.
On the other hand, I have created the /etc/modprobe.d/bonding.conf file with the exact same contents as before, and it works for me, although both are redundant with the configuration present in the interfaces file.

 

Testing

So once you have set the whole thing up, all that remains is to put the cables in place and restart networking, or reboot.

You can check the status of the channels on the switch with the show etherchannel command to get an overview:

c3560g#sh etherchannel         
		Channel-group listing: 
		----------------------
...    

Group: 2 
----------
Group state = L2 
Ports: 4   Maxports = 16
Port-channels: 1 Max Port-channels = 16
Protocol:   LACP
Minimum Links: 0

Group: 3 
----------
Group state = L2 
Ports: 4   Maxports = 16
Port-channels: 1 Max Port-channels = 16
Protocol:   LACP
Minimum Links: 0

...

 
But you may want a detailed view of the status of one of the channels, to compare against the bonding status reported on the Debian system, using show etherchannel X detail (where X is the number of the Port-channel interface).

c3560g#sh etherchannel 2 detail
Group state = L2 
Ports: 4   Maxports = 16
Port-channels: 1 Max Port-channels = 16
Protocol:   LACP
Minimum Links: 0
    Ports in the group:
    -------------------
Port: Gi0/21
------------

Port state    = Up Mstr Assoc In-Bndl 
Channel group = 2           Mode = Active          Gcchange = -
Port-channel  = Po2         GC   =   -             Pseudo port-channel = Po2
Port index    = 0           Load = 0x00            Protocol =   LACP

Flags:  S - Device is sending Slow LACPDUs   F - Device is sending fast LACPDUs.
        A - Device is in active mode.        P - Device is in passive mode.

Local information:
                            LACP port     Admin     Oper    Port        Port
Port      Flags   State     Priority      Key       Key     Number      State
Gi0/21    SA      bndl      32768         0x2       0x2     0x15        0x3D  

Partner's information:

                  LACP port                        Admin  Oper   Port    Port
Port      Flags   Priority  Dev ID          Age    key    Key    Number  State
Gi0/21    SA      255       0015.17de.07e4  19s    0x0    0x11   0x1     0x3D  

Age of the port in the current state: 1d:08h:01m:20s

Port: Gi0/22
------------

Port state    = Up Mstr Assoc In-Bndl 
Channel group = 2           Mode = Active          Gcchange = -
Port-channel  = Po2         GC   =   -             Pseudo port-channel = Po2
Port index    = 0           Load = 0x00            Protocol =   LACP
          
Flags:  S - Device is sending Slow LACPDUs   F - Device is sending fast LACPDUs.
        A - Device is in active mode.        P - Device is in passive mode.
          
Local information:
                            LACP port     Admin     Oper    Port        Port
Port      Flags   State     Priority      Key       Key     Number      State
Gi0/22    SA      bndl      32768         0x2       0x2     0x16        0x3D  
          
Partner's information:
          
                  LACP port                        Admin  Oper   Port    Port
Port      Flags   Priority  Dev ID          Age    key    Key    Number  State
Gi0/22    SA      255       0015.17de.07e4  19s    0x0    0x11   0x2     0x3D  
          
Age of the port in the current state: 1d:08h:01m:18s
          
Port: Gi0/23
------------
          
Port state    = Up Mstr Assoc In-Bndl 
Channel group = 2           Mode = Active          Gcchange = -
Port-channel  = Po2         GC   =   -             Pseudo port-channel = Po2
Port index    = 0           Load = 0x00            Protocol =   LACP
          
Flags:  S - Device is sending Slow LACPDUs   F - Device is sending fast LACPDUs.
        A - Device is in active mode.        P - Device is in passive mode.
          
Local information:
                            LACP port     Admin     Oper    Port        Port
Port      Flags   State     Priority      Key       Key     Number      State
Gi0/23    SA      bndl      32768         0x2       0x2     0x17        0x3D  
          
Partner's information:
          
                  LACP port                        Admin  Oper   Port    Port
Port      Flags   Priority  Dev ID          Age    key    Key    Number  State
Gi0/23    SA      255       0015.17de.07e4  14s    0x0    0x11   0x3     0x3D  
          
Age of the port in the current state: 1d:08h:01m:15s
          
Port: Gi0/24
------------
          
Port state    = Up Mstr Assoc In-Bndl 
Channel group = 2           Mode = Active          Gcchange = -
Port-channel  = Po2         GC   =   -             Pseudo port-channel = Po2
Port index    = 0           Load = 0x00            Protocol =   LACP
          
Flags:  S - Device is sending Slow LACPDUs   F - Device is sending fast LACPDUs.
        A - Device is in active mode.        P - Device is in passive mode.
          
Local information:
                            LACP port     Admin     Oper    Port        Port
Port      Flags   State     Priority      Key       Key     Number      State
Gi0/24    SA      bndl      32768         0x2       0x2     0x18        0x3D  
          
Partner's information:
          
                  LACP port                        Admin  Oper   Port    Port
Port      Flags   Priority  Dev ID          Age    key    Key    Number  State
Gi0/24    SA      255       0015.17de.07e4  19s    0x0    0x11   0x4     0x3D  
          
Age of the port in the current state: 1d:08h:01m:16s
          
                Port-channels in the group: 
                ---------------------------
          
Port-channel: Po2    (Primary Aggregator)
          
------------
          
Age of the Port-channel   = 51d:08h:33m:11s
Logical slot/port   = 2/2          Number of ports = 4
HotStandBy port = null 
Port state          = Port-channel Ag-Inuse 
Protocol            =   LACP
Port security       = Disabled
          
Ports in the Port-channel: 
          
Index   Load   Port     EC state        No of bits
------+------+------+------------------+-----------
  0     00     Gi0/21   Active             0
  0     00     Gi0/22   Active             0
  0     00     Gi0/23   Active             0
  0     00     Gi0/24   Active             0
          
Time since last port bundled:    1d:08h:01m:18s    Gi0/24

 
Conversely, checking the bonding status on the servers is the same on both systems: the current status of the bond is exposed in /proc/net/bonding/bond0.
Here's what it should look like:

root@storage:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200

802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
	Aggregator ID: 3
	Number of ports: 4
	Actor Key: 17
	Partner Key: 2
	Partner Mac Address: 00:1d:71:97:b4:00

Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:15:17:de:07:e4
Aggregator ID: 3
Slave queue ID: 0

Slave Interface: eth4
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:15:17:de:07:e5
Aggregator ID: 3
Slave queue ID: 0

Slave Interface: eth8
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:13:3b:0f:95:65
Aggregator ID: 3
Slave queue ID: 0

Slave Interface: eth7
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:13:3b:0f:95:66
Aggregator ID: 3
Slave queue ID: 0
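Eyeballing that file works, but a tiny script can flag trouble at a glance. This is just an illustrative sketch of mine, not part of any tool mentioned here (the SAMPLE string is made up); on a real host you would feed it the actual /proc/net/bonding/bond0 contents:

```python
# Hypothetical helper: scan bonding status text and list the slaves
# whose MII link is not "up". SAMPLE is a made-up excerpt for demo.
SAMPLE = """\
Slave Interface: eth3
MII Status: up
Slave Interface: eth4
MII Status: down
"""

def down_slaves(text):
    down, current = [], None
    for line in text.splitlines():
        if line.startswith("Slave Interface:"):
            current = line.split(":", 1)[1].strip()
        elif line.startswith("MII Status:") and current is not None:
            # Only the MII line right after a slave header belongs to it;
            # the bond-level MII line near the top is skipped this way.
            if line.split(":", 1)[1].strip() != "up":
                down.append(current)
            current = None
    return down

# On a real host: down_slaves(open("/proc/net/bonding/bond0").read())
```

A non-empty result means a slave dropped out of the aggregate, which is exactly the silent failure mode this setup otherwise hides behind the still-working bond.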

 
So... I hope this one may be useful to someone!
Happy summer!!!!