Gentoo Forums
10GB network adapter and BOND0 stuck at 1000Mbps
Gentoo Forums Forum Index -> Networking & Security
finalturismo
Apprentice


Joined: 06 Jan 2020
Posts: 159

Posted: Wed Oct 07, 2020 6:43 am    Post subject: 10GB network adapter and BOND0 stuck at 1000Mbps

So guys, I got a free PoE switch from work that has a few 10Gb PoE ports on it and 24 1Gb ports. The stackable SFP ports also work as standard active SFP ports (rare in a lot of cases). Great off-brand switch.

Anyway, the problem I'm having is that I'm running a large bonding setup and my transfer speeds are not what I want them to be. Both my bond0 and my 10Gb SFP card are running at 1Gb network speed; I never pass 120MB/s. I have tried with and without bonding. This leads me to believe that auto-negotiation is not working on either my bond0 or my 10Gb SFP card. How should I go about solving this?

Also, I was told that I should set my 10Gb card as the primary adapter on my bond, but I didn't see any information on how to do that on Gentoo.
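For the "primary adapter" part: as far as I know, netifrc maps bonding options in /etc/conf.d/net onto the bonding driver's sysfs attributes, so a sketch might look like this (interface names are examples, substitute your own; note that "primary" is only honoured in active-backup and tlb/alb modes, not in the load-balancing modes you'd use for throughput):

```shell
# /etc/conf.d/net -- sketch of a netifrc bonding config; interface
# names are illustrative, substitute your actual slaves
slaves_bond0="eth0 eth1 eth2 eth3"
config_bond0="77.77.77.3/24"

# "primary" is only honoured by active-backup (and tlb/alb) modes
mode_bond0="active-backup"
primary_bond0="eth3"
```

At runtime the same thing can be done through sysfs with `echo eth3 > /sys/class/net/bond0/bonding/primary`.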



Here is my current ifconfig output:
Code:
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet 77.77.77.3  netmask 255.255.255.0  broadcast 77.77.77.255
        inet6 fe80::2d47:b14b:dc48:ed85  prefixlen 64  scopeid 0x20<link>
        ether 34:17:eb:c3:1f:b7  txqueuelen 1000  (Ethernet)
        RX packets 61226700  bytes 89869963871 (83.6 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 21516500  bytes 1568500238 (1.4 GiB)
        TX errors 0  dropped 3 overruns 0  carrier 0  collisions 0

eth0: flags=6147<UP,BROADCAST,SLAVE,MULTICAST>  mtu 1500
        ether 34:17:eb:c3:1f:b7  txqueuelen 1000  (Ethernet)
        RX packets 30  bytes 3938 (3.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 44  bytes 5058 (4.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xcf800000-cf8fffff 

eth1: flags=6147<UP,BROADCAST,SLAVE,MULTICAST>  mtu 1500
        ether 34:17:eb:c3:1f:b7  txqueuelen 1000  (Ethernet)
        RX packets 249997  bytes 88504002 (84.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 234685  bytes 30881082 (29.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 20  memory 0xcfd00000-cfd20000 

eth2: flags=6147<UP,BROADCAST,SLAVE,MULTICAST>  mtu 1500
        ether 34:17:eb:c3:1f:b7  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth3: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 34:17:eb:c3:1f:b7  txqueuelen 1000  (Ethernet)
        RX packets 60976673  bytes 89781455931 (83.6 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 21281771  bytes 1537614098 (1.4 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 187  bytes 16738 (16.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 187  bytes 16738 (16.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
Princess Nell
l33t


Joined: 15 Apr 2005
Posts: 827

Posted: Thu Oct 08, 2020 9:40 pm    Post subject:

Is that the Dell R720 from another thread ...

Can you provide some more detail on the network interface hardware - lspci output, type and model of card, number and type of ports, etc.? Also ethtool output for the slaves and the contents of /proc/net/bonding/bond0.

The normal procedure if autonegotiation fails is to lock the speed down on the switch - I assume it has a management interface? This may need to be combined with ethtool -s ... speed ... to do the same on the server side.

Are all interfaces in the bond 10Gb? I have seen unexpected behaviour on switches where the 10G ports were shut down if a 1G interface is in the mix.
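For example, something along these lines (eth3 is just a placeholder for whichever slave is the 10G port):

```shell
# Check what the NIC actually negotiated - look at the "Speed:",
# "Duplex:" and "Auto-negotiation:" lines
ethtool eth3

# Per-slave link state and speed as the bonding driver sees it
cat /proc/net/bonding/bond0

# If autonegotiation is misbehaving, force 10Gb full duplex
ethtool -s eth3 speed 10000 duplex full autoneg off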
finalturismo
Apprentice


Joined: 06 Jan 2020
Posts: 159

Posted: Wed Oct 14, 2020 6:48 am    Post subject:

Princess Nell wrote:
Is that the Dell R720 from another thread ...

Can you provide some more detail on the network interface hw - lspci, type and model of card, number and type of ports etc. ethtool output for the slaves, contents of /proc/net/bonding/bond0.

Normal procedure if autonegotiation fails would be to lock it down on the switch - I assume it has a management interface? This may need to be combined with ethtool -s ... speed ... to do the same on the server side.

Are all interfaces in the bond 10Gb? I have seen unexpected behaviour on switches where the 10G ports were shut down if a 1G interface is in the mix.


Seems the issue is a bit different than I thought. I just finished my SFP+ short-range OM3 10Gb fiber setup, with bonding configured on 2 ports.

I then ran iperf3 and got the following results.


Code:
[  5] local 77.77.77.4 port 56600 connected to 77.77.77.1 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   978 MBytes  8.20 Gbits/sec    0   1.14 MBytes       
[  5]   1.00-2.00   sec   986 MBytes  8.27 Gbits/sec    0   1.18 MBytes       
[  5]   2.00-3.00   sec   965 MBytes  8.09 Gbits/sec    0   1.24 MBytes       
[  5]   3.00-4.00   sec   968 MBytes  8.12 Gbits/sec    5   1000 KBytes       
[  5]   4.00-5.00   sec   952 MBytes  7.99 Gbits/sec   45   1.06 MBytes       
[  5]   5.00-6.00   sec  1004 MBytes  8.42 Gbits/sec    0   1.18 MBytes       
[  5]   6.00-7.00   sec   970 MBytes  8.14 Gbits/sec  120   1.12 MBytes       
[  5]   7.00-8.00   sec   971 MBytes  8.15 Gbits/sec    0   1.18 MBytes       
[  5]   8.00-9.00   sec   970 MBytes  8.14 Gbits/sec    0   1.20 MBytes       
[  5]   9.00-10.00  sec   971 MBytes  8.15 Gbits/sec    0   1.21 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  9.51 GBytes  8.17 Gbits/sec  170             sender
[  5]   0.00-10.01  sec  9.50 GBytes  8.16 Gbits/sec                  receiver


Those are the iperf3 results I'm getting while an rsync backup is running; I'm actually getting about 1100MB/s.

I was using dd over ssh to test my speed before, and I guess I wasn't seeing this XD.

But the problem I'm having now is: why are my SSH speeds capped at around 140MB/s?

Both my systems have a btrfs RAID10 setup and can easily sustain a minimum of 500MB/s write speed.

I would like to keep using SSH for everything, but if I have to use NFS I will. Is there any way to change the encryption method, or any settings I need to change, to get my SSH transfer speeds up?

My rsync is also stuck around 140MB/s.
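For what it's worth, single-threaded SSH encryption is a common ceiling in the 100-200MB/s range, and picking an AES-GCM cipher that uses the CPU's AES-NI instructions can help. A sketch of things to try (the cipher names are standard OpenSSH ciphers; the host address and paths are placeholders, not a tested recipe):

```shell
# List the ciphers your OpenSSH build supports
ssh -Q cipher

# Raw SSH throughput test with a hardware-accelerated cipher
# (aes128-gcm@openssh.com is usually the fastest on AES-NI CPUs)
dd if=/dev/zero bs=1M count=2000 | ssh -c aes128-gcm@openssh.com 77.77.77.1 'cat > /dev/null'

# Have rsync use the same cipher for its SSH transport
rsync -a --info=progress2 -e 'ssh -c aes128-gcm@openssh.com' /data/ 77.77.77.1:/backup/
```

If the cipher makes no difference, the bottleneck may be elsewhere (small TCP windows, or rsync's own per-file overhead on many small files).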