
Windows Server 2019 vs. Linux vs. FreeBSD Gigabit & 10GbE Networking Performance


  • gclarkii
    replied
    Originally posted by edwaleni View Post

    I only brought up brand names because cards from those NIC chipset suppliers tend to run higher in price, especially when using a fixed physical connection type.

    Ultimately I think a good test would be to connect 1 server, using the same NIC against a Xena 40Gbps test platform. Boot each OS and run it through the paces.

    https://xenanetworks.com
    Why do you need a Xena? Duplicate a Netflix OCA sans hard drives and memory amount, and make sure you give it a decent Ethernet card; the Chelsio dual 10Gb, quad 10Gb, or 100Gb cards work quite well, and you can pick up used Chelsio dual 10Gb cards for about $40. Put two of those in and boom, you've got 40Gb to play with. Netflix tells you what needs to be tuned to push that traffic.

    I've got no doubt I could build something to give a network a very good workout with a price tag of anywhere from $300 to $1000, depending on how much I want to stress the net. Unless you really need the massive number of ports the Xena Networks platforms support, why even think about spending that kind of money?

    This is all assuming that you're testing a single box, or maybe two. If you need more than 40 to 60Gb/s, I would get a Chelsio T6 (runs around $600) and you'll get 100Gb line speed.
    https://www.chelsio.com/wp-content/uploads/resources/T5-40Gb-FreeBSD-Netmap.pdf
    https://www.chelsio.com/wp-content/u...DP-FreeBSD.pdf
    https://www.chelsio.com/wp-content/u...d-toe-epyc.pdf
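    For anyone curious what "what needs to be tuned" looks like in practice: the tunables Netflix discusses for its FreeBSD-based OCAs are along these lines. The sysctl names are real FreeBSD knobs, but the values below are illustrative placeholders, not taken from Netflix's configs or the article:

    ```shell
    # /etc/sysctl.conf additions for pushing 10GbE+ on FreeBSD
    # (illustrative values only; tune for your hardware and workload)
    kern.ipc.maxsockbuf=16777216        # raise the per-socket buffer ceiling
    net.inet.tcp.sendbuf_max=16777216   # allow TCP send buffers to autoscale up
    net.inet.tcp.recvbuf_max=16777216   # allow TCP receive buffers to autoscale up
    net.inet.tcp.mssdflt=1448           # sensible default MSS for a 1500-byte MTU
    ```

    Apply at runtime with `sysctl kern.ipc.maxsockbuf=16777216` and friends, or reboot after editing the file.
    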



  • DanglingPointer
    replied
    Hi Michael,

    Just to let you know, perhaps the cheapest consumer 10GbE card you can buy brand new with out-of-the-box Linux support (no need to download drivers or compile anything, unlike Mellanox) is the Asus XG-C100C. It costs less than $100 USD and uses RJ45 copper...
    https://www.asus.com/au/Networking/XG-C100C/

    I don't work for Asus, nor am I affiliated with them. I just stumbled across them at an online store and bought a couple for direct crossover networking (server to backup server).

    It would be great if you could ping Asus for a sample! Then work your magic!
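    For anyone wanting to confirm the out-of-the-box support: the XG-C100C is built on the Aquantia AQC107, whose in-tree Linux driver is `atlantic`. A quick sanity check, with `enp1s0` as a placeholder interface name:

    ```shell
    # Confirm the kernel bound the in-tree driver to the card
    lspci -nnk | grep -A3 -i aquantia   # expect "Kernel driver in use: atlantic"

    # Confirm link speed on the interface (substitute your interface name)
    ethtool enp1s0 | grep Speed
    ```
    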



  • Michael
    replied
    Originally posted by microcode View Post
    Was Clear Linux not playing nice with the 10GBase-T machine?
    As written in the article, Clear Linux didn't ship the Mellanox driver. It did play fine with the QLogic as well as the 10GbE controller in the 2P EPYC server.



  • microcode
    replied
    Was Clear Linux not playing nice with the 10GBase-T machine?



  • spiritofreason
    replied
    These Mellanox adapters have a single 10GbE SPF+ port and are half-height cards. These are among the low-cost 10GbE SPF+ network adapters
    It's an SFP+ (Small Form-factor Pluggable) port, not an enhanced sunscreen.



  • indepe
    replied
    Originally posted by edwaleni View Post
    I have to assume the frame size is at default, 1518 bytes.

    [...]
    One of the screenshots shows an MTU of 1500.
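    The two numbers are actually consistent: a 1500-byte MTU plus the 14-byte Ethernet header and 4-byte FCS gives the 1518-byte frame; on the wire, the 8-byte preamble and 12-byte inter-frame gap bring it to 1538 bytes per frame, which bounds what 10GbE can deliver:

    ```shell
    # Per-frame bytes on the wire at MTU 1500:
    #   1500 payload + 14 header + 4 FCS + 8 preamble + 12 inter-frame gap = 1538
    awk 'BEGIN {
        printf "L2 goodput:  %.2f Gb/s\n", 1500/1538*10          # Ethernet payload rate
        printf "TCP goodput: %.2f Gb/s\n", (1500-40)/1538*10     # minus 20B IP + 20B TCP
    }'
    ```

    So a benchmark reporting ~9.4 Gb/s of TCP throughput at the default MTU is already near the theoretical ceiling.
    
    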



  • Michael
    replied
    Originally posted by vw_fan17 View Post
    For those who run stuff like this at home.. I know, I don't NEED it, I just WANT it :-)

    What are the cables used to connect Mellanox cards? IIRC, some kind of fiber / coax that gets really expensive for longer runs, like wiring up a house?
    Cables I used (and other parts) outlined in: https://www.phoronix.com/scan.php?pa...-4distro&num=2

    At least for the shorter runs, the cables weren't expensive. I haven't looked at the cost of any long runs yet; that won't come up until moving to a 10GbE backbone rather than just this one rack.



  • vw_fan17
    replied
    For those who run stuff like this at home.. I know, I don't NEED it, I just WANT it :-)

    What are the cables used to connect Mellanox cards? IIRC, some kind of fiber / coax that gets really expensive for longer runs, like wiring up a house?



  • edwaleni
    replied
    Originally posted by indepe View Post

    Somehow I'm not sure if your response suggests that I see a problem with SFP+ or with Mellanox as a brand, which is (of course) not the case at all. From what I read, at least a ~$200 single port Mellanox card is able to fully saturate even a 100 Gb line, using a single and simple Xeon CPU, so I am definitely not talking about the brand being used.

    Perhaps a $20 network card won't be a bottleneck in the context of testing latency and throughput on a 10 Gb network, but without that being tested by itself, I'd wonder whether the tests don't just depend too much on an OS's ability to deal with the specific shortcomings of a specific card.
    I only brought up brand names because cards from those NIC chipset suppliers tend to run higher in price, especially when using a fixed physical connection type.

    Ultimately I think a good test would be to connect 1 server, using the same NIC against a Xena 40Gbps test platform. Boot each OS and run it through the paces.

    https://xenanetworks.com

    It eliminates variance and questions about the switch forwarding rates, and isolates the traffic to one activity (instead of three).

    Since I suspect Warren Buffett is not an active reader of Phoronix, I doubt Michael can set aside the Benjamins needed to acquire said Xena. Therefore he has to test with the best possible combination: Mellanox cards, a Ubiquiti switch, and another 10GbE-capable host. This is probably a good representation of what his test users would use as well.

    I finally got time to read through the Ethr documentation. There are still a lot of enhancements in its pipeline, but it's good to see that PTS has something to work with.
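    For anyone who wants to try Ethr directly, basic usage looks roughly like this, per its README (check `ethr -h` on your version, as the flags may evolve; the IP address is a placeholder):

    ```shell
    # On the machine under test, start Ethr in server mode:
    ethr -s

    # From the client: TCP bandwidth test with 4 parallel threads
    ethr -c 192.168.1.10 -t b -n 4

    # From the client: latency test against the same server
    ethr -c 192.168.1.10 -t l
    ```
    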



  • indepe
    replied
    Originally posted by edwaleni View Post

    Why would he spend $200+ and the only benefit is a fixed copper or fiber connection type?

    The largest cost of 10GbE is the physical connection. By buying an adapter that has an empty SFP+ slot, he maintains flexibility of using different physical connection types (which can be expensive) and can still use low cost passive coax for near distant equipment.

    Buying a $300+ dual port Intel or Broadcom based adapter is fine and dandy if you need 24x7 support, but for simple throughput testing with proven and supported devices, the Mellanox NIC is perfect.
    Somehow I'm not sure if your response suggests that I see a problem with SFP+ or with Mellanox as a brand, which is (of course) not the case at all. From what I read, at least a ~$200 single port Mellanox card is able to fully saturate even a 100 Gb line, using a single and simple Xeon CPU, so I am definitely not talking about the brand being used.

    Perhaps a $20 network card won't be a bottleneck in the context of testing latency and throughput on a 10 Gb network, but without that being tested by itself, I'd wonder whether the tests don't just depend too much on an OS's ability to deal with the specific shortcomings of a specific card.

