Network throughput

    I deal with a lot of network devices for ThinkComputers and split the duties with Sean for BIOS LEVEL.

    We generally run a few tests:
    • Disk -> network -> disk via SFTP, SMB, or CIFS: tests bandwidth/throughput, with bottlenecks in protocol, CPU, and hard disk performance
    • Disk -> network -> disk via ye olde netcat: bottleneck is the hard drive (and the IP stack, but that's fairly consistent across Linux distributions, I think)
    • RAM disk -> network -> RAM disk via netcat: bottleneck is RAM speed (sketched below)
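
    A minimal sketch of the netcat variants, assuming traditional netcat syntax (some variants drop the -p flag, and some need -q 0 or -N to exit at EOF) and a hypothetical receiver at 192.168.1.20; the tmpfs mount stands in for the RAM disk:

    # receiver: catch the stream and write it to a RAM disk
    mkdir -p /mnt/ramdisk && mount -t tmpfs -o size=1g tmpfs /mnt/ramdisk
    nc -l -p 5001 > /mnt/ramdisk/out.bin

    # sender: stage a 512 MB test file on its own RAM disk, then time the transfer
    mkdir -p /mnt/ramdisk && mount -t tmpfs -o size=1g tmpfs /mnt/ramdisk
    dd if=/dev/zero of=/mnt/ramdisk/test.bin bs=1M count=512
    time nc 192.168.1.20 5001 < /mnt/ramdisk/test.bin

    For the disk -> network -> disk variants, point the same commands at files on a regular filesystem instead of the tmpfs mount.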


    I may have omitted a test or two, but I'll send a link to this thread to Sean so he can add to it, too.

    How do others test network performance?

    Devices I've tested are primarily routers, switches/hubs/bridges, and NAS devices. I have yet to get my hands on a Killer NIC.


    I was thinking about a test suite in which one computer runs a program that performs each of the tests automatically and also handles the file/RAM-disk management operations.
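
    A rough sketch of what that harness could look like, assuming hypothetical mount points for the network shares and a 512 MB test file staged on a RAM disk beforehand:

    #!/bin/sh
    # time the same copy over each mounted network share and report MB/s
    SRC=/mnt/ramdisk/test.bin              # 512 MB file, e.g. from dd if=/dev/zero
    for target in /mnt/nfs /mnt/cifs; do
        start=$(date +%s.%N)
        cp "$SRC" "$target/test.bin" && sync   # sync so buffered writes are counted
        end=$(date +%s.%N)
        awk -v s="$start" -v e="$end" -v t="$target" \
            'BEGIN { printf "%s: %.1f MB/s\n", t, 512 / (e - s) }'
    done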

    What do you think?
    Last edited by Rhettigan; 11 February 2008, 04:38 PM.

  • #2
    /dev/zero -> network -> NFS share -> disk
    and
    disk -> NFS share -> network -> /dev/null

    The NFS protocol is quicker than CIFS/SMB or SFTP.
    In testing I try not to have both the book-end source and target be disks, because that typically isn't how an application accesses its data over a network share, and one disk or the other may end up being the bottleneck as far as sustained throughput (e.g. many gigabytes read sequentially) is concerned.
    Oh, and don't use /dev/random or /dev/urandom as sources, since they're typically rather slow to output their random data.
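
    A minimal sketch of those two pipelines using dd, assuming a hypothetical NFS mount at /mnt/nfs (dd reports its own throughput):

    # write test: /dev/zero -> network -> NFS share -> disk
    dd if=/dev/zero of=/mnt/nfs/zeros.bin bs=1M count=1024 conv=fsync

    # read test: disk -> NFS share -> network -> /dev/null
    # (drop the client's page cache first so you measure the wire, not RAM)
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/nfs/zeros.bin of=/dev/null bs=1M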

    With any given NIC/network, I'm usually already happy if the throughput I get is just above the theoretical max speed of the next slower generation of the technology.
    E.g. the theoretical max throughput of a 100 Mbit Ethernet NIC is about 10-12 MB/sec, so if that, or a bit more, is what I get out of Gigabit Ethernet, I'm usually content.
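
    A quick back-of-envelope for those ceilings (the ~5% deduction for framing/protocol overhead is my own rough assumption):

    awk 'BEGIN { printf "100 Mbit: %.1f MB/s; GigE: %.1f MB/s\n", 100/8 * 0.95, 1000/8 * 0.95 }'
    # -> 100 Mbit: 11.9 MB/s; GigE: 118.8 MB/s
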
    Last edited by Swoopy; 12 February 2008, 07:03 AM.



    • #3
      Originally posted by Swoopy View Post
      /dev/zero -> network -> NFS share -> disk
      and
      disk -> NFS share -> network -> /dev/null

      The NFS protocol is quicker than CIFS/SMB or SFTP.
      I wish I could always use NFS (I can't believe I omitted it from my list above!), but I've found that a couple of NASes don't support it. On top of that, Windows folks aren't going to use NFS.

      Perhaps a benchmark using both CIFS and NFS is appropriate. I think I've done this with most of my NAS reviews--I'll have to go back and check (I just woke up and am a little bleary-eyed because of 8 inches of snow in the past 6 hours).



      • #4
        Originally posted by Rhettigan View Post
        I deal with a lot of network devices for ThinkComputers and split the duties with Sean for BIOS LEVEL.

        We generally run a few tests:
        • Disk -> network -> disk via SFTP, SMB, or CIFS: tests bandwidth/throughput, with bottlenecks in protocol, CPU, and hard disk performance
        • Disk -> network -> disk via ye olde netcat: bottleneck is the hard drive (and the IP stack, but that's fairly consistent across Linux distributions, I think)
        • RAM disk -> network -> RAM disk via netcat: bottleneck is RAM speed


        [...]

        How do others test network performance?

        Devices I've tested are primarily routers, switches/hubs/bridges, and NAS devices. I have yet to get my hands on a Killer NIC.

        [...]

        What do you think?

        I have been using Sun's uperf (GPL), which allows a number of clients to bombard a single server. Tests with small TCP packets (< 100 bytes with tcp_nodelay) represent what the system is capable of handling in terms of interrupt + DMA processing, and are representative of the limitations that show up when the system is not able to scale the network load out across a multi-core architecture.
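
        For reference, uperf is driven by an XML profile; this is roughly what a small-packet test looks like (the option names here are from memory, so check the uperf documentation before trusting the exact syntax):

        $ cat small_tcp.xml
        <?xml version="1.0"?>
        <profile name="small_tcp">
          <group nthreads="8">
            <transaction iterations="1">
              <flowop type="connect" options="remotehost=$h protocol=tcp tcp_nodelay"/>
            </transaction>
            <transaction duration="30s">
              <flowop type="write" options="size=64"/>
            </transaction>
            <transaction iterations="1">
              <flowop type="disconnect"/>
            </transaction>
          </group>
        </profile>

        $ uperf -s                               # on the server
        $ h=192.168.1.20 uperf -m small_tcp.xml  # on the client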

        A lot is happening right now on Linux in this area in the latest kernels, in combination with the latest multi-queue NICs from Intel (based on the 82576).

        Jose



        • #5
          So for these tests, what are you wanting to actually test?

          It seems like you are talking about a "system"-level test, i.e. you have a complete (client/server) system deployed and you are swapping parts in and out to get an idea of the overall system performance.

          Are you looking at throughput of the actual OS/NIC/PHY against a standardized host?

          As you imply, the performance tests that you are doing exercise multiple parts of the system, and in a lot of cases it would be *very* difficult to keep the variables separate enough to effectively make assertions about the slower or faster parts.

          It would be great to see a client/server based test architecture available with PTS.



          • #6
            Originally posted by Wuppermann View Post
            So for these tests, what are you wanting to actually test?

            It seems like you are talking about a "system"-level test, i.e. you have a complete (client/server) system deployed and you are swapping parts in and out to get an idea of the overall system performance.

            Are you looking at throughput of the actual OS/NIC/PHY against a standardized host?

            As you imply, the performance tests that you are doing exercise multiple parts of the system, and in a lot of cases it would be *very* difficult to keep the variables separate enough to effectively make assertions about the slower or faster parts.

            It would be great to see a client/server based test architecture available with PTS.
            You can do this kind of thing if you pick a "baseline" configuration and then work from that. Back in the day we used the crummy NE2000 cards as a baseline, so we could say "my XXX card is 42.4% faster than an NE2000", and that was something we could all work with. Today we could pick a Realtek RTL8169 (or something similar), because it comes on super-cheap cards that anyone can afford to purchase as a baseline.
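
            The percentage math itself is trivial; with hypothetical throughput numbers in MB/s:

            # baseline (NE2000-class) vs. the card under test
            awk -v base=1.1 -v card=1.57 \
                'BEGIN { printf "%.1f%% faster than baseline\n", (card / base - 1) * 100 }'
            # -> 42.7% faster than baseline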

            What might also be quite nifty is to have a bake-off. Give people a budget and say "make a computer whose total cost is less than $XXX and enter it into our contest." The competition is to build the fastest possible file server within the price constraint. Of course tweaking counts as part of the competition, and people will publish and discuss their tweaks. This brings out the creative spirit, and we will collectively come up with some pretty nifty solutions.

