I deal with a lot of network devices for ThinkComputers and split the duties with Sean for BIOS LEVEL.
We generally run a few tests:
- Disk -> network -> disk via SFTP or SMB/CIFS: tests bandwidth/throughput with bottlenecks in protocol, CPU, and hard disk performance
- Disk -> network -> disk via ye olde netcat: bottleneck is the hard drive (and the IP stack, but that's fairly consistent across Linux distributions, I think)
- RAM disk -> network -> RAM disk via netcat: bottleneck is RAM speed
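The RAM-to-RAM test above can be sketched over loopback without netcat at all: plain TCP sockets with in-memory buffers on both ends, so neither side can be disk-bound. The port choice and transfer sizes below are arbitrary, and this is an illustration of the idea rather than the actual test setup.

```python
# Localhost sketch of the RAM -> network -> RAM test: TCP sockets stand
# in for netcat, in-memory buffers stand in for the RAM disks.
import socket
import threading
import time

def loopback_throughput(total_mib=32, chunk_mib=1):
    payload = b"\x00" * (chunk_mib << 20)
    server = socket.socket()
    server.bind(("127.0.0.1", 0))   # let the OS pick a free port
    server.listen(1)
    received = [0]

    def drain():
        conn, _ = server.accept()
        with conn:
            while True:
                data = conn.recv(1 << 16)
                if not data:
                    break
                received[0] += len(data)

    t = threading.Thread(target=drain)
    t.start()
    client = socket.socket()
    client.connect(server.getsockname())
    start = time.time()
    for _ in range(total_mib // chunk_mib):
        client.sendall(payload)
    client.close()
    t.join()
    server.close()
    elapsed = time.time() - start
    return received[0], received[0] / (1 << 20) / elapsed  # bytes, MiB/s

if __name__ == "__main__":
    nbytes, rate = loopback_throughput()
    print(f"{nbytes >> 20} MiB at {rate:.0f} MiB/s over loopback")
```

Over loopback this mostly measures memory and IP-stack speed, which is the point of the RAM-disk variant: everything else has been taken out of the path.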
I may have omitted a test or two, but I'll send a link to this thread to Sean so he can add to it, too.
How do others test network performance?
Devices I've tested are primarily routers, switches/hubs/bridges, and NAS devices. I have yet to get my hands on a Killer NIC.
I was thinking about a test suite in which one computer runs a program that performs each of the tests automatically and also handles the file/RAM management operations.
What do you think?
Last edited by Rhettigan; 02-11-2008 at 03:38 PM.
/dev/zero -> network -> NFS share -> disk
disk -> NFS share -> network -> /dev/null
The NFS protocol is quicker than CIFS/SMB or SFTP.
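Those two pipelines boil down to streaming a byte stream from a source to a sink and timing it. A rough Python equivalent, where the NFS mount point is a made-up example path:

```python
# Sketch of the /dev/zero -> network share -> disk direction. Reading
# from /dev/zero keeps the source side from being the bottleneck; the
# /mnt/nas path is hypothetical.
import time

def stream(src_path, dst_path, total_mib=256, chunk=1 << 20):
    copied = 0
    start = time.time()
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while copied < total_mib << 20:
            data = src.read(chunk)
            if not data:
                break
            dst.write(data)
            copied += len(data)
    elapsed = time.time() - start
    return copied, copied / (1 << 20) / elapsed  # bytes, MiB/s

# e.g. stream("/dev/zero", "/mnt/nas/testfile")   # write direction
#      stream("/mnt/nas/testfile", "/dev/null")   # read direction
```

In practice `dd` over the mount does the same job; the sketch just makes the source/sink symmetry explicit.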
In testing I try not to have both the book-end source and target be disks, because that is typically not how an application accesses its data over a network share. Also, one or the other disk may end up being the bottleneck as far as sustained throughput (e.g. many gigabytes read sequentially) is concerned.
Oh, and don't use /dev/random or /dev/urandom as sources, since they're typically rather slow to output their random data.
With any given NIC / network, I'm usually already happy if the throughput I get is just above the theoretical max speed of the next slower generation or technology.
E.g. the theoretical max throughput of a 100 Mbit Ethernet NIC is 12.5 MB/sec (about 10-12 MB/sec in practice), so if that, or a bit more, is what I get out of Gigabit Ethernet I'm usually already content.
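The arithmetic behind that rule of thumb: divide the line rate by 8 to get the raw byte rate, then knock something off for framing and protocol overhead. The ~20% overhead figure below is a rough assumption, not a measured number.

```python
# Back-of-envelope "usable throughput" for a given link speed.
def usable_mb_per_sec(mbit, overhead=0.20):
    # line rate / 8 = raw MB/s; overhead fraction is an assumption
    return mbit / 8 * (1 - overhead)

for mbit in (100, 1000):
    print(f"{mbit} Mbit: {mbit / 8:.1f} MB/s raw, "
          f"~{usable_mb_per_sec(mbit):.1f} MB/s usable")
```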
Last edited by Swoopy; 02-12-2008 at 06:03 AM.
I wish I could always use NFS (I can't believe I omitted it from my list above!), but I've found that a couple of NASes don't support it. On top of that, Windows folks aren't going to use NFS.
Originally Posted by Swoopy
Perhaps a benchmark using both CIFS and NFS is appropriate. I think I've done this with most of my NAS reviews--I'll have to go back and check (I just woke up and am a little bleary-eyed because of 8 inches of snow in the past 6 hours).
Originally Posted by Rhettigan
I have been using Sun's uperf (GPL), which allows a number of clients to hammer a single server. Tests with small TCP packets (< 100 bytes with TCP_NODELAY) show what the system is capable of handling in terms of interrupt + DMA processing, and are representative of the limitations that appear when the system is not able to scale the network load out across a multi-core architecture.
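For anyone unfamiliar with the small-packet setup being described: TCP_NODELAY disables Nagle's algorithm, so sub-100-byte writes go out on the wire immediately instead of being coalesced. A minimal version of just that socket option (uperf itself drives this with many concurrent clients):

```python
# Create a TCP socket with Nagle's algorithm disabled, so small writes
# are sent immediately rather than batched.
import socket

def nodelay_socket():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s

s = nodelay_socket()
print("TCP_NODELAY set:",
      bool(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)))
s.close()
```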
A lot is happening right now on Linux in this area in the latest kernels, in combination with the latest multi-queue NICs from Intel (based on the 82576).
So for these tests, what are you wanting to actually test?
It seems like you are talking about a "system" level test, i.e. you have a complete (client/server) system deployed and you are moving parts around to get an idea of the system's performance.
Are you looking at throughput of the actual OS/NIC/PHY against a standardized host?
As you imply, the performance tests you are doing exercise multiple parts of the system, and in a lot of cases that would make it *very* difficult to keep the variables separate enough to make assertions about slower or faster parts.
It would be great to see a client/server based test architecture available with PTS.
You can do this kind of thing if you pick a "baseline" configuration and then work from that. Back in the day we used to use the crummy NE2000 cards as a baseline, so we could say "my XXX card is 42.4% faster than an NE2000" and that was something that we could all work with. Today we could pick a Realtek RTL8169 (or something similar) because it comes on super-cheap cards that anyone can afford to purchase to use as a baseline.
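A "42.4% faster than an NE2000" figure is just the percentage speedup of a measured throughput over the fixed baseline card. The throughput numbers below are hypothetical, for illustration only:

```python
# Percentage speedup of a measured throughput over a baseline, the way
# "X% faster than an NE2000" style numbers are computed.
def percent_faster(measured, baseline):
    return (measured - baseline) / baseline * 100.0

# hypothetical numbers: card under test vs. NE2000 baseline, same units
print(f"{percent_faster(1.424, 1.0):.1f}% faster than baseline")
```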
Originally Posted by Wuppermann
What might also be quite nifty is to have a bake-off. Give people a budget and say "make a computer whose total cost is less than $XXX and enter it into our contest." The competition is to build the fastest possible file server within the price constraint. Of course tweaking counts as part of the competition and people will publish and discuss their tweaks. This brings out the creative spirit and we will collectively come up with some pretty nifty solutions.