AMD Shanghai Opteron: Linux vs. OpenSolaris Benchmarks


  • #61
    Originally posted by kebabbert View Post
    http://www.phoronix.com/forums/showt...t=14017&page=5

    "It is possible some of the performance differences are due to the
    gcc version being used as Solaris bundles gcc 3.4.3 and other distros
    may bundle 4.x, but it is much more likely due to the default ABI
    used by the bundled gcc. "gcc -O" on Solaris will default to ia32/x87,
    whereas on the other "64 bit" distros tested it will default to amd64.
    The performance difference can be seen in the two Byte Computational
    benchmarks on page 7 where Solaris appears to lag: Dhrystone 2
    (./Run dhry2) and Floating-Point Arithmetic (./Run float). These
    tests are compiled with "gcc -O" which produces ia32/x87 code.
    When adding "-m64" which puts Solaris on par with the other distros,
    the performance jumps quite a bit. Measured on an Intel QX6700,
    dhry2 goes from 9307771.4 to 13421763.5 and float goes from
    707932.5 to 1477185.6.
    The ABI used can make a big difference. Solaris allows you to
    choose either, but the default for the bundled gcc is still the
    slower ia32/x87."
    Those are not results, just a Sun guy's talk. It would be great if some of you guys would do the benchmarks yourselves: same hardware, same compilers, same flags, etc. I can do tests, but only in VirtualBox. As I wrote in my first post in this thread, I don't consider this Phoronix benchmark fair.
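
    If anyone wants to check which ABI a given gcc actually targets before re-running these, a sketch like the following would do (my own example; the file name and output text are only illustrative):
    Code:
    /* abi_check.c - print the pointer size the compiler targeted.
     * Build it twice and compare, e.g.:
     *   gcc -O abi_check.c -o abi_default     (Solaris' bundled gcc: ia32/x87)
     *   gcc -O -m64 abi_check.c -o abi_64     (forces the amd64 ABI)
     */
    #include <stdio.h>

    int main(void)
    {
        printf("sizeof(void *) = %lu -> %s\n",
               (unsigned long)sizeof(void *),
               sizeof(void *) == 8 ? "64-bit (amd64) ABI" : "32-bit (ia32) ABI");
        return 0;
    }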


    I doubt that Linux scales well. I've seen numerous articles where companies say that when their workload increases beyond a certain point, Linux doesn't cut it anymore. And then they switch to Solaris on the same hardware, and everything is fine. I've posted several links showing this in my posts, at my link above.
    It reminds me of "Solaris is a better Linux than Linux" :> I just don't believe them, and as I mentioned already, they could be using this crappy malloc library.

    And, in your link, that moron brags about Linux v2.6 being able to handle 16 CPUs. I assume he is talking about the standard Linux kernel. At the same time he talks about large Linux clusters, but fails to mention that they run modified Linux kernels.
    If a modified Linux kernel scales better than a modified/unmodified Solaris kernel... I bet you can download this modified kernel from somewhere. Btw, what are those modifications? Modified settings, or code?



    If Solaris scales well on hundreds of CPUs and many more threads, it is not something that affects an ordinary user, right? So normal people who are using dual CPUs or so don't notice bad Linux scaling. It is in companies, when their workload increases, that they have to switch to a real Unix. In my link above, I posted an interview with Linux kernel hacker Andrew Morton, who claims that Linux code is full of bugs and regressions. Read the interview if you don't believe me.
    Right, normal people don't care too much about scaling. Do you think companies are a bunch of idiots who don't know how to properly set up a Linux system for their needs? :> Andrew is objective. He's not a liar like the others who claim their system is 15% better at something when it looks like it isn't, or that there are NO bugs, NO regressions, that it's the most secure, etc. I don't expect such objectivity from Sun guys - "Solaris is a better Linux"... Only marketing crap; they won't tell you a word about bugs or performance problems (at least not as candidly as Linux devs do).

    And if Linux scales better than Windows, why do people prefer using Windows?
    I don't really like to play such games (well, maybe sometimes ). As you already wrote, normal users don't notice bad scaling. MS has a monopoly; Windows is the first thing people see after launching their computers for the first time, etc. etc. etc.



    • #62
      Originally posted by kraftman View Post
      Those are not results, just a Sun guy's talk. It would be great if some of you guys would do the benchmarks yourselves: same hardware, same compilers, same flags, etc. I can do tests, but only in VirtualBox. As I wrote in my first post in this thread, I don't consider this Phoronix benchmark fair.
      Well, there are some people here who have tried compiling for 64-bit, and they got improvements. Ecaterine got a 12% improvement. This Sun guy got 80% and 100% improvements. Common sense and these numbers suggest that 64-bit will increase performance.



      Originally posted by kraftman View Post
      If a modified Linux kernel scales better than a modified/unmodified Solaris kernel... I bet you can download this modified kernel from somewhere. Btw, what are those modifications? Modified settings, or code?
      You know, if Sun wanted to, they could also tailor the Solaris kernel to big clusters. Google uses modified Linux kernels; they run Linux at low utilisation, but on many more PCs instead. As I said, Linux is a simple kernel, quite unsophisticated, so it's easy to rip out code that isn't necessary. For instance, do you think Google has drivers for webcams in their Linux kernels? No; Google has lots of skilled coders. On the other hand, Solaris is a mature kernel with sophisticated, complex code. The Solaris kernel is not that easy for a random hacker to modify. If you are going to modify a kernel, it is better to modify a simple kernel such as Linux.




      Originally posted by kraftman View Post
      Right, normal people don't care too much about scaling. Do you think companies are a bunch of idiots who don't know how to properly set up a Linux system for their needs? :> Andrew is objective. He's not a liar like the others who claim their system is 15% better at something when it looks like it isn't, or that there are NO bugs, NO regressions, that it's the most secure, etc. I don't expect such objectivity from Sun guys - "Solaris is a better Linux"... Only marketing crap; they won't tell you a word about bugs or performance problems (at least not as candidly as Linux devs do).
      You know, OpenSolaris is... OPEN. That means there are lots of bug reports etc. on the official forums, and the discussions are open. You can read about the bugs yourself if you think Sun is trying to hide bugs in Solaris.

      Kernighan and Ritchie, the programming gurus, studied the Linux source code and said that the quality was low. It was not good code.

      You know, Linus Torvalds sees Linux kernel development as biological evolution, he has explained. If there are problems, he will redesign that part. This way the kernel evolves and becomes better and better.

      I don't agree with that. If you redesign everything all the time, then you get... Windows. You know, each new version of Windows sucks, is error-prone, and full of bugs. After a few Service Packs, Windows becomes stable. Linus redesigns big parts all the time, and this causes all these regressions Andrew talks about. The Linux kernel is over 10 million lines of code. That is quite sick. The whole of Windows NT was 10 million LOC. Now you have ONE kernel that big. It must be impossible to get that huge monolithic piece of code stable, just as Andrew explains.

      As I said, several startup companies are using Linux, and everything is fine. But sooner or later their workload increases well beyond what those people are used to, and they venture into the realms of the Enterprise. Then they notice that Linux doesn't cut it anymore.

      Whatever. I am not going to argue with you over this matter anymore. If you prefer Linux and I prefer Solaris, then we both are happy.



      PS. I am interested in Sun's new machines. Rumours say they will have 2048 threads. I think lots of OSes will have problems handling 2048 threads.
      Last edited by kebabbert; 11 February 2009, 09:36 AM.



      • #63
        Originally posted by Kano View Post
        Some of the benchmarks are just a bit pointless. ... Using compile benchmarks between OSes is like comparing apples to pears.
        It's unconventional to bench whole different systems like that, but I think it has its uses. Most Ubuntu users will use the defaults for everything, so the stuff they compile will compile as fast as the benchmarks show. If the OpenSolaris benchmarks were done with the compilers that most people actually use, you would be able to compare the end-user experience of how long you have to wait for things to compile with optimization on.

        Sometimes you want to know whether you like apples or whether you like pears, not which kind of apples you like best. Err, probably nobody will choose a platform because the default compiler is fast, but it's still interesting to do things that compare how all the differences between compiling and running something on Ubuntu vs. OpenSolaris stack up. Or it would have been if the non-Java stuff had been compiled with a useful compiler.



        • #64
          Originally posted by kebabbert View Post
          To me it seems that Linux people think that scalability is about 4-8 CPUs. The Solaris people talk about hundreds of CPUs. With the very same kernel, unmodified. Now THAT is scalability.
          It used to be news for Linux to scale up to 4-8 CPUs. That was years ago. Linux started as a kernel for uniprocessor home desktops like a 386. It wasn't until many kernel devs got jobs with companies that gave them nice SMP desktops that Linux really started to be tuned more for big machines with lots of RAM than to make more efficient use of a smaller machine. (This was before multi-core CPUs were common, and even more so now that many of them have multi-core, multi-socket machines.) Now that a lot of kernel devs actually have 8-core machines, a lot of the algorithms in the kernel (e.g. the scheduler) have been replaced with code that scales well (well beyond 16 cores, if I understand the situation correctly). So don't make too much of stuff from a few years ago talking about how it's so awesome that Linux now scales to 8 cores. That was news at the time, but once the kernel devs were all working on scaling to bigger machines, they made significant progress. (Honestly, I don't know how good Linux is now on big iron. I've never adminned or benchmarked anything more than dual-socket quad-core.)

          I don't get it. What is wrong with the Linux people? ... I really, really wonder, how can the Linux people call this fair?
          People are people. The not-so-clever ones who don't know one or more of statistics, computer architecture, operating system design, or basic Unix sometimes still use Linux and spout off about it. (And yes, I am going to stick my neck out and claim I know what I'm doing in those areas.) You're right that Linux seems to inspire maybe too much advocacy zeal in people, and you end up seeing a lot of not-well-informed stuff about Linux, more so than about any other OS. Probably even more than Windows, since people who don't know a lot about Windows or other OSes don't have the impression they're supposed to tell everyone how great it is.

          So, just because lots of people say dumb stuff doesn't mean that all Linux people agree with them. Some of the examples you give are pretty embarrassing.

          edit: I forgot to say anything about different kernel binaries for server vs. desktop. Linux doesn't have different implementations of anything important (scheduler, VM, networking) for small vs. large machines, but Linux is designed to be configured. Linux has always been open source, so it's not surprising that it's well set up to be compiled by the end user according to their customized config. e.g. if you compile for a uniprocessor machine, some of the lock/unlock functions become macros that expand to do { } while(0). Even if OpenSolaris is only designed to have a single kernel binary for all systems, I don't think it's fair to hold Linux to that standard. But like I said, there aren't kernel config options that select many-core tuned versions of algorithms vs. few-core tuned versions. There are kernel config options for some specific big-iron hardware, i.e. amd64 kernels can be built for standard PC-compatible (CONFIG_X86_PC) or ScaleMP vSMP systems (CONFIG_X86_VSMP), whatever those are. ia32 kernels have config options for a few different kinds of older big-iron hardware.
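
          To make the do { } while(0) point concrete, here is a toy sketch of the idea (my own simplified code, not the real kernel headers; CONFIG_SMP is the only real kernel symbol in it):
          Code:
          /* Toy sketch: an SMP build gets a real busy-wait lock, a uniprocessor
           * (UP) build compiles the lock calls away entirely. */
          typedef struct { volatile int locked; } toy_spinlock_t;

          #ifdef CONFIG_SMP
          static inline void toy_spin_lock(toy_spinlock_t *l)
          {
              while (__sync_lock_test_and_set(&l->locked, 1))
                  ;                     /* spin until another CPU releases it */
          }
          static inline void toy_spin_unlock(toy_spinlock_t *l)
          {
              __sync_lock_release(&l->locked);
          }
          #else
          /* UP: there is no other CPU to race with, so locking is a no-op */
          #define toy_spin_lock(l)      do { } while (0)
          #define toy_spin_unlock(l)    do { } while (0)
          #endif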

          These days, Ubuntu has a -generic kernel and a -server kernel. -generic, which is tuned for desktops, uses voluntary kernel preemption of kernel threads for lower latency but worse throughput. Before tickless kernels, it used a timer interrupt rate of HZ=250. The -server kernel doesn't preempt the kernel, and used HZ=100 before Linux went tickless. The only other difference is defaulting to the deadline iosched instead of cfq. (I'm looking at debian/configs/amd64/config.server and config.generic in Ubuntu's kernel git tree.)
          There are a lot more differences between the i386 generic and server kernels, e.g. -server uses PAE to handle up to 64GB on a 32-bit machine, and is built with CONFIG_M686=y instead of CONFIG_M586=y. And it builds in more Xen stuff...
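
          If you want to check which iosched a running kernel actually ended up with, a quick look at sysfs does it; here's a trivial snippet (my own; sda is just an example device):
          Code:
          /* iosched_check.c - print the I/O scheduler list for one block device.
           * The bracketed entry is the active one, e.g.
           * "noop anticipatory deadline [cfq]". */
          #include <stdio.h>

          int main(void)
          {
              char line[256];
              FILE *f = fopen("/sys/block/sda/queue/scheduler", "r");

              if (!f) {
                  perror("fopen");
                  return 1;
              }
              if (fgets(line, sizeof line, f))
                  fputs(line, stdout);
              fclose(f);
              return 0;
          }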

          server vs. non-server seems like a good place to have different kernels. There is even a different installer for Ubuntu's server flavour (which installs a different set of packages by default).

          Anyway, Linux has gained some features which make one-binary-everywhere work better, since that's what distros want to ship to avoid confusing newbies. e.g. on a uniprocessor machine, the kernel will, at runtime, patch the x86 lock prefix in all the inline spinlocks to a nop, so the spinlocks just increment/decrement a counter without serializing the CPU. It can even patch it back if a CPU is hot-plugged later. (This is most likely to happen inside a VM, I imagine.)
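
          Roughly what that patching buys you, as a simplified illustration (my own example, not kernel code; gcc-style x86 inline asm):
          Code:
          /* The SMP flavour of an atomic increment carries the x86 "lock" prefix;
           * the UP flavour is the same instruction without it.  The boot-time
           * patching rewrites the former into the latter in place when only one
           * CPU is found, so a single binary still gets the cheap version. */
          static inline void atomic_inc_smp(volatile int *v)
          {
              __asm__ __volatile__("lock; incl %0" : "+m"(*v));
          }

          static inline void atomic_inc_up(volatile int *v)
          {
              __asm__ __volatile__("incl %0" : "+m"(*v));   /* no bus lock needed */
          }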


          So my point here is that scalability from a single code base is what counts, not from a single binary from that one code base. But it only counts if scalability-enabling alternate bits of code don't change the API for drivers and stuff. So I think it would be fair to have a few different algorithms in the VM or something that could be selected, but only if drivers that allocated memory didn't have to have two code paths for the different settings of the config option. That would be scalability at the cost of huge maintenance overhead for other code. In general, the kernel devs aren't keen on even that. AFAICT, they always choose more scalable algorithms and default settings of tuning params.


          Notice that I'm not claiming that Linux scales well up to 256 cores or anything like that. I just don't know. I think it does OK, but I have no experience to back that up. Scalability is definitely one of the ongoing development goals, though, so Linux still has plenty of room for improvement, to put it in a glass-half-full light. I've read a paper from 2006 on tuning XFS for a 24-CPU ia64 machine with hundreds of disks, so Linux does at least have a pretty scalable filesystem thanks to SGI: http://oss.sgi.com/projects/xfs/pape...2006-paper.pdf (or slides from the talk: http://oss.sgi.com/projects/xfs/pape...esentation.pdf). Other good links on some guy's web site: http://www.sabi.co.uk/Notes/linuxFS.html
          Last edited by Peter_Cordes; 11 February 2009, 11:14 AM.



          • #65
            OK, so I took the time and benchmarked several of the tests as they appear in the article. The benchmarks were done inside VirtualBox VMs running in Ubuntu 8.10. One VM was OpenSolaris x64 2008.11, the other was Ubuntu x64 8.10. All the OpenSolaris programs were compiled with flags taken from here.

            So here's what I have:
            Code:
                                       SOLARIS         LINUX
            
            lame                       23.17           23.02
            oggenc                     23.51           19.11
            GraphicsMagic - HWB        74              50
            GraphicsMagic - LAT        17              10
            GraphicsMagic - resize     47              33
            GraphicsMagic - sharpen    20              10
            As you can see, all the results are much better for OpenSolaris than in the Phoronix article. Strikingly, the trend in the GraphicsMagic tests is reversed compared to the article: Solaris not only doesn't suck, but wins with 1.5x up to 2.0x better results. One thing to note is that VirtualBox only allows one CPU core per VM, so the GraphicsMagic results might be different on more than one core.

            Overall, the article results seem to be completely detached from reality.
            Last edited by flice; 12 February 2009, 03:17 AM.



            • #66
              Originally posted by flice View Post
              OK, so I took the time and benchmarked several of the tests as they appear in the article. The benchmarks were done inside VirtualBox VMs running in Ubuntu 8.10. One VM was OpenSolaris x64 2008.11, the other was Ubuntu x64 8.10. All the OpenSolaris programs were compiled with flags taken from here.

              So here's what I have:
              Code:
                                         SOLARIS         LINUX
              
              lame                       23.17           23.02
              oggenc                     23.51           19.11
              GraphicsMagic - HWB        74              50
              GraphicsMagic - LAT        17              10
              GraphicsMagic - resize     47              33
              GraphicsMagic - sharpen    20              10
              As you can see, all the results are much better for OpenSolaris than in the Phoronix article. Strikingly, the trend in the GraphicsMagic tests is reversed compared to the article: Solaris not only doesn't suck, but wins with 1.5x up to 2.0x better results. One thing to note is that VirtualBox only allows one CPU core per VM, so the GraphicsMagic results might be different on more than one core.

              Overall, the article results seem to be completely detached from reality.

              Flice, I'm assuming you used SunCC for the above, correct? It would be interesting to see how gcc 3.4 behaves in your VM. At least then people could draw parallels between it and SunCC when it comes to the Phoronix Test Suite.

              Thanks for the results.



              • #67
                Dear Phoronix benchmarkers. I guess this is a futile attempt, but anyway:

                It's OK when you try to compare operating systems on workloads that don't really stress the operating system - e.g. by running programs that spend almost all of their CPU time in userland. Doing real benchmarks - with, say, PostgreSQL or Apache or Postfix - is much harder, and you're not being paid for this, so thanks for doing at least _some_ benchmark.

                _However_ - when you benchmark something that depends mostly on a compiler version and flags, and then say that operating system A using gcc version N performs better than operating system B using gcc version M, then this is no longer a benchmark - it's something between FUD and pure stupidity.



                • #68
                  Originally posted by stan View Post
                  The biggest disadvantage to multiple kernels is that the engineering effort is split. There's no reason why Sun can't channel their innovation through Linux (and they're doing that to a certain extent by hiring Yinghai Lu, formerly of AMD, to work on Linux). But as it stands, the CDDL is a roadblock to Linux developers incorporating that innovation.
                  Actually, the better way would be the other way around. The nice thing Sun could use from Linux is device drivers - and pretty much nothing more. In particular, the "core" in OpenSolaris is years ahead of Linux, which still uses an anachronistic synchronisation model based on spinlocks (which nobody uses anymore, except probably Windows - other operating systems are based on fully functional mutexes and interrupt threads), a VFS working backwards (file-based instead of vnode-based), etc.
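
                  For anyone who hasn't run into the distinction, here is a user-space analogy of what is being argued about (toy code assuming POSIX threads; kernel-side locking is more involved than this): a spinlock burns CPU while it waits, a mutex puts the waiter to sleep.
                  Code:
                  #include <pthread.h>

                  static pthread_spinlock_t spin;                            /* waiters busy-loop */
                  static pthread_mutex_t    mtx = PTHREAD_MUTEX_INITIALIZER; /* waiters sleep     */
                  static long counter;

                  void locks_init(void)
                  {
                      /* spinlocks need explicit initialisation, unlike the static mutex above */
                      pthread_spin_init(&spin, PTHREAD_PROCESS_PRIVATE);
                  }

                  void bump_with_spinlock(void)
                  {
                      pthread_spin_lock(&spin);    /* contenders keep spinning on the CPU here */
                      counter++;
                      pthread_spin_unlock(&spin);
                  }

                  void bump_with_mutex(void)
                  {
                      pthread_mutex_lock(&mtx);    /* contenders block and yield the CPU here */
                      counter++;
                      pthread_mutex_unlock(&mtx);
                  }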



                  • #69
                    Yes, I was using the Sun Studio compiler. Here are the results from gcc 3.4:

                    Code:
                    lame                       23.75
                    oggenc                     29.10
                    GraphicsMagic - HWB        40
                    GraphicsMagic - LAT        9
                    GraphicsMagic - resize     30
                    GraphicsMagic - sharpen    14
                    BTW, my machine is an Intel Core 2 Duo, not AMD.

                    Originally posted by Dubhthach View Post
                    Flice, I'm assuming you used SunCC for the above, correct? It would be interesting to see how gcc 3.4 behaves in your VM. At least then people could draw parallels between it and SunCC when it comes to the Phoronix Test Suite.

                    Thanks for the results.



                    • #70
                      llama

                      I think that it is non-trivial to scale big, up to big iron. I don't think that some random kernel hackers can easily do that in a few years. Solaris is almost 30 years old. At first it didn't scale well either; it was like Linux is now. After several years and experience, Sun totally redesigned the kernel, and now it scales well. It takes lots of experience and expertise to do that; otherwise all OSes would scale well. But they don't - scalability is a difficult subject. I think Linux is in a similar phase to the one Solaris was in during its first iterations. Solaris has been polished for many years. Even Windows with Service Pack 4000 would scale well and be rock solid, don't you think? And if Windows gets redesigned all the time, then it will only reach SP2 before a complete rewrite. Then you cannot get it stable. It is better to have a sound architecture and polish it well, to reach maturity.

                      Therefore I don't agree with you when you say that Linux scales well, when you (and the kernel hackers) only have experience of machines with up to 4 CPUs. How can you or the Linux people claim that, without knowing? How many Linux kernel hackers have experience of big iron? If no one has access to big iron for doing Linux kernel development, how can Linux scale well on big iron? As I said, the Linux folks mostly only have access to powerful desktop PCs and therefore target 4-8 CPUs, which is what they are used to. When the Linux people get access to big iron with hundreds of CPUs and many more threads, and after several years of experience, once they get used to big iron, I expect Linux to scale well on those machines. But not until then. Lots and lots of experience is needed to develop for big iron.

                      As I said, I don't agree with people saying "Linux scales well" if there are different Linux kernels for large clusters and for normal desktop PCs. Then you could also switch to a FreeBSD kernel for one specific task and to a Linux kernel for another task, just as you do now when switching between different Linux kernels for different tasks. That is clearly not "scalability", but rather "flexibility". Otherwise, what would Linux people call Solaris' ability to run the very same binaries from laptops to big iron? True Solaris scalability vs. false Linux scalability?

                      Regarding Linux server vs. non-server: there is only one version of Solaris, and it can function in both roles. True scalability again. And if you want, you can change the Solaris scheduler on the fly, at run time. Is that possible with Linux, or do you have to use a special, esoteric Linux version to allow that? Or do you have to recompile the kernel?



                      Regarding CDDL vs. GPL: the problem with the CDDL is that it allows licensing on a file-by-file basis. That is a big problem, according to Linux people. Apple, FreeBSD, and QNX people don't agree; they just lift the ZFS source code files into their OS without problems. They can use their code alongside CDDL files without problems - mixing of licenses is allowed. The GPL, on the other hand, doesn't allow mixing of licenses. Everything must be GPL, or it is not allowed. Some people think the GPL is quite an egocentric license.

