AMD Shanghai Opteron: Linux vs. OpenSolaris Benchmarks


  • #91
    Originally posted by kebabbert View Post
    What do you think about several theorems that each say almost the same thing, versus one big theorem that covers all cases? Which do you prefer: several theorems that are used depending on the situation, or one theorem that is always used?
    I would say model, not theorem. I understand how things work in terms of a mental model that's like a simple simulator that I predict their behaviour with. I think that's what you're trying to get at. I don't quite know what "solve" is an analogy for when it comes to understanding operating systems, though.

    Different size systems have different bottlenecks, and sometimes you only have to worry about a few of them. I'm trying to think back to things I've done that required understanding kernel behaviour. In the cases I'm thinking of, it's always been some small aspect of the kernel that mattered for what I was doing. So I'd say I have different models of the different parts of the system. If I needed to, I could think about how those pieces fit together to understand how Linux as a whole works (when running on hardware I know enough about, which for me these days is just AMD64 PCs. I have a vague idea of how highmem on ia32 works, but it sounded horrible, so I put it on my list of good reasons to use amd64 and wish ia32 would curl up and die.)

    On smaller machines, you have to know, e.g., whether grep -r on a big source tree starts to make your desktop swap out, so the programs you have open are slow while they page back in again. (The answer is yes if vm.swappiness is set to 60 (a common default), so set it to more like 20 or 30 if you run e.g. bittorrent on a desktop.) http://www.sabi.co.uk/blog/ has some good comments about Linux's VM, about GNU/Linux desktops being written by well-funded devs on their big fancy machines, and about how GNU/Linux has serious weaknesses on memory-constrained systems. If you have plenty of RAM (relative to what you're doing), you don't have to care about a lot of VM behaviour.
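
    If you want to poke at that knob yourself, here's a minimal sketch, assuming a Linux box and root; the value 25 is just an example (same effect as sysctl vm.swappiness=25):

    Code:
    /* swappiness.c - lower vm.swappiness by writing the procfs tunable
     * directly. Linux-specific; build with: gcc swappiness.c -o swappiness */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/vm/swappiness", "w");
        if (!f) {
            perror("fopen /proc/sys/vm/swappiness (are you root?)");
            return 1;
        }
        fprintf(f, "25\n");
        return fclose(f) ? 1 : 0;
    }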

    Another aspect of Linux that I remember figuring out was when I wanted to run cycle-accurate benchmarks of a routine I was optimizing. (http://gmplib.org/list-archives/gmp-...ch/000789.html) I ran my benchmark loop at realtime priority, so it would have 100% of a CPU core. When Linux scheduled it on the same CPU that handled keyboard interrupts, it froze the system until it was done. I found out that Linux (on my system; check your own /proc/interrupts) handles all interrupts on CPU0, and that you can give a process all of a CPU without breaking your system by using taskset 2 chrt to put it on the other CPU core. So for this I had to think only about how Linux's scheduler and interrupt handling worked (on an amd64 core2duo). I didn't have to think about the network stack, the VM, the VFS, or much else.
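
    For reference, here's roughly what that taskset 2 chrt invocation does, done from inside the process instead; a sketch assuming a 2-core Linux/amd64 box (SCHED_FIFO needs root):

    Code:
    /* pin_rt.c - pin ourselves to CPU 1 (affinity mask 0x2) and switch to
     * SCHED_FIFO at top priority, leaving CPU0 free to service interrupts.
     * Build: gcc pin_rt.c -o pin_rt */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(1, &set);                     /* CPU 1 only */
        if (sched_setaffinity(0, sizeof set, &set) != 0)
            perror("sched_setaffinity");

        struct sched_param sp = { .sched_priority = 99 };
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler (are you root?)");

        /* ... the cycle-accurate timing loop would go here ... */
        return 0;
    }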

    Another time I was curious how Linux decides whether to send a TCP ACK by itself, or let a data packet ACK receipt of packets coming the other way on a TCP stream with data flowing in both directions. I never dug in enough to find out why it decides to sometimes send empty ACK packets, but not always. This behaviour isn't (AFAIK) connected to the scheduler, VM, or much outside the network stack.
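
    If anyone wants to experiment with that, Linux does expose a per-socket knob, TCP_QUICKACK, which temporarily disables delayed ACKs (the kernel resets it internally, so it has to be re-armed around reads). A minimal sketch, not a full client:

    Code:
    /* quickack.c - toggle Linux's delayed-ACK heuristic on a socket. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>   /* TCP_QUICKACK (Linux-specific) */
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;

        if (fd < 0) {
            perror("socket");
            return 1;
        }
        /* Ask the kernel to ACK immediately instead of waiting to
         * piggyback the ACK on outgoing data. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof one) != 0)
            perror("setsockopt TCP_QUICKACK");
        return 0;
    }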

    So a lot of the things that are coming to mind that I've wanted to know about Linux have been possible to understand in isolation. Which sounds to me like your "multiple theories". But if by situation you mean workload and machine type, not what part of the kernel you're trying to understand, then I think I tend to understand things in enough detail that those things would be parameters in my mental model, so it's really the same model over all conditions for whatever part of the kernel I'm trying to grok.

    I operate by delving into the details. I love details. (and, since I have ADHD, I have a hard time not getting wrapped up in details, as everyone can probably tell from my posts!) At the level of detail I like to understand, a complete theory of how Linux behaves on a whole range of hardware would be more than I could keep in my head.

    This is maybe why I never saw eye to eye with you on your wish for a complete theory that you could just remember, one that would tell you everything about how Linux worked. I didn't really say anything before, because I couldn't think of a polite way to say that it didn't make any sense to me.



    • #92
      Originally posted by kraftman View Post
      Famous troll is back.
      Yeah, nice to meet you. :->

      Originally posted by kraftman View Post
      Linux is using mutexes. You want me to believe that Linux drivers are the most important things in HPC and other areas? Does FreeBSD or OpenBSD use mutexes? I've seen your trolling for years on some portals :> Why the hell did OpenSolaris hang for about 10 seconds when I clicked on the Firefox icon (in Sun's vbox, while other systems work like a charm in it)? It seems in this case mutexes aren't helpful. Can you give me some proof?

      2006:




      What I was talking about was synchronisation in the kernel. What you're talking about above is synchronisation between userland threads. Two completely unrelated things. The fact that Linux uses spinlocks is one of the reasons that its performance drops noticeably under high load on many CPUs. Other operating systems use fully functional mutexes, along with interrupt threads.
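
      To make the distinction concrete, here's a userland illustration (plain pthreads, NOT kernel code): a spinlock waiter burns CPU in a busy loop, while a mutex waiter can be put to sleep. Compile with gcc -pthread and watch CPU usage under contention:

      Code:
      /* spin_vs_mutex.c - one contended counter protected two ways. */
      #include <pthread.h>
      #include <stdio.h>

      #define THREADS 4
      #define ITERS   1000000

      static pthread_spinlock_t spin;
      static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
      static long counter;

      static void *worker(void *arg)
      {
          (void)arg;
          for (int i = 0; i < ITERS; i++) {
              pthread_spin_lock(&spin);      /* waiters busy-loop here */
              counter++;
              pthread_spin_unlock(&spin);

              pthread_mutex_lock(&mtx);      /* waiters may sleep here */
              counter++;
              pthread_mutex_unlock(&mtx);
          }
          return NULL;
      }

      int main(void)
      {
          pthread_t t[THREADS];
          pthread_spin_init(&spin, PTHREAD_PROCESS_PRIVATE);
          for (int i = 0; i < THREADS; i++)
              pthread_create(&t[i], NULL, worker, NULL);
          for (int i = 0; i < THREADS; i++)
              pthread_join(t[i], NULL);
          printf("counter = %ld (expect %d)\n", counter, 2 * THREADS * ITERS);
          return 0;
      }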



      • #93
        Originally posted by trasz View Post
        Yeah, nice to meet you. :->
        Nice to meet you too

        What I was talking about was synchronisation in the kernel. What you're talking about above is synchronisation between userland threads. Two completely unrelated things. The fact that Linux uses spinlocks is one of the reasons that its performance drops noticeably under high load on many CPUs. Other operating systems use fully functional mutexes, along with interrupt threads.
        I think they converted spinlocks to mutexes even in this area:




        Is what you said based on some articles, or did you spend a while searching lkml? :> Aren't you talking about a problem with a crappy malloc library?

        This article mentions pthreads mutexes (it's from 2000) and RTLinux:



        Mutexes are important for RT, aren't they?
        Last edited by kraftman; 16 February 2009, 06:50 PM.



        • #94
          Originally posted by llama View Post
          I would say model, not theorem. I understand how things work in terms of a mental model that's like a simple simulator that I predict their behaviour with. I think that's what you're trying to get at. I don't quite know what "solve" is an analogy for when it comes to understanding operating systems, though.

          Different size systems have different bottlenecks, and sometimes you only have to worry about a few of them. I'm trying to think back to things I've done that required understanding kernel behaviour. In the cases I'm thinking of, it's always been some small aspect of the kernel that mattered for what I was doing. So I'd say I have different models of the different parts of the system. If I needed to, I could think about how those pieces fit together to understand how Linux as a whole works (when running on hardware I know enough about, which for me these days is just AMD64 PCs. I have a vague idea of how highmem on ia32 works, but it sounded horrible, so I put it on my list of good reasons to use amd64 and wish ia32 would curl up and die.)

          On smaller machines, you have to know, e.g., whether grep -r on a big source tree starts to make your desktop swap out, so the programs you have open are slow while they page back in again. (The answer is yes if vm.swappiness is set to 60 (a common default), so set it to more like 20 or 30 if you run e.g. bittorrent on a desktop.) http://www.sabi.co.uk/blog/ has some good comments about Linux's VM, about GNU/Linux desktops being written by well-funded devs on their big fancy machines, and about how GNU/Linux has serious weaknesses on memory-constrained systems. If you have plenty of RAM (relative to what you're doing), you don't have to care about a lot of VM behaviour.

          Another aspect of Linux that I remember figuring out was when I wanted to run cycle-accurate benchmarks of a routine I was optimizing. (http://gmplib.org/list-archives/gmp-...ch/000789.html) I ran my benchmark loop at realtime priority, so it would have 100% of a CPU core. When Linux scheduled it on the same CPU that handled keyboard interrupts, it froze the system until it was done. I found out that Linux (on my system; check your own /proc/interrupts) handles all interrupts on CPU0, and that you can give a process all of a CPU without breaking your system by using taskset 2 chrt to put it on the other CPU core. So for this I had to think only about how Linux's scheduler and interrupt handling worked (on an amd64 core2duo). I didn't have to think about the network stack, the VM, the VFS, or much else.

          Another time I was curious how Linux decides whether to send a TCP ACK by itself, or let a data packet ACK receipt of packets coming the other way on a TCP stream with data flowing in both directions. I never dug in enough to find out why it decides to sometimes send empty ACK packets, but not always. This behaviour isn't (AFAIK) connected to the scheduler, VM, or much outside the network stack.

          So a lot of the things that are coming to mind that I've wanted to know about Linux have been possible to understand in isolation. Which sounds to me like your "multiple theories". But if by situation you mean workload and machine type, not what part of the kernel you're trying to understand, then I think I tend to understand things in enough detail that those things would be parameters in my mental model, so it's really the same model over all conditions for whatever part of the kernel I'm trying to grok.

          I operate by delving into the details. I love details. (and, since I have ADHD, I have a hard time not getting wrapped up in details, as everyone can probably tell from my posts!) At the level of detail I like to understand, a complete theory of how Linux behaves on a whole range of hardware would be more than I could keep in my head.

          This is maybe why I never saw eye to eye with you on your wish for a complete theory that you could just remember, one that would tell you everything about how Linux worked. I didn't really say anything before, because I couldn't think of a polite way to say that it didn't make any sense to me.
          Details are important. Yes, that is true. I have two Master's degrees, one in Math and one in Computer Science (algorithm theory). All this math has taught me that if you have several theorems that behave almost the same, you can abstract them into one theorem. If you cannot, then that theory is inferior and needs to be reworked into something more general. Maybe that is the reason I prefer one Solaris kernel over 42 different Linux kernels, chosen depending on the task you are trying to solve. You know, different tools for different tasks is NOT scalability. You can never claim the Linux kernel is scalable when you need to use different versions of it. The Solaris install DVD is the same no matter which machine. THAT is scalability. It is not something we have to agree or disagree on; it is a fact. Solaris is scalable, Linux is not. Otherwise I could equally say "C64 is scalable"; I just have to modify it. That is simply stupid to say. It is nothing to agree upon or not; it is stupid to say so.

          But you certainly haven't studied much math, so you don't understand what I am talking about, or why I emphasize it all the time. "But I couldn't think of a polite way of saying that." If you want to get sticky, we can.
          Last edited by kebabbert; 17 February 2009, 07:24 AM.



          • #95
            Originally posted by kebabbert View Post
            Maybe that is the reason I prefer one Solaris kernel over 42 different Linux kernels, chosen depending on the task you are trying to solve.
            In what? This is scalability:




            Solaris/OpenSolaris isn't scalable, and it chokes even on desktop computers (anyone tried to run it on a watch? xd). Maybe that is the reason they call it slowlaris.



            • #96
              Originally posted by kraftman View Post
              In what? This is scalability:




              Solaris/OpenSolaris isn't scalable, and it chokes even on desktop computers (anyone tried to run it on a watch? xd). Maybe that is the reason they call it slowlaris.
              Solaris choking on desktop computers: are you referring to when you tried Solaris in VirtualBox and it paused for 10 secs? You know, in my humble opinion, it is wrong to draw the conclusion you do: that Solaris is slow. I have never had Solaris pause for 10 secs, and I've run Solaris for 10 years or so. I've also never seen Solaris crash. Never seen it happen. The pausing could have happened because of VirtualBox; you never thought of that, did you? You know, VirtualBox is not the most stable product. Especially with Solaris involved, VB doesn't work too well. On my laptop, OpenSolaris as a guest in VB takes 3-4 minutes to move the mouse pointer one inch. I should not say "Solaris is dog slow". If I did, it would be wrong.


              And your links, the "Linux definition of scalability": I don't agree with them. So you agree that C64 is scalable, right? I can run it on anything from wristwatches to supercomputers; I just have to reprogram the whole kernel for each new machine. If you ask me, that is not scalability. C64 is not scalable.



              • #97
                Originally posted by kebabbert View Post
                Solaris choking on desktop computers: are you referring to when you tried Solaris in VirtualBox and it paused for 10 secs? You know, in my humble opinion, it is wrong to draw the conclusion you do: that Solaris is slow. I have never had Solaris pause for 10 secs, and I've run Solaris for 10 years or so. I've also never seen Solaris crash. Never seen it happen. The pausing could have happened because of VirtualBox; you never thought of that, did you? You know, VirtualBox is not the most stable product. Especially with Solaris involved, VB doesn't work too well. On my laptop, OpenSolaris as a guest in VB takes 3-4 minutes to move the mouse pointer one inch. I should not say "Solaris is dog slow". If I did, it would be wrong.


                And your links, the "Linux definition of scalability": I don't agree with them. So you agree that C64 is scalable, right? I can run it on anything from wristwatches to supercomputers; I just have to reprogram the whole kernel for each new machine. If you ask me, that is not scalability. C64 is not scalable.
                Any system I tried worked perfectly in vbox, but there's another reason why I said so :> It seems that some of you are trying to move the "battlefield" away from Solaris. You can run C64 on supercomputers under an emulator, not natively (and C64 isn't comparable to any modern OS). The point is that Linux runs natively. Btw, what I see there is at least two definitions of scalability, and probably both are correct.

                P.S. GNU/Solaris (OpenSolaris) is quite interesting, but I don't understand why Sun, the OpenSolaris makers, don't compile it with the recommended flags. They should release an optimized x86_64 version, in my opinion. Phoronix wouldn't be cheating so much then.



                • #98
                  No. This was about replacing so-called 'semaphores' (actually, Linux's implementation of semaphores) with so-called 'mutexes'. Spinlocks are still the fundamental synchronisation mechanism.

                  Originally posted by kraftman View Post
                  Is what you said based on some articles, or did you spend a while searching lkml? :> Aren't you talking about a problem with a crappy malloc library?

                  This article mentions pthreads mutexes (it's from 2000) and RTLinux:



                  Mutexes are important for RT, aren't they?
                  Again, it's about _userland_ (pthreads) mutexes, which are completely unrelated to kernel synchronisation.



                  • #99
                    Originally posted by kraftman View Post
                    Any system I tried worked perfectly in vbox, but there's another reason why I said so :> It seems that some of you are trying to move the "battlefield" away from Solaris. You can run C64 on supercomputers under an emulator, not natively (and C64 isn't comparable to any modern OS). The point is that Linux runs natively. Btw, what I see there is at least two definitions of scalability, and probably both are correct.

                    P.S. GNU/Solaris (OpenSolaris) is quite interesting, but I don't understand why Sun, the OpenSolaris makers, don't compile it with the recommended flags. They should release an optimized x86_64 version, in my opinion. Phoronix wouldn't be cheating so much then.
                    No, I am not trying to move the battlefield. I am only saying: give me any OS, and I claim it is possible to reprogram it so it runs on whatever machine you want. For instance, I could heavily reprogram C64 to run on a large cluster. According to you, C64 would then be scalable, because it runs natively on large clusters. I don't agree with your definition; that is not scalability.

                    And of course Linux guys find GNU/OpenSolaris interesting. I know Solaris guys that hate OpenSolaris. "It is not Solaris anymore," they say. It is GNU with a Solaris kernel. Of course Linux guys like Ian Murdock and his Debian, so they probably find OpenSolaris easier to like than Solaris. I personally think OpenSolaris is more Linux than Solaris; it has more of a Linux userland than a Solaris one. I am sceptical of OpenSolaris. Actually, I've never really tried OpenSolaris. I've installed it in VB, but it took 5 min to move the mouse, so I just shut it down. I myself prefer Solaris.

                    And I don't understand what you mean by saying they "should have OpenSolaris 64 bits". Solaris has been 64-bit for many years; upon install, it chooses between 32-bit and 64-bit automatically.





                    Trasz, do you have any proof and links? Then maybe you could settle this discussion once and for all.



                    • Originally posted by kraftman View Post
                      P.S. GNU/Solaris (OpenSolaris) is quite interesting, but I don't understand why Sun, the OpenSolaris makers, don't compile it with the recommended flags. They should release an optimized x86_64 version, in my opinion. Phoronix wouldn't be cheating so much then.
                      Gee, not again! kraftman, the Phoronix test suite *COMPILES THE SOFTWARE ON ITS OWN* (with an outdated gcc compiler on OpenSolaris); the binaries that come with OpenSolaris are just fine, don't worry. So once again, the gripe was about the test suite generating unoptimized binaries for OpenSolaris: 32-bit, without OpenMP support, etc., while at the same time generating optimized 64-bit binaries on Linux, with OpenMP support. Does that sound fair to you?
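
                      To see why those flags matter, here's a toy illustration; with -fopenmp the pragma parallelises the loop, and without it the pragma is silently ignored (the file name and build lines are just for illustration):

                      Code:
                      /* omp_demo.c
                       * gcc -m32 -O2 omp_demo.c -o demo32            (pragma ignored)
                       * gcc -m64 -O3 -fopenmp omp_demo.c -o demo64   (loop runs in parallel) */
                      #include <stdio.h>

                      int main(void)
                      {
                          double sum = 0.0;
                          #pragma omp parallel for reduction(+:sum)
                          for (int i = 1; i <= 100000000; i++)
                              sum += 1.0 / i;
                          printf("harmonic sum = %f\n", sum);
                          return 0;
                      }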
                      Last edited by etacarinae; 18 February 2009, 10:41 AM.

