Linux hacker compares Solaris kernel code:

  • #71
    Originally posted by Sergio View Post
    Man, this 'BSDSucksDicks', aka 'LinuxAnalsSolaris', aka 'OpenSLOWlaris', aka 'kraftman' is so fckin hilarious! I feel real sorry for the guy, but then again, his mental illness is just too funny. Of course, because nobody takes this loser seriously, we can just agree and laugh at him.
    We can agree and laugh at you as well, because of your 'intelligent' and 'mature' replies. Let's have a competition over which idiot posts the funnier pictures. OpenSLOWlaris is winning at the moment, because his picture represents reality: Linux - and everyone else - can do whatever they want with the BSD code. However, I don't know if that is a reason to be ashamed, because as far as I know the BSD motto is: to serve others. Btw, is it enough to become a Phoronix member to get a green light for trolling without fear of being banned?

    Comment


    • #72
      Originally posted by kebabbert View Post
      Ok, I believe you. I will stop talking about that SMP and HPC stuff. Clearly my terminology is not really correct. I know what I want to say, but I don't say it correctly.

      I am trying to say that there are no "single, fat" Linux servers with 16 or 32 cpus for sale. This IS true. The problem I have is: what do I mean by a "single fat" Linux server? Clearly it cannot be translated to "SMP". I need to read a bit and then come back. But my point is true and has always been true: There are no "single fat" Linux servers for sale with 16 or 32 cpus. But there are Linux servers for sale with 1000s of cpus and 10s of TB RAM. It looks like this:

      Linux servers 1-8 cpus for sale. TRUE.
      Linux servers with 512 - 8192 cores for sale. TRUE. (These are clusters)
      Nothing in between. No 16 or 32 cpu Linux servers for sale. Why?

      Linux developer Ted Tso wrote in a blog that until recently, 24-32 cores (not cpus) were considered exotic hardware that no Linux dev had access to. So, how can Linux scale well on 24 cores, when no one can optimize for such servers?

      I need to define "single, fat" Linux server better. Clearly I cannot call them "SMP" servers. All large servers today with 32-64 cpus seem to mix NUMA and SMP tech. So I need to stop saying that. But my point remains: there are no Linux servers for sale with 16 cpus. Or 32 or 64. But there are 2048 core servers for sale (they must be clusters).
      1. I assume you are talking about "sockets/board", with the computers you are enquiring about having only one board.

      2. What exactly qualifies as a "Linux server"?
      -Obviously, you are including only systems where it is fully supported by the hw vendor and available with Linux installed; a reasonable move, though "fully supported by the distribution" would be a better criterion if you're only interested in the capabilities.
      -Does it have to be Linux as the default configuration, or the only option?
      Because HP has HP/UX and OpenVMS on anything Itanium, and IBM has AIX on anything POWER (except for a few budget configurations that require an open-source OS like Linux or the OpenSolaris POWER port), and SGI and Cray offer Windows HPC as well as Linux, and Oracle has Solaris, OEL, and "Oracle VM server" (Linux/Xen). I am not aware of any Linux-only vendors. But then, the only current UNIX-only vendor I know of is Apple.
      I'm trying to figure out what makes you exclude the Superdome, because I don't see the logic there. It's offered with RHEL (the industry standard), and using someone else's OS with your hardware is standard in the Linux world.

      3. Ted Tso is very much a desktop developer, and there are thousands of kernel contributors (I mean that literally). It is not reasonable to suppose that he knows about the configurations that SGI, Cray, and IBM kernel developers have access to.



      Each vendor has its own favorite approach to scaling. Oracle scales first by adding more cores per box. IBM scales primarily by boosting the clock and adding blades. SGI scales by adding racks/nodes, and likewise for Cray; they found in the 90's that the most space-efficient route was to put fewer sockets in a smaller system, then put more systems in.
      So you're asking why nobody uses Oracle-style "scale up" configurations with Linux. My guess is that everyone else figured it was a dead end: if I have an M5, I can go up to 32 sockets and that's it, while if I get a comparable multi-node system, I can add more nodes and get much farther.

      Why do you think that the kernel (NOW, not as it was in March 2006 when kernel 2.6.16 from SLES 10 was released) can't handle 8+ sockets when it has been used for 100-core chips like Tilera makes, and the XLPII (MIPS64, 4 threads/core, 20 cores, and somehow they can fit up to 8 cores per socket) is primarily used with Linux?

      Comment


      • #73
        Originally posted by kebabbert View Post
        This server is an HP server made for HP-UX. They have compiled Linux for it, and offer Linux on it. But it is not a Linux server. It is an HP-UX server.
        What does it matter?

        Originally posted by kebabbert View Post
        This benchmark shows that Linux used the same cpus, running at 2.8GHz. Solaris used the same cpus, running at 2.6GHz.
        As others have said, using the same CPUs is not everything. The comparison could still be valid, but it doesn't have to be.

        Originally posted by kebabbert View Post
        I showed a link with benchmarks on same hardware.
        Right, I already forgot.

        This one, right?


        I don't really get it. They don't provide any software information. I mean, it's a Java benchmark. Are they even using the same JVM version? Red Hat has OpenJDK 6 as the default, right? But I only found the description of the Oracle system... http://www.spec.org/jbb2013/results/...326-00022.html

        I don't want to say it's all false, but there is a reason papers like these exist:

        The Chair for System Simulation deals with the modelling, efficient simulation and optimisation of complex systems in science and engineering. The main focus is on the design and the analysis of…


        Originally posted by kebabbert View Post
        z/Linux has been ported, but runs on top of z/OS (as I understand it), virtualized. Mainframes are not Linux servers, they are running z/OS.
        Apparently it's not common, but possible without problems. http://en.wikipedia.org/wiki/Linux_o...Virtualization
        No benchmarks though.

        Originally posted by kebabbert View Post
        See? My point is that there are no Linux servers offering 16 or 32 cpus for sale. Why? These servers cost many millions, and clearly there is a market opportunity. If RedHat or someone could sell such a Linux server for only half a million, all large investment banks, telcos, etc. would switch at once. But no, there are no such servers. Why is that? You tell me. Does nobody want to be rich?
        Maybe the demand is just not that big? How many modern 32 or 64 socket servers are there, regardless of the operating system?
        And how many use cases are there that require 32+ CPUs in one machine and couldn't be solved for much less money with multiple machines?

        (I don't really know. Just wondering.)

        Comment


        • #74
          Originally posted by ibidem
          Each vendor has its own favorite approach to scaling. Oracle scales first by adding more cores per box. IBM scales primarily by boosting the clock and adding blades. SGI scales by adding racks/nodes, and likewise for Cray; they found in the 90's that the most space-efficient route was to put fewer sockets in a smaller system, then put more systems in.
          So you're asking why nobody uses Oracle-style "scale up" configurations with Linux. My guess is that everyone else figured it was a dead end: if I have an M5, I can go up to 32 sockets and that's it, while if I get a comparable multi-node system, I can add more nodes and get much farther.
          There are workloads that are SMP type, so an HPC cluster cannot run such workloads.

          Why do you think that the kernel (NOW, not as it was in March 2006 when kernel 2.6.16 from SLES 10 was released) can't handle 8+ sockets when it has been used for 100-core chips like Tilera makes, and the XLPII (MIPS64, 4 threads/core, 20 cores, and somehow they can fit up to 8 cores per socket) is primarily used with Linux?
          Have you looked at the Linux scaling of 8 socket servers? It is really bad. Just because Linux runs on 100 core chips, it doesn't mean they scale well. For instance, Kraftman showed me a link to the 64 cpu HP Superdome designed for HP-UX. They had compiled Linux for it, and the benchmarks were bad. It had like 40% cpu utilization. The HP Superdome is sold today with Linux, but the largest Linux configuration for sale is... 16 cpus I think. Linux is not supported on larger Superdome configurations. I can dig a bit if you want to see the link, where it says that the 64 cpu Superdome is only offered with 16 cpus when running Linux.



          Originally posted by ChrisXY View Post
          Maybe the demand is just not that big? How many modern 32 or 64 socket servers are there, regardless of the operating system?
          And how many use cases are there that require 32+ CPUs in one machine and couldn't be solved for much less money with multiple machines?
          Some SMP workloads must be run on a "single fat" server. They cannot be solved by adding more nodes. The demand is big too; these servers cost millions. And if some Linux vendor could create a cheap 32, 64 or 128 cpu server, they would sell lots of them. And get rich.

          (I don't really know. Just wondering.)
          Sure, I wonder things too.

          It is very easy to make me shut up: prove me wrong by posting credible links. If someone posts links to, say, Oracle or researchers, I shut up. If some random guy comes in and says things without links, I am not convinced.

          My point is: Linux scales well on clusters. This is true. For instance, Google has a cluster of 900,000 servers and Linux runs on them. But Linux does not scale well on a "single fat" server. This is evidenced by the fact that no Linux server is for sale with 32 cpus. If Linux scaled the crap out of everyone else, there would be Linux servers with 32, 64, 128 and 256 cpus, as well as large clusters. Unix people have always said that Linux does not scale well: they refer to a "single fat" server. Everybody knows that Linux scales well on clusters. That is a fact. Large supercomputers are just clusters, such as the SGI Altix.
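          To make the "SMP type workload" idea concrete, here is a minimal sketch (my own illustration, not something from the thread): every worker thread updates one shared in-memory table, so the whole data set and all the workers have to live inside a single OS image with shared memory. Splitting this across cluster nodes would mean partitioning the table and exchanging updates over the network instead, which is exactly the rewrite that "scale out" demands.
          Code:
          /* Sketch of a shared-memory ("SMP type") job: all threads update
           * the same table under one lock, so everything must run inside a
           * single kernel with a single physical memory space. */
          #include <pthread.h>
          #include <stdio.h>

          #define THREADS 8
          #define SLOTS   1024

          static long table[SLOTS];                /* one copy, shared by all threads */
          static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

          static void *worker(void *arg)
          {
              long id = (long)arg;
              for (int i = 0; i < 100000; i++) {
                  pthread_mutex_lock(&lock);       /* every thread touches the same RAM */
                  table[(id * 31 + i) % SLOTS] += 1;
                  pthread_mutex_unlock(&lock);
              }
              return NULL;
          }

          int main(void)
          {
              pthread_t t[THREADS];
              for (long i = 0; i < THREADS; i++)
                  pthread_create(&t[i], NULL, worker, (void *)i);
              for (int i = 0; i < THREADS; i++)
                  pthread_join(t[i], NULL);

              long total = 0;
              for (int i = 0; i < SLOTS; i++)
                  total += table[i];
              printf("total updates: %ld\n", total);   /* expect 800000 */
              return 0;
          }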

          Comment


          • #75
            Originally posted by kebabbert View Post
            My point is: Linux scales well on clusters. This is true. For instance, Google has a cluster of 900,000 servers and Linux runs on them. But Linux does not scale well on a "single fat" server. This is evidenced by the fact that no Linux server is for sale with 32 cpus. If Linux scaled the crap out of everyone else, there would be Linux servers with 32, 64, 128 and 256 cpus, as well as large clusters. Unix people have always said that Linux does not scale well: they refer to a "single fat" server. Everybody knows that Linux scales well on clusters. That is a fact. Large supercomputers are just clusters, such as the SGI Altix.
            Too bad for you, because you didn't back up your claim with strong evidence. I'm repeating my question: what's the difference between CPUs and cores from an operating system standpoint? Furthermore, is it meaningful to run benchmarks using different components? Where is a link to the benchmark of a 64-CPU HP server with only 40% CPU utilization? And two more questions: why didn't most of the vendors follow Sun's way of making scalable systems? Isn't it because their method is a legacy one that led them to bankruptcy? Why was Sun so weak in horizontal scaling, and why has Linux beaten them in HPC?
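            On the CPUs-versus-cores part of the question: from the kernel's point of view every hardware thread is simply another logical CPU, and sockets and cores only appear as topology information used for scheduling and locality decisions. A rough sketch of how Linux exposes that (assuming the standard sysfs topology files; the program is only an illustration, not a benchmark):
            Code:
            /* List each logical CPU together with the socket ("physical package")
             * and core it belongs to, as reported by Linux sysfs. */
            #include <stdio.h>

            int main(void)
            {
                for (int cpu = 0; ; cpu++) {
                    char path[128];
                    int pkg = -1, core = -1;
                    FILE *f;

                    snprintf(path, sizeof path,
                             "/sys/devices/system/cpu/cpu%d/topology/physical_package_id", cpu);
                    if (!(f = fopen(path, "r")))
                        break;                      /* no such logical CPU: done */
                    fscanf(f, "%d", &pkg);
                    fclose(f);

                    snprintf(path, sizeof path,
                             "/sys/devices/system/cpu/cpu%d/topology/core_id", cpu);
                    if ((f = fopen(path, "r"))) {
                        fscanf(f, "%d", &core);
                        fclose(f);
                    }
                    printf("logical cpu %d -> socket %d, core %d\n", cpu, pkg, core);
                }
                return 0;
            }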

            Comment


            • #76
              Originally posted by Pawlerson View Post
              Yeah, right. HTC people care about OS price. It is also a fact that Solaris has a higher memory footprint and introduces higher overhead in comparison to Linux. You don't have to believe me, but you can check this yourself and you can even find out about this on Google.
              You should notice I'm not denying its memory footprint, nor affirming it. I don't know about it, and I couldn't care less about Solaris. What I'm saying is that JUST THE FACT THAT SERVERS MIGRATE doesn't say anything about speed or memory footprint.
              Also, HTC are not the only ones using servers, so whether or not they care about OS prices doesn't matter at all. Industry in general cares about the cost/benefit ratio. OS prices might be a negligible difference and thus be ignored completely. Staffing costs are usually important, so the fact that it is easier to find a Linux admin still matters most of the time.

              Originally posted by OpenSLOWlaris
              Lots of people agree that Linux is better than BSD/Solaris. If you don't like that, that's your own fault.
              Lots of people agree that you'll become blind if you masturbate too much.

              Especially when there are two options with advantages and drawbacks, a lot of people will agree one or the other is better, mostly because it fits their needs better.

              Originally posted by Pawlerson View Post
              Too bad for you, because you didn't back up your claim with strong evidence. I'm repeating my question: what's the difference between CPUs and cores from an operating system standpoint? Furthermore, is it meaningful to run benchmarks using different components? Where is a link to the benchmark of a 64-CPU HP server with only 40% CPU utilization? And two more questions: why didn't most of the vendors follow Sun's way of making scalable systems? Isn't it because their method is a legacy one that led them to bankruptcy? Why was Sun so weak in horizontal scaling, and why has Linux beaten them in HPC?
              The difference is that you use a message passing interface between the computers to distribute the work, and then each computer keeps working as if it were a single one. Each node runs its own OS. This allows you, for example, to use a more specific OS on the nodes, or run it with a really simple scheduler, since it will almost surely just receive a workload, do the work and send back the results. The master is the only one that needs a complete and responsive OS. That means you can use a simpler (i.e. less bloated) OS or configuration on the nodes when working with clusters, whereas on a single computer with many cores you have to deal with a complete OS trying to scale. Clusters don't require too much OS scaling, actually, since any given node controls just its own hardware.
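              A minimal sketch of the master/worker pattern described above, assuming an MPI implementation such as Open MPI is installed (the program is an illustration, not something posted in the thread): rank 0 acts as the master that hands out work and collects results, while every other rank, typically one process per node, just receives a chunk, computes, and sends the answer back.
              Code:
              /* Master/worker sketch with MPI: rank 0 distributes work and gathers
               * results; the other ranks only receive, compute and reply, which is
               * why their nodes can run a very basic OS image. */
              #include <mpi.h>
              #include <stdio.h>

              int main(int argc, char **argv)
              {
                  int rank, size;
                  MPI_Init(&argc, &argv);
                  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
                  MPI_Comm_size(MPI_COMM_WORLD, &size);

                  if (rank == 0) {                       /* the "master" with the full OS */
                      long total = 0;
                      for (int r = 1; r < size; r++) {
                          long chunk = 1000L * r;        /* stand-in for a work description */
                          MPI_Send(&chunk, 1, MPI_LONG, r, 0, MPI_COMM_WORLD);
                      }
                      for (int r = 1; r < size; r++) {
                          long result;
                          MPI_Recv(&result, 1, MPI_LONG, r, 0, MPI_COMM_WORLD,
                                   MPI_STATUS_IGNORE);
                          total += result;
                      }
                      printf("master collected total = %ld\n", total);
                  } else {                               /* a worker node */
                      long chunk, result = 0;
                      MPI_Recv(&chunk, 1, MPI_LONG, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                      for (long i = 0; i < chunk; i++)
                          result += i;                   /* stand-in for the real computation */
                      MPI_Send(&result, 1, MPI_LONG, 0, 0, MPI_COMM_WORLD);
                  }

                  MPI_Finalize();
                  return 0;
              }
              Built with mpicc and started with something like "mpirun -np 8 ./a.out", each rank runs as an ordinary process on whichever node the launcher placed it on.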
              Last edited by mrugiero; 11 May 2013, 11:31 AM.

              Comment


              • #77
                Originally posted by mrugiero View Post
                You should notice I'm not denying its memory footprint, nor affirming it. I don't know about it, and I couldn't care less about Solaris. What I'm saying is that JUST THE FACT THAT SERVERS MIGRATE doesn't say anything about speed or memory footprint.
                Also, HTC are not the only ones using servers, so whether or not they care about OS prices doesn't matter at all. Industry in general cares about the cost/benefit ratio. OS prices might be a negligible difference and thus be ignored completely. Staffing costs are usually important, so the fact that it is easier to find a Linux admin still matters most of the time.
                The point was HTC doesn't care about OS cost. It only cares about performance. Solaris was replaced by Linux in HPC.

                The difference is that you use a message passing interface between the computers to distribute the work, and then each computer keeps working as if it were a single one. Each node runs its own OS. This allows you, for example, to use a more specific OS on the nodes, or run it with a really simple scheduler, since it will almost surely just receive a workload, do the work and send back the results. The master is the only one that needs a complete and responsive OS. That means you can use a simpler (i.e. less bloated) OS or configuration on the nodes when working with clusters, whereas on a single computer with many cores you have to deal with a complete OS trying to scale. Clusters don't require too much OS scaling, actually, since any given node controls just its own hardware.
                What you have written is strange. If a single Linux kernel is running on a cluster, then how can you say any given node controls its hardware? I would like you to explain this. Furthermore, if such scaling is easy, then I don't understand why Linux has no competition in this market. Costs don't matter here.

                Comment


                • #78
                  Originally posted by Pawlerson View Post
                  The point was HTC doesn't care about OS cost. It only cares about performance. Solaris was replaced by Linux in HPC.
                  I don't claim to know about that, but you didn't mention it before my first answer to the thread.
                  I'd like it if you posted something that confirms they only care about performance and nothing else, though.

                  Originally posted by Pawlerson View Post
                  What you have written is strange. If a single Linux kernel is running on a cluster, then how can you say any given node controls its hardware? I would like you to explain this. Furthermore, if such scaling is easy, then I don't understand why Linux has no competition in this market. Costs don't matter here.
                  Well, for a start, this message passing interface isn't usually at the kernel level, and OpenMPI is portable across OSes. I don't know if it is the most used solution, but I believe it is. Most OSes have no clustering capabilities by themselves, but use this kind of software instead.

                  About the single Linux kernel, I guess I expressed it wrong. What I meant is, you only need a full GNU/Linux system on the master. Nodes only need the kernel, a simple scheduler (they manage few tasks, since most management of the workload is done on the master) and a few basic services (the only ones that come to mind are networking and the MPI, which have to be supported on all nodes). An OS is needed on every node, but all of them except the master can be pretty basic. The hardware is controlled by each node, and thus you only need to scale per node. MPI takes care of the bigger-picture scaling (how to distribute the workload in the most efficient way). If Linux offers the best performance on computers with few cores, then you'll probably get the best performance using OpenMPI (if my assumption that it's the best/most used solution is valid) with Linux. Also, admins for Linux are probably easier to find, and I THINK (but I'm not sure) it's easier to set up, etc. This step becomes important considering you must install as basic a distribution as possible on every node (and AFAIK, Linux is the simplest for something like that) and a full OS on the master. I guess the main reason most clusters use exactly the same hardware for every node has a lot to do with simplifying the setup of the nodes.
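                  A tiny illustration of that point (my own sketch, assuming OpenMPI or any other MPI implementation): every rank is an ordinary process running on its own node's kernel, and MPI only ties the processes together; the "single system" is the MPI job, not a single OS image.
                  Code:
                  /* Each rank reports which machine it is running on; the output makes
                   * it obvious that every node runs its own OS instance. */
                  #include <mpi.h>
                  #include <stdio.h>

                  int main(int argc, char **argv)
                  {
                      int rank, size, len;
                      char host[MPI_MAX_PROCESSOR_NAME];

                      MPI_Init(&argc, &argv);
                      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
                      MPI_Comm_size(MPI_COMM_WORLD, &size);
                      MPI_Get_processor_name(host, &len);

                      printf("rank %d of %d running on %s\n", rank, size, host);

                      MPI_Finalize();
                      return 0;
                  }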
                  Anyway, I don't know enough about the other possible OSes to give accurate reasons why Linux has no competition in that field. I'm just interested in the subject of clustering, so I read a little about it.

                  I must disagree about the importance of costs. Usually, you use clusters because they're the cheapest option to do the work. But since nodes aren't usually touched and licenses are paid only once (AFAIK), you could just hire an admin for whatever OS runs on the nodes (the best performing one you find) when/if they fail, and use Linux (with a full-time admin) on the master node, so costs are indeed negligible on the software side for this use.
                  Last edited by mrugiero; 12 May 2013, 03:49 AM.

                  Comment


                  • #79
                    Originally posted by Pawlerson View Post
                    We can agree and laugh at you as well, because of your 'intelligent' and 'mature' replies. Let's have a competition over which idiot posts the funnier pictures. OpenSLOWlaris is winning at the moment, because his picture represents reality: Linux - and everyone else - can do whatever they want with the BSD code. However, I don't know if that is a reason to be ashamed, because as far as I know the BSD motto is: to serve others. Btw, is it enough to become a Phoronix member to get a green light for trolling without fear of being banned?
                    That's actually a funny observation, because all of those users have been banned. For trolling.

                    Comment


                    • #80
                      Originally posted by kebabbert View Post
                      There are workloads that are SMP type, so an HPC cluster cannot run such workloads.


                      Have you looked at the Linux scaling of 8 socket servers? It is really bad. Just because Linux runs on 100 core chips, it doesn't mean they scale well. For instance, Kraftman showed me a link to the 64 cpu HP Superdome designed for HP-UX. They had compiled Linux for it, and the benchmarks were bad. It had like 40% cpu utilization. The HP Superdome is sold today with Linux, but the largest Linux configuration for sale is... 16 cpus I think. Linux is not supported on larger Superdome configurations. I can dig a bit if you want to see the link, where it says that the 64 cpu Superdome is only offered with 16 cpus when running Linux.
                      That would be nice to see, since the link indicated that larger configurations were/are possible (though the "mx2" modules, which have two Itaniums per socket, were/are not supported on Linux).

                      Some SMP workloads must be run on a "single fat" server. They cannot be solved by adding more nodes. The demand is big too; these servers cost millions. And if some Linux vendor could create a cheap 32, 64 or 128 cpu server, they would sell lots of them. And get rich.
                      ...

                      Sure, I wonder things too.

                      It is very easy to make me shut up: prove me wrong by posting credible links. If someone posts links to, say, Oracle or researchers, I shut up. If some random guy comes in and says things without links, I am not convinced.
                      Examples of those workloads would be nice. (If you choose to link to anything about them, an explanation of why they must be run that way would be the most helpful.)


                      PS: Phoronix benchmarks from this summer (http://www.phoronix.com/scan.php?pag...ris11_ubuntu12) of Solaris 11 Express vs several Linux distros look like Solaris missed quite a few benchmarks.
                      In particular, note:
                      -the NAS UA.A benchmark, page 2
                      -FFTE and BYTE benchmarks, page 3
                      -the Himeno benchmark, page 4
                      -C-Ray, page 5 (note that this is "less is better", so Solaris is by far the worst for it)

                      I would be curious to see RHEL/CentOS/SL vs OEL vs Solaris benchmarks on the same Oracle hardware. Any of the Linux distros vs Solaris would be informative, but OEL vs Solaris is the most interesting.

                      Comment
