Linux vs Solaris - scalability, etc


  • #21
    This makes sense. Google has stripped out everything from the Linux kernel, and it is hardly Linux anymore on their big 900,000-server cluster. If you want to do something highly specialized to reach maximum performance, then it makes sense to strip out everything else. For instance, you don't need fancy I/O, a complex scheduler, device drivers, etc.
    I am sure they leave the kernel as stock as possible and just edit what they absolutely need to. The more 'customizing' you do the more work it takes to maintain it. So I expect that the kernels and userland that they use on HPC is, in fact, very standard stuff. They just won't install a bunch of extra software like a typical Linux desktop or Linux server.

    I know that Google does a lot of customization, but I expect it's mostly going to be for their proprietary file systems and to cut some overhead out of certain key parts.

    However, for instance, the IBM Blue Gene on top500 that runs Linux, does not really run Linux.
    The applications they use on HPC are generally batch managed from central nodes and use special libraries that handle the message passing. The Linux side of things doesn't really do a whole lot more than just run the hardware and execute the code. Most of the work done is at the application level.
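
    To give a rough idea of what those message-passing libraries look like from the application side, here is a minimal MPI sketch (an illustrative example only, not code from any particular cluster): each rank computes a partial result and the library, not the kernel, moves the data between nodes.

    /* Minimal MPI sketch: every rank computes a partial sum, rank 0 collects
     * the total. Illustrative only - real HPC codes are vastly larger.
     * Typically launched by the batch system, e.g. "mpirun -np 1024 ./a.out". */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes in the job */

        double local = (double)rank;            /* stand-in for a real partial computation */
        double total = 0.0;

        /* All the inter-node communication happens inside the MPI library. */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks = %f\n", size, total);

        MPI_Finalize();
        return 0;
    }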

    On systems like Bluegene I believe they use Linux clusters to manage the execution of jobs for the rest of the cluster. So it's going to be something like they will have 1/4 of the cluster, or a separate small cluster of machines, that handles monitoring of job execution, scheduling, and load balancing for the rest of the cluster. That would effectively be the user interface for the rest of the system.


    What I am trying to say is that something does not add up. Something is strange. Don't you agree?
    Linux supporters: Linux handles SMP just fine.
    Me: Why doesn't Linux go for the big money then?
    Linux supporters: Because we are doing charity work.
    Me: Say again?
    I think you are confused.

    The majority of people that do work on the Linux kernel and most other systems are professionals that are paid to do it for a living. Linux is big business now and makes a lot of people a lot of money as a platform to run other software and services.



    • #22
      Originally posted by drag View Post
      I am sure they leave the kernel as stock as possible and just edit what they absolutely need to. The more 'customizing' you do the more work it takes to maintain it. So I expect that the kernels and userland that they use on HPC is, in fact, very standard stuff.
      I am not as sure as you. The easiest would be to rip out everything, and keep the minimum. I can not believe that those highly specialized supercomputers are upgrading the Linux kernel every now and then. Big servers seldom upgrade, and tend to avoid that.

      It is far easier to rip out everything but the minimum and tailor it to do only what is needed. When you try to eke out every ounce of performance, you optimize heavily and rip out everything. I am very convinced that those tailor-made huge HPC servers are using their own highly stripped-down Linux. No one runs a stock Linux on those, I very strongly suspect.



      On systems like Bluegene I believe they use Linux clusters to manage the execution of jobs for the rest of the cluster. So it's going to be something like they will have 1/4 of the cluster, or a separate small cluster of machines, that handles monitoring of job execution, scheduling, and load balancing for the rest of the cluster. That would effectively be the user interface for the rest of the system.
      Yes, Linux is used to control the compute nodes, according to the creators of the IBM Blue Gene supercomputer. How common is this? Very?



      I think you are confused.
      The majority of people that do work on the Linux kernel and most other systems are professionals that are paid to do it for a living. Linux is big business now and makes a lot of people a lot of money as a platform to run other software and services.
      Of course I am confused, Linux supporters say things that don't add up. Something is strange.

      I know there are a lot of people making a living on Linux. That does not mean those people are willing to do charity work. They want to earn money, and get rich. Linux companies most likely want to go after the high-end market where you can earn a lot of money, if the opportunity is there - but it is not. Unix has for a long time tried to go after IBM Mainframes, but there are Mainframe people saying that Unix does not cut it yet. Why don't people here suggest the Mainframe companies go for a 4096-core Linux Altix server instead, which costs a tiny fraction of a Mainframe? Maybe it is not possible? Of course, if we talk about CPU performance, then any x86 server can replace a Mainframe. But Mainframes are good at I/O, very good RAS, backward compatibility - and that is different from CPU performance.




      Regarding whether Linux has bad I/O, I am not really convinced of this. I believe that Linux has good I/O. You can always tailor Linux to do one thing well, for instance good I/O. But for Linux to be general purpose and good at everything - that is hard: SMP workloads. Linux is like a GPU, fast at one thing. Linux is not like a CPU, which can do everything.



      • #23
        The easiest would be to rip out everything, and keep the minimum.
        No, the easiest is just to leave it alone and spend time on stuff that actually matters. Stripping down a Linux kernel and bash shell environment isn't going to make your systems run faster. You may save a couple MB of RAM, but that's about it.

        I can not believe that those highly specialized supercomputers are upgrading the Linux kernel every now and then
        What I said has nothing to do with how often they upgrade their systems.

        How common is this? Very?
        I don't know for certain.

        Most of the clusters you see on the top500 use mostly stock Linux on commodity servers and use 1Gb and 10Gb Ethernet. Pretty standard stuff, believe it or not. That is the most cost effective way to get lots of CPU power.

        But these types of cluster are not appropriate for all workloads. The bottleneck is going to be the interconnects. The latency and amount of information they can share between compute nodes is limited. So if there is a lot of message passing or a large amount of data that needs to be processed then commodity-based clusters won't be as effective.

        More exotic systems like Bluegene are used in these other cases. They are much more expensive and generally have less overall capacity (since they are more expensive), but if you need the bandwidth or low-latency performance then that is what you need.


        I know there are a lot of people making a living on Linux. That does not mean those people are willing to do charity work. They want to earn money, and get rich.
        Well, that is sort of true. Companies that spend a lot of money on systems and move to Linux often shift their investment from sending money to vendors to spending money on expertise or participation in Linux development. Many are simply able to do the same amount of work using Linux as with the old expensive systems, but at a fraction of the cost. So they are able to save a lot of money by moving to Linux.

        Linux companies most likely want to go after the high-end market where you can earn a lot of money, if the opportunity is there - but it is not.
        There are lots of different 'high end' markets. Depends what you are talking about.


        Unix has for a long time tried to go after IBM Mainframes, but there are Mainframe people saying that Unix does not cut it yet.
        Mainframes handle completely different workloads than Unix servers or HPC clusters.

        But Mainframes are good at I/O, very good RAS, backward compatibility - and that is different from CPU performance.
        That's very true. (although clusters with failover capabilities have significant RAS advantages for a lot of purposes)

        My Core 2 Duo laptop has more CPU capabilities than the average mainframe currently in production. But Mainframes have I/O capabilities that dwarf even relatively expensive Unix server systems (say a couple hundred thousand dollar system).

        Another thing to keep in mind is that legacy Unix systems and Mainframe systems carry a significant investment in applications. For a business whose core application runs on a mainframe or proprietary Unix system, it can be very expensive to migrate to a different type of system. In a situation like that, the cost of the hardware and support is minor compared to the possible impact on the business and on developer time.



        • #24
          Originally posted by kebabbert View Post
          My point is that most Linux development work is towards smaller systems. Not big SMP servers.
          And? This is like arguing that because Solaris 11 has an updated graphics stack with KMS/DRI2/whatnot => Solaris doesn't scale.

          We all know that the big money is in high end Enterprise SMP servers with high margin.
          A high-margin/low volume market can be profitable, but the incumbent(s) are also very vulnerable to entrants with a high-volume/low margin business model. Especially so in computing where R&D costs dominate and the unit cost is very low (or more or less zero for software).

          There is not a lot of money in HPC. There are not too many customers doing HPC, only a few.
          And? Does that imply that General Motors should switch to producing cocaine, since that has a much higher margin than making cars?

          Well, Linux does well on HPC workloads on large NUMA machines because Linux is a strategic platform for SGI. If Linux couldn't handle the workloads their customers throw at it, they would go bankrupt. IBM/HP/Oracle are not as eager to push Linux performance for DB workloads on large NUMA systems, because they want to protect their own high-margin/low-volume business as long as possible.

          And why is it that Linux is replacing small servers, just as Windows does? There are small margins there.
          Because the volume more than makes up for the lack of margins.

          Why not target the highly profitable, expensive high-end servers? You know, IBM Mainframes bring in loooooot of money. Why doesn't Windows 2008 target IBM Mainframes? You know that MS likes to make money. Why doesn't MS go after high-end servers that cost 35 million USD? Do you really believe that Windows servers can replace Mainframes? Maybe high-end Enterprise is out of reach for Windows?
          Because market entry is difficult and expensive for various reasons, and while the margins may be high the volumes are quite low so it's questionable if there's any profit left after deducting R&D expenses. Technically, both Linux/x86 and proprietary Unixes get the job done, in most cases about equally well.

          I am well aware that Windows and Linux are mostly on low-end servers. Larry Ellison said that "Linux is for lowend, Solaris for highend".
          After spending umpteen billions on a more or less bankrupt Sun, I'm not sure Ellison is the most objective observer we can find..

          The question is, if Linux scales so well and is suitable for high-end SMP servers, why does desktop-OS Linux go for the low-end market?
          Huh? Linux goes wherever its users and developers go. There is no single guy in a suit who determines which market Linux should focus on.

          Larry Ellison said that he does not care if Oracle's low-end x86 servers go to nil; he is interested in the high-margin big contracts. He said that.
          So? As CEO of Oracle, it's his prerogative.

          Do you mean that all Linux companies are charitable and avoid going after Larry's wallet to be nice and kind? "Well, Larry needs the money more than we do, let us not go after the 35 million USD for a single Unix server, why would we? We are more interested in selling low-cost Linux servers so we don't earn money and maybe have to fire people. We are not interested in getting rich, we are doing charity work"

          Do you really believe Linux companies avoid high end profitable market of free will?
          I believe they have analyzed the situation and come to the conclusion that there's not that much profit to be made in that market.

          Do you really believe that Linux can whenever they want, snatch the high end market at any time?
          No, I don't think there will be any sudden change (vendor lock-in being a big factor, for one). I believe that Linux, Windows, and x86-64 will slowly but surely continue to eat the high-end market from below, as they have done for the past 20 years. The economics are just too compelling to ignore.

          Are you serious? Where are the big 64-CPU Linux SMP servers, made by Linux companies?
          Beyond SGI? And why do you bring this up? As I'm sure we all know, most Linux companies are software companies. They don't make HW, duh. And of course, a large part of the argument for using Linux servers in the first place is to be able to pick cheap off-the-shelf x86 hardware, rather than whatever overpriced stuff you need to run some proprietary Unix.

          Why would anyone spend many millions on a single Unix SMP server with 64 cpus
          To be honest, spending 35 million for a single 64 CPU machine sounds moronic. Perhaps the CIO is golf buddies with some vendor representative?



          • #25
            Originally posted by drag View Post
            I am sure they leave the kernel as stock as possible and just edit what they absolutely need to. The more 'customizing' you do the more work it takes to maintain it. So I expect that the kernels and userland that they use on HPC is, in fact, very standard stuff. They just won't install a bunch of extra software like a typical Linux desktop or Linux server.

            No, the easiest is just to leave it alone and spend time on stuff that actually matters. Stripping down a Linux kernel and bash shell environment isn't going to make your systems run faster. You may save a couple MB of RAM, but that's about it.
            Ok, I googled a bit on this because it seemed strange to leave a lot of unnecessary code in when you are trying to break world records. You want to optimize heavily, increase stability (as a research paper said), and gain memory (said the same paper).

            And sure enough, there are several research papers on HPC systems that talk about stripping down the Linux kernel. Here is one research paper on HPC systems.

            In order to achieve higher performance, many HPC systems run a stripped-down operating system kernel on the compute nodes to reduce the operating system "noise". The IBM Blue Gene series of supercomputers takes this a step further, restricting I/O operations from the compute nodes.
            So, it seems that it is not correct that
            "I am sure they leave the kernel as stock as possible and just edit what they absolutely need to...So I expect that the kernels and userland that they use on HPC is, in fact, very standard stuff."

            It seems that many HPC servers run stripped-down, heavily modified kernels. In some cases, they don't even run Linux, but only a thin kernel that can do nothing but compute, because IBM describes "disabling support for multiprocessing and POSIX I/O system calls" on the compute nodes. The research paper also says:
            "As of November 2009, five of the top 20 systems in the Top 500 list [19] and thirteen of the top 20 most power-efficient systems were based on the Blue Gene solution [7]"



            Most of the clusters you see on the top500 use mostly stock Linux on commodity servers and use 1GB and 10GB ethernet. Pretty standard stuff, believe it or not. That is the most cost effective way to get lots of CPU power.
            If you read some research papers, it seems that the problem is not bringing up CPU power. The problem is bringing down the power draw. The RIKEN machine, no. 1 on the list, a SPARC server, uses 12.7 megawatts. Several supercomputers are using 750 MHz CPUs. Why? Because they are energy efficient. When you have 500,000 cores, there are other considerations. But I will not say "believe it or not", you can google for yourself. Hence, many are not using the latest, fastest CPUs because they are too energy hungry. Often they use weaker but more efficient CPUs. Or, if you wish, I can post some research papers here?



            There are lots of different 'high end' markets. Depends what you are talking about.
            "high end market where you can earn lot of money" - I said. HPC are not lucrative. Sun was interested in HPC market. Oracle is not, and shutting down HPC. The reason? Too specialized, too few customers, says Larry. The big bucks are in High end Enterprise at companies. Oracle database costs a fortune on big servers. One single frickin IBM 32 cpu server at $35 million USD, list price. Come on, ONE SINGLE SERVER. Why would Linux Altix servers not go after those systems?

            Imagine a phone call to a manager at a big Wall Street bank:
            -Hi, you invested millions of USD and got only 32 CPUs. I can sell you an Altix 4,096-core server for a fraction of that. Interested?
            -No, we would rather spend millions on slow SMP servers with only 32 CPUs, just because we think it is fun. I am not interested in your offer where I can save huge amounts of money and increase performance 50x by getting many more CPUs than a measly 32.

            Is this realistic? Something does not add up, does it? Are the Wall Street banks not interested in saving money and increasing performance 50x?



            My Core 2 Duo laptop has more CPU capabilities than the average mainframe currently in production. But Mainframes have I/O capabilities that dwarf even relatively expensive Unix server systems (say a couple hundred thousand dollar system).
            It is very true that a Core 2 Duo laptop has more CPU power than an average Mainframe. You can emulate a Mainframe on a laptop with "TurboHercules":



            Another thing to keep in mind is that legacy Unix systems and Mainframe systems carry a significant investment in applications. For a business whose core application runs on a mainframe or proprietary Unix system, it can be very expensive to migrate to a different type of system. In a situation like that, the cost of the hardware and support is minor compared to the possible impact on the business and on developer time.
            I know several big companies that are planning to get off Oracle's database; it is too expensive, they say. If you save millions, you do that. Slowly. Linux has existed for 20(?) years now. And still no one has migrated big 64-CPU SMP servers to Linux.
            Last edited by kebabbert; 16 November 2011, 11:54 AM.



            • #26
              So, it seems that it is not correct that
              "I am sure they leave the kernel as stock as possible and just edit what they absolutely need to...So I expect that the kernels and userland that they use on HPC is, in fact, very standard stuff."

              It seems that many HPC servers run stripped-down, heavily modified kernels. In some cases, they don't even run Linux, but only a thin kernel that can do nothing but compute, because IBM describes "disabling support for multiprocessing and POSIX I/O system calls" on the compute nodes. The research paper also says:
              "As of November 2009, five of the top 20 systems in the Top 500 list [19] and thirteen of the top 20 most power-efficient systems were based on the Blue Gene solution [7]"
              You're mixing things up.

              Are you talking about just Blue Gene and a few very specialized top500 machines?

              Or are you talking about the bulk of the top500 systems?

              These are two very different things. When you are asking about 'most HPC', what I say holds. If you are talking about Blue Gene and the other top-tier systems, then they are very different beasts.

              If you read some research papers, it seems that the problem is not bringing up CPU power.
              What researchers talk about and what people actually do are two very different things.

              This will give you an idea:
              Interconnect          Count   Share    Rmax sum (GFlops)   Rpeak sum (GFlops)      Cores
              Gigabit Ethernet        224   44.8 %         14276444.4           26754468.5    2510404
              Infiniband              209   41.8 %         28689327.2           41678538.3    2867822
              Custom Interconnect      29    5.8 %           17845186           20520144.7    1529992
              Proprietary Network      22    4.4 %            9859604           14303556.1    1810468
              Cray Interconnect         6    1.2 %            2561153            3325846.6     358016
              Myrinet                   4    0.8 %             412391               553262      59885
              NUMAlink                  2    0.4 %             107961             121241.2      18944
              SP Switch                 1    0.2 %              75760                92781      12208
              Mixed Network             1    0.2 %              66567                82944      13824
              Fat Tree                  1    0.2 %             122400               131072       1280
              Quadrics                  1    0.2 %              52840              63795.2       9968
              Sums                    500    100 %        74069633.68         107627649.54    9192811

              So 44.8% are using Gigabit Ethernet. Only about 2.8% of these machines are even using 10Gb/s; the bulk are using just plain old 1Gb/s.
              Infiniband: 41.8%.

              The thing that Infiniband and Ethernet have in common is that they are cheap and off the shelf. These systems just use regular old CPUs like you'd use in your desktop or server - Xeons or whatever.

              The top500 is just a mishmash of different systems. What is typical is not universal.

              "high end market where you can earn lot of money" - I said. HPC are not lucrative. Sun was interested in HPC market. Oracle is not, and shutting down HPC. The reason? Too specialized, too few customers, says Larry. The big bucks are in High end Enterprise at companies. Oracle database costs a fortune on big servers. One single frickin IBM 32 cpu server at $35 million USD, list price. Come on, ONE SINGLE SERVER. Why would Linux Altix servers not go after those systems?
              Why do you think that everybody who can is running away screaming from these million-dollar machines? Why do you suppose the 'high end' Unix market has done nothing but shrink?

              I don't know what to say to you. You are all over the map here and talking about all these unrelated systems as if you have some sort of point.


              As far as Oracle goes... Oracle doesn't give a shit about Sparc, Solaris, Linux, or anything else. High end, low end, clustering, blah blah blah. It's all irrelevant. They will provide what customers want to see, but that is it.

              What Oracle cares about is:
              * Java
              * Their vertical application stacks that run on Java
              * SQL database software needed for those applications.

              Systems like Solaris are just going to continue to die a slow slow painful death over the next decade or two.



              • #27
                Originally posted by kebabbert View Post
                Again, I compare hardware that is roughly the same in performance.
                No, you don't.

                There is not a big difference in performance between the Solaris and Linux servers. When Phoronix does the benchmarks, Phoronix uses the same hardware. When TomsHardware does GPU benchmarks, the rest of the hardware stays the same: same CPU, same mobo, etc. What would people say if TomsHardware compared the latest Nvidia + the latest x86 CPU vs an old ATI + Pentium 4? Would this be fair? No. But why do you keep doing this every time?
                You're doing this. You compared a system with double the amount of RAM and a faster database, and that system was much more expensive, so the hardware was better overall. Why do you keep doing this every time?

                You have several times compared a 3x faster Linux server to an old Solaris server. Is this fair?
                And you have done what I wrote before. Is this fair?

                I have seen benchmarks of 32-CPU Solaris servers, but I would never compare such a Solaris server to a dual-CPU Linux server and say "this proves Solaris is faster". That would be fanboyish of me to do. Biased. I want to learn and see which system is best. Therefore I want an objective comparison to learn from. I do not want to see propaganda; I do not want to see a 64-CPU Solaris server vs a 2-CPU Linux server - that would say nothing.
                While you don't want to see propaganda, why do you assume I want to see it? Your revelations are simply propaganda.

                If we discuss Linux vs Solaris, then we need to have roughly similar hardware. I think it is strange that you don't agree on this. Do you think the latest NVidia vs an old ATI 4850 would be fair? Strange.
                It's you who started comparing different systems, and now you're asking why I'm doing it. That's strange.

                It is well known that different vendors charge different prices. For instance, IBM charges hundreds of millions of USD for their Mainframes. And the IBM Mainframes are really slow CPU-wise. An 8-socket x86 server is almost as fast as the biggest IBM Mainframe with 24 CPUs.
                It is well known that much more expensive hardware uses better components.

                The price is not important, nor is the brand. What is important is that the hardware is similar. If both servers are using 6-core AMD Opteron 8435 CPUs then that is good. You are comparing the world's fastest x86 CPU vs an old AMD Opteron. The Intel CPU is 3x faster. Is this fair?
                If you're saying it's important to have similar hardware, why were you showing me comparisons between dissimilar systems?

                Regarding my claim that the 10-core Westmere-EX gives a huge advantage: yes, I have already shown you the SAP benchmarks where Westmere-EX was 52% faster than an old 12-core AMD CPU. The older AMD Opteron 8435 has 6 cores, so it has less than half the performance of this 12-core AMD CPU. Thus, the Westmere-EX is 3x faster than the old AMD Opteron 8435 that Solaris used.
                Great, but we're talking about 40 cores compared to 48 cores. Do you have something like that?

                Do you really think it is fair to use 3x faster CPUs in a comparison? The Westmere-EX was released this year; the server is brand new with new DRAM technology, etc. I really don't understand your concept of "fairness". Why do I have to explain these things, as to a small child?
                You behave like a child and it seems you don't understand different things. When you show different systems it's OK, but when others do this then it's NOT. Isn't this childish?

                Regarding whether Solaris is bloated or not, it does not matter, as long as Solaris is winning benchmarks on similar hardware. Regarding whether Solaris is bloated, it is only in your mind; you have never shown any links to prove that. I have asked you many times. Regarding whether Linux is bloated, several Linux kernel developers have said that, including Linus Torvalds. Unless you call Linus Torvalds a liar, he speaks the truth and Linux is bloated. But maybe Linus Torvalds and all the other kernel devs are lying?

                1) You have said numerous times that Solaris is bloated, but have never shown any links or proof. I have asked many times for more information on this. So I would like you to prove this now. If you have never read this or heard this, why are you saying this? Are you FUDing? Earlier, you confessed you FUD. Is this just more of your FUD?
                Oh, it does matter. Its bloat is one of the reasons slowlaris loses in benchmarks. I explained Linux to you. When it comes to slowlaris, if its highly optimized binaries are 30% slower than standard Linux ones, that suggests slowlaris is hugely bloated.

                2) You have said many times that Oracle is killing Solaris, but you have never shown any links or proof. So I would like you to prove this now; I have asked you many times. Go ahead. Or is this also FUD?
                Indeed. They're going to kill the crap. btrfs will be used as the default in Oracle's Linux, and btrfs can simply replace zfs. Why should they keep legacy slowlaris with 30% slower binaries when they can use faster Linux?

                3) You have said that Solaris is slow. You have compared 3x faster Linux servers to Solaris; that does not prove Solaris is slow. That only proves that your sense of fairness is strange. Go ahead and prove that Solaris is slow, on similar hardware. I have shown you links where Solaris holds several world records, being the fastest in the world on 4-CPU servers, and similar.
                That proves you don't accept some arguments, but at the same time you want others to accept yours, which are similar. I don't care about slowlaris records, because Linux has many records.

                Que? HPC and SMP are not the same thing. If they were the same thing, they would not have different names. Here is more info on SMP servers and HPC servers:



                HPC servers are not general purpose servers; they are specialized and do only one thing well: calculations. Everything is ripped out of the kernel.
                SMP servers are general, and can be used for everything, including calculations.
                HPC servers are similar to a graphics card: they are very fast at calculations but very weak at general purpose work. CPUs are general purpose and can be used for calculations, but graphics cards are faster at calculations.
                Thus, a CPU can do everything, including calculations.
                Thus, a GPU can only do calculations and nothing else.

                SMP can do everything, including HPC work.
                HPC can only do calculations and nothing else.
                That's a funny piece of text. Thankfully, we know SMP can be done on HPC systems. We also know the biggest SMP systems (NUMA, actually) are running Linux. This proves Bonwick spread FUD, like one of the Linux devs said.

                Now you say again that Solaris is bloated. Again, prove this. You can not just make up negative things without any evidence, because the definition of FUD says:
                http://en.wikipedia.org/wiki/Fear,_u...inty_and_doubt
                I don't FUD.



                There are things in Solaris that are slower than other
                operating systems. We haven't spent a lot of time optimizing small
                process fork performance, and many of our very short cmds do
                more localization processing on startup than is actually necessary.
                So, even a person from "Solaris Kernel Performance" agrees with me.

                4) In short, if you are just putting out negative and false information to undermine Solaris, then you are FUDing. Let me ask you, are you FUDing now? Or are you telling the truth? If you are telling the truth, then you can show us links that prove you are speaking the truth.
                In short, I'm telling the truth about slowlaris.

                This Linux Big Tux server you speak of is the HP Unix Superdome server, just as I mentioned. HP has just recompiled Linux onto the HP Unix server. This Superdome server uses nPars, that is, it is partitioned into virtual servers consisting of 4 CPUs and RAM, or 6 CPUs, etc. Thus, you can carve out several small servers and run different OSes on this Superdome server. For instance, you can run Windows in one nPar and at the same time run Linux in another nPar. Now, this Big Tux server you talk about is Linux running in one nPar. This Linux nPar can have at most 16 CPUs if you cluster two 8-CPU nPars.

                Thus, you cannot run Linux on this server and use all 64 CPUs. The biggest configuration you can run Linux on is an 8-CPU nPar clustered with another 8-CPU nPar. The Linux installation on this Big Tux server uses at most 16 CPUs (using a cluster).




                It is the same document, I think. At the bottom of page 5 and top of page 6, it says
                "64 Processors / 128 Cores
                Maximum nPars 16 (if you use two 8-nPar clusters)"

                So again, there are no big SMP Linux servers on the market. Sure, you can take Linux and recompile it on a big Unix SMP server from Sun, IBM, or HP. But there are no big SMP Linux servers. And, frankly, I suspect that Linux running on an 8-CPU nPar has problems using all 8 CPUs well, because this is an SMP server, and general purpose. As we have seen, Linux excels at doing only one simple task: calculations. But as soon as there are more complex workloads, Linux has problems.


                We know you're wrong. It was a NUMA system, and you learned something new about this. Instead of Big Tux, it's enough to mention SGI to show how scalable Linux is.

                Again, so much text wasted.

                Thus Jeff Bonwick was right; Linux scales badly on SMP servers. But everyone agrees that Linux scales excellently on HPC servers.
                As proven, Bonwick was wrong and he spread FUD. Everyone knows Linux scales excellently on HPC (including SGI SMP systems).

                You have many times said that I FUD. Prove it, or it is you that FUD about me.
                You said many times that I FUD, but you didn't prove it. Some people have already proven you FUD.

                You reject this test. In this test 16 SSD disks are used, and ZFS is faster than BTRFS. You say it is not relevant because BTRFS is unstable. You ARE rejecting this test, don't you understand?
                "I don't reject this test, but this test is not relevant because BTRFS is unstable" - this is a rejection. Don't you understand what you are saying?
                I'm rejecting it as a final judgment of which file system is faster, because btrfs isn't stable yet, so this benchmark doesn't show final btrfs performance.

                You also accepted tests of unstable OpenSolaris vs Linux in Phoronix benchmarks.
                30% slower binaries suggest it doesn't really matter whether you're benchmarking a final slowlaris version or not. Like I said, there were probably "stable" versions benchmarked, too.

                When Linux is unstable, you reject the tests. When Solaris is unstable, the test is good and fair. What is the matter with you, Kraftman? Are you serious, or are you joking? You have also compared 3x faster Linux servers to old Solaris servers several times. Kraftman? What is going on? Have you been joking all the time?
                No, I'm not joking. You have agreed Linux is faster on smaller systems, so why does it matter whether you benchmark a stable or unstable Solaris version when (as you agreed) it's slower? Btw, there were non-final Ubuntu versions benchmarked sometimes. I wonder if you're serious or not?

                If I FUD, then quote me. Cite a post where I FUD and disprove my post with a link. If you cannot, then you are just FUDing about me, don't you agree?
                It was proven you FUD.

                But I will tell you, Kraftman, you have a chance to prove you are not FUDing now. Prove every negative claim you have made about Solaris and about me, and I will agree you are not FUDing. I have always shown links for every criticism of Linux I have cited from Linux kernel devs. I have not made up anything, I have not written "false" information - everything is true. Linus Torvalds did say Linux is bloated; I am not lying about this. Everything I said, I can show links for. Thus, I am not FUDing. So, go ahead Kraftman, prove that you are correct, and prove that I have been lying and FUDing - disprove me. Go ahead, show links.
                I disproved you many times. For example, when it comes to Linux being bloated, but you still don't get it.

                Regarding a fine price, I don't know. If you call a dinner a fine price, then go ahead. But you have 4 questions to answer now: questions 1), 2), 3), and 4). I am waiting for your answers, Kraftman.
                I hope you'll try something new next time, because it's boring and too easy.
                Last edited by kraftman; 16 November 2011, 03:57 PM.



                • #28
                  Originally posted by kebabbert View Post
                  I am flattered that you invest lots of energy and time to learn more about me, Kraftman. Do you think of me often? If you want to know more about me, you can just ask me instead of googling around.
                  I spend up to one hour in these forums, so I don't waste too much energy on you. The links are here just to show that your argumentation is the same in different places too, and to show your relationship with Sun. Too bad Sun's dead.



                  • #29
                    Originally posted by kraftman View Post
                    I spend up to one hour in these forums, so I don't waste too much energy on you. The links are here just to show that your argumentation is the same in different places too, and to show your relationship with Sun. Too bad Sun's dead.
                    Sun wasn't too bad; it's Oracle who has made it their hobby to get in the world's face after acquiring them. They hopped on a sinking ship and are going the way of SCO. Friggin' trolls.

                    As for this guy who's going on and on about scalability:

                    -You are generalizing scalability between several, very differently built systems, built for very different purposes and using that as evidence for Solaris superiority. Linux has blanket market share here, that is reality.
                    -In your defense, many sysadmins agree: "If you care about your data it is on ZFS." ZFS is solid, possibly the best product to ever emerge from Sun. That doesn't mean that the hardware will survive a natural disaster. Linux filesystems on RAID with enough redundancy will grant you the same amount of reliability and performance as ZFS - it's just that in ZFS's case you need less redundancy. Big friggin' deal when you're spending a few million on a number-crunching, I/O-munching server.
                    -Finally, you seem to forget that the kernel and filesystem developers develop kernels and filesystems. They're designed to be versatile for common use cases, not exotic use cases. How they are used is up to the implementer, and users/sysadmins will always surprise you with new use cases, some of which you cannot readily test.



                    • #30
                      Originally posted by kazetsukai View Post
                      -You are generalizing scalability between several, very differently built systems, built for very different purposes and using that as evidence for Solaris superiority. Linux has blanket market share here, that is reality.
                      I agree that Linux dominates the HPC market, but there are no big Linux SMP servers out there. That is reality.


                      -In your defense, many sysadmins agree: "If you care about your data it is on ZFS." ZFS is solid, possibly the best product to ever emerge from Sun. That doesn't mean that the hardware will survive a natural disaster. Linux filesystems on RAID with enough redundancy will grant you the same amount of reliability and performance as ZFS - it's just that in ZFS's case you need less redundancy.
                      I don't share your viewpoint. Several sysadmins refuse to let any storage solution younger than a decade, such as ZFS, into their Enterprise server halls. ZFS is too young and not mature, they say. Just the other week, ZFS actually turned 10 years old. ZFS still has bugs today. When BTRFS reaches v1.0, it will take many years before it is let into Enterprise server halls.

                      Regarding whether Linux filesystems on RAID will grant you the same reliability: no. Research shows that common filesystems such as XFS, JFS, NTFS, etc. are not designed to catch data corruption. (Ontrack reports that 56% of data loss is due to system and hardware problems.)

                      OTOH, researchers show that ZFS does catch data corruption. I have links on this, if you want to read more.
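
                      The mechanism being argued about here is end-to-end checksumming. A toy sketch of the principle (my own illustration; the hash and layout are made up and are nothing like ZFS's actual fletcher4/sha256 on-disk format): store a checksum with each block on write, recompute it on read, and refuse to return data that no longer matches.

                      /* Toy end-to-end checksum: keep a checksum alongside each block when
                       * writing, recompute and compare when reading. If the disk silently
                       * flips bits, the mismatch is detected instead of returning bad data. */
                      #include <stdint.h>
                      #include <string.h>
                      #include <stdio.h>

                      #define BLOCK_SIZE 4096

                      struct block {
                          uint8_t  data[BLOCK_SIZE];
                          uint32_t checksum;          /* stored with the block's metadata */
                      };

                      /* Simple FNV-1a hash as a stand-in for a real filesystem checksum. */
                      static uint32_t fnv1a(const uint8_t *p, size_t n)
                      {
                          uint32_t h = 2166136261u;
                          for (size_t i = 0; i < n; i++) {
                              h ^= p[i];
                              h *= 16777619u;
                          }
                          return h;
                      }

                      static void write_block(struct block *b, const uint8_t *src)
                      {
                          memcpy(b->data, src, BLOCK_SIZE);
                          b->checksum = fnv1a(b->data, BLOCK_SIZE);
                      }

                      /* Returns 0 on success, -1 if silent corruption is detected
                       * (a real system would then fetch a good copy from a mirror). */
                      static int read_block(const struct block *b, uint8_t *dst)
                      {
                          if (fnv1a(b->data, BLOCK_SIZE) != b->checksum)
                              return -1;
                          memcpy(dst, b->data, BLOCK_SIZE);
                          return 0;
                      }

                      int main(void)
                      {
                          static uint8_t buf[BLOCK_SIZE], out[BLOCK_SIZE];
                          struct block blk;

                          write_block(&blk, buf);
                          blk.data[10] ^= 0x01;   /* simulate a silent bit flip on the disk */
                          printf("%s\n", read_block(&blk, out) == 0 ? "data ok" : "corruption detected");
                          return 0;
                      }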


                      -Finally, you seem to forget that the kernel and filesystem developers develop kernels and filesystems. They're designed to be versatile for common use cases, not exotic use cases. How they are used is up to the implementer, and users/sysadmins will always surprise you with new use cases, some of which you cannot readily test.
                      I am not denying that Linux scales on HPC. I am trying to defend Jeff Bonwick (creator of ZFS), who said Linux scales badly. Well, Linux scales badly on SMP servers, but it scales excellently on HPC servers.

