Linux vs Solaris - scalability, etc


  • #31
    Originally posted by jabl View Post
    And? This is like arguing that because Solaris 11 has an updated graphics stack with KMS/DRI2/whatnot => Solaris doesn't scale.
    I am pointing out that Ted Ts'o recently explained that systems with around 32 cores are considered exotic hardware, and that he has only recently started to work on such big systems because they will soon become widely available.

    If the creator of ext4 did not, until now, have access to such big systems, do you expect ext4 to handle the workload well on many cores? Why is he only now starting scalability work on as many as 32 cores?

    Ergo, most Linux developers are using desktops. Linux was a desktop OS that grew into servers. Just like Windows, it was a desktop OS trying to conquer the server space too. Windows has not succeeded yet. Linux fares better, though, as Linux is superior to Windows.

    And, as we know, desktops don't have massive numbers of CPUs or disks. I find it telling that Linux kernel devs today say that 32-core systems are big, and that they never had access to such big SMP systems. My point is that Linux is not made for big SMP servers, as explained by Ted Ts'o.

    What does this have to do with "Solaris has an updated graphics stack, therefore Solaris does not scale"? Quite strange reasoning.



    And? Does that imply that General Motors should switch to producing cocaine, since that has a much higher margin than making cars?
    Strange reasoning, again. GM makes cars and could not easily switch to drugs; they know nothing about that business. Cars are very, very different from drugs, right?

    People here say Linux scales best in the world: it scales on HPC, it scales on SMP. If Linux chooses to go after SMP servers, that should be no big deal; SMP servers are the same as HPC servers, people say, no big difference. So why doesn't Linux go after both HPC and SMP? No, Linux leaves the big-bucks SMP market alone. Why? Because "it is vulnerable". Doesn't this explanation sound a bit... strange? They don't want to become millionaires because... the market is vulnerable. Well, Oracle thrives in that market, and that market shows no sign of vulnerability. IBM is there too, HP too, etc. Only Linux "chooses" not to go there. Of its own free will. Strange?




    IBM/HP/Oracle are not as eager to push Linux performance for DB workloads on large NUMA systems, because they want to protect their own high-margin/low-volume business as long as possible.
    This is a possible explanation. However, it does not hold water. Linux is open; there are bound to be other companies that want to push Linux into the traditionally lucrative, big-bucks markets to snatch those millions.
    -Hey boss, I see only expensive IBM/HP/Oracle on the SMP market, and they charge gazillions of USD for a single server with a database. We should do that too: we can sell an SGI Altix server with thousands of cores for a fraction of the price. We will sell many, many servers. Everyone will evaluate our 50x faster, much cheaper server and then start to buy! Let us slap Linux onto an SGI Altix system with thousands of cores!
    -No, I don't want to earn money. That high-end market has made Oracle/HP/IBM a fortune and employs hundreds of thousands of people at these companies; they are vulnerable. They might go bankrupt at any time. IBM is the oldest IT company, 100 years old, and it might go bankrupt at any time. Oracle has billions in cash in its war chest, and it might lose that money at any time.
    -Say again?
    -Yes, it is true. Sun lost all its high-margin contracts and was forced into low-margin business, which seems much better. Let us do low-margin business, just like Sun, instead. It is better for us.



    Because the volume more than makes up for the lack of margins.
    Do you believe this yourself? Don't you know that IBM sold off its huge PC division because the margins were too low?

    And HP's CEO Apotheker recently wanted to sell off the PC division. Why? It had 4 billion in revenue, the volume was huge, and HP was the biggest PC seller. Answer: Apotheker said the margins were too low.

    High volume, low margin is not good business.



    Because market entry is difficult and expensive for various reasons, and while the margins may be high the volumes are quite low so it's questionable if there's any profit left after deducting R&D expenses. Technically, both Linux/x86 and proprietary Unixes get the job done, in most cases about equally well
    R&D expenses? Are you kidding? Linux is OPEN. EVERYBODY does R&D! You just take a Linux kernel and you don't have to do any R&D yourself! R&D is free; it costs nothing, because everyone else is doing it. And besides, Linux does not need additional R&D because it scales so well, right?



    I believe [Linux companies] have analyzed the situation and come to the conclusion that there's not that much profit to be made in that [high end] market.
    This is probably the correct conclusion. Yes. Good reasoning. Why don't Linux companies go into big SMP servers? Because there is no profit for them to make. We see that Oracle makes gazillions of USD. IBM sells a single Unix server with 32 CPUs for 35 million USD, list price. Thus, there is LOTS of money to be made here. This is how you build a multi-billion-dollar company with hundreds of thousands of employees. The reason Linux doesn't go in there is the same reason Windows doesn't go into that market: because Windows does not cut it. If Linux and Windows were able to handle the workload, they would quickly go into that billion-USD market.

    Windows is sold on big SMP systems, such as the Itanium-based HP Superdome, but no one used Windows on that server. MS has now stopped development of Windows for Itanium CPUs, because HP did not sell enough Superdome SMP systems with Windows on them.



    No, I don't think there will be any sudden change (vendor lock-in being a big factor, for one). I believe that the Linux, Windows, and x86-64 will slowly but surely continue to eat the high end market from below, as they have done for the past 20 years. The economics are just too compelling to ignore.
    You know, it is quite easy to migrate from Oracle DB on Solaris to Oracle DB on Oracle's Linux distro. Vendor lock-in is not a factor, as long as you stay with the same vendor.

    Of course Linux and Windows will continue to eat into the high-end market from below. And if Linux and Windows could eat into it from above, they would do that as well. But they can't. Windows cannot. Linux cannot.



    Beyond SGI, or why do you bring this up? As I'm sure we all know, most Linux companies are software companies. They don't make HW, duh. And of course, a large part of the argument for using Linux servers in the first place is to be able to pick cheap off-the-shelf x86 hardware, rather than whatever overpriced stuff you need to run some proprietary Unix.
    Again: if there is an empty niche in a billion-USD market, someone will take it. Some software company interested in making money will slap Linux onto SGI Altix servers, sell them for a fraction of the price, and become rich, and every Wall Street investment bank will run their systems.



    To be honest, spending 35 million for a single 64 CPU machine sounds moronic. Perhaps the CIO is golf buddies with some vendor representative?
    It was for 32 CPUs, not 64. IBM sells no 64-CPU Unix server, because IBM has scaling problems. Just recently the IBM P795 was released, and IBM had to rewrite AIX for it to be able to scale to the P795, because it now has 8 cores per CPU. That is 256 cores, and the old, mature, enterprise AIX had never handled that many cores before.

    Why do companies shell out millions of USD for a single big SMP server? Is it because they like it? Are they dumb? If Linux can do all that, but 50x faster and for a fraction of the price, why doesn't everyone migrate to Oracle Linux and Oracle Database instead? It is easy to do.




    We have two explanations for why companies don't buy big Linux SMP servers. Which one is more reasonable?

    1) Linux can handle everything, both SMP and HPC. The reason Linux does not snatch the lucrative high-end market worth billions is that it is not profitable; it is vulnerable. Linux could take that market if Linux companies wanted to, but they don't want to. They have better things to do than become millionaires. No Linux company wants to become the next IBM or Oracle or HP or Google or Apple. They don't want that. Why? (I don't know; ask the Linux supporters here.)

    2) Linux and Windows dearly want to go into the lucrative high-end market, but Linux and Windows do not cut it. That is the reason no one makes big SMP Linux servers. Even if you recompile Linux on an existing big 64-CPU SMP server such as the HP Unix Superdome, the biggest Linux configuration is 8 CPUs (16 CPUs if you make a cluster of two 8-CPU nPars).

    Which explanation do you think is more reasonable?









    After spending umpteen billions on a more or less bankrupt Sun, I'm not sure Ellison is the most objective observer we can find..
    Agreed, he might be biased. But he has also said that the Oracle database is most common on Solaris. Even before he bought Sun, Solaris was the preferred platform for the Oracle DB. This shows he held Solaris in high regard before he bought Sun. Also, there are more Solaris installations today than HP-UX and IBM AIX installations combined. Thus, Larry just continues to say today what he said earlier: "Solaris is for the high end and is the premiere platform for the Oracle DB, Linux is for the low end. Solaris is the best Unix out there."



    A high-margin/low volume market can be profitable, but the incumbent(s) are also very vulnerable to entrants with a high-volume/low margin business model. Especially so in computing where R&D costs dominate and the unit cost is very low (or more or less zero for software).
    If Linux is as good as you say, if Linux can replace big SMP Unix servers, why don't some Linux companies do it? If Linux were that good, some Linux companies would go for the low end, and some Linux companies would go for the lucrative high-end contracts. But no, there are only low-end Linux contracts. Why is that? And you answer: "the high-margin market is vulnerable, therefore Linux companies avoid making big money"? Do you believe this yourself?



    • #32
      Originally posted by drag View Post
      You're mixing things up.

      Are you talking about just Blue Gene and a few very specialized top500 machines?

      Or are you talking about the bulk of the top500 systems?

      These are two very different things. When you are asking about 'most HPC', what I say holds. If you are talking about Blue Gene and the other top tier systems, then they are very different beasts.
      Maybe you missed it, but according to the research paper I quoted earlier:
      In order to achieve higher performance, many HPC systems run a stripped-down operating system kernel on the compute nodes to reduce the operating system "noise". The IBM Blue Gene series of supercomputers takes this a step further, restricting I/O operations from the compute nodes.
      The researchers say "many HPC systems", not a few.



      What researchers talk about and what people actually do are two very different things.
      This will give you an idea:
      So 44.8% are using Gigabit Ethernet. Only about 2.8% of these machines are even using 10 Gbit/s; the bulk are using just plain old 1 Gbit/s
      Infiniband 41.8%
      The thing that infiniband and ethernet have in common is that they are cheap and off the shelf. They just use regular old cpus like you'd use in your desktop or server. Xeons or whatever.
      The top500 are just a mishmash of different systems. What is typical is not universal.
      I don't understand this. You show that HPC systems lack bandwidth. This is well known; I have talked about this earlier. I quoted researchers who said that I/O was a problem, which is the reason IBM Blue Gene designs compute nodes without I/O. What is your point? I have never tried to deny that HPC systems lack bandwidth. I have claimed that HPC systems are specialized, run stripped-down kernels, and do only one thing.

      Supercomputers ... tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks

      many problems carried out by supercomputers are particularly suitable for parallelization (in essence, splitting up into smaller parts to be worked on simultaneously)...For this reason, traditional supercomputers can be replaced...by "clusters" of computers of standard design, which can be programmed to act as one large computer.
      I have, however, claimed that HPC systems have other difficulties; one of the greatest problems is power consumption. HPC systems are very different and very specialized, and they have other problems, with low power usage being a high priority. The current number one, the SPARC-based K computer at RIKEN, costs about 10 million USD every year in power. It uses about 10 MW.
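      A rough check of that power bill (a sketch; the ~0.11 USD/kWh electricity price is an assumed average rate, not a quoted figure):

      ```python
      # Rough check of "10 MW costs ~10 million USD per year in power".
      # The electricity price below is an assumption, not a quoted number.
      megawatts = 10
      hours_per_year = 24 * 365
      usd_per_kwh = 0.11

      kwh_per_year = megawatts * 1000 * hours_per_year   # 87,600,000 kWh
      cost_usd = kwh_per_year * usd_per_kwh
      print(f"~{cost_usd / 1e6:.1f} million USD per year")  # ~9.6 million USD
      ```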

      If a supercomputer is too expensive to run, it is useless. Energy consumption is one of the biggest hurdles today.

      "The problem is now we can't make them go any faster. So we can cram more things on the chip, but if you make them go fast, it's so hot they'll melt."

      But I agree that supercomputers use commodity hardware where possible, just as you say. In essence, a supercomputer is a cluster, a bunch of ordinary CPUs. But the software is highly specialized and tuned.



      Why do you think that everybody that can is running away from these million dollar machines screaming? Why do you suppose the 'high end' Unix market has done nothing but shrink?
      The reason the high-end market shrinks is that Windows and Linux are getting progressively better and can intrude more and more. Linux and Windows can handle larger and larger workloads today. When Linux/Windows get good enough, they will venture into the high-end, high-margin, billion-dollar market. But they are not there yet. Not because they don't want to, but because they technically cannot.



      I don't know what to say to you. You are all over the map here and talking about all these unrelated systems as if you have some sort of point.
      Well, I do have a point. I have seen some examples of strange reasoning here, without any point at all.



      As far as Oracle goes... Oracle doesn't give a shit about Sparc, Solaris, Linux, or anything else. High end, low end, clustering, blah blah blah. It's all irrelevant. They will provide what customers want to see, but that is it.
      Agreed. Oracle is interested in making money. They don't care how, actually.



      Systems like Solaris are just going to continue to die a slow slow painful death over the next decade or two.
      If Linux scales to 32 cores after 20 years of development, I suspect big SMP servers will thrive for decades yet. People said mainframes would die, but IBM is making huge amounts of money on mainframes today. Mainframes, which are really old, are still thriving, for a reason. An x86 Linux box could not replace a mainframe workload. CPU-wise, yes; but not I/O-wise.



      • #33
        Originally posted by kebabbert View Post
        If the creator of Ext4 did not until now, have access to such big systems, do you expect Ext4 to handle the workload well on many cores?
        "the workload"? What workload?

        But no, I don't think, and I've never claimed, that ext4 has some über-good scalability. The SGI Altix systems, FWIW, use XFS AFAIK. XFS has been designed to perform well on the workloads SGI customers run.

        Why is he starting scalability work on as many as 32 cores?
        Why not? What's so special about 32?

        Ergo, most Linux developers are using desktops.
        And? Most developers, Linux or not, use desktops. Scalability comes from design, and validating the finished design by benchmarks on relevant systems. That doesn't require every developer to have refrigerator sized boxes sitting by their desks.

        Just like Windows, it was a desktop OS, trying to conquer serverOS too. Windows have not succeeded yet.
        Uh, what? Much as I prefer free software and Unix-like systems, claiming that Windows server is anything but a spectacular success makes one wonder which planet you live on. Here on planet earth, Windows is one of the most common server OSes.

        What has this to do with Solaris updated graphic stack -> Solaris does not scale? Quite strange reasoning?
        It's an example of the same reasoning that you used to claim that Linux doesn't scale ("most Linux developers are not working on scalability"). Sounds ridiculous, doesn't it?

        Strange reasoning, again. GM does cars and would not easily be able to switch to drugs, they know nothing about it. Cars are very very different from drugs, right?
        No different from saying that companies in the HPC market should switch to the large DB server market because the margins are better, as you claimed.

        People here say Linux scales best in the world, it scales on HPC, it scales on SMP.
        Scaling is very workload dependent, it's just inane to claim that something scales without specifying the workload. For some workloads, such as HPC-style workloads on CC-NUMA machines, Linux scales very well indeed. Thanks in large part to SGI, whose customers rely on it.

        Well, Oracle thrives in that market and that market shows no sign of vulnerability.
        No sign of vulnerability? Why did Sun go down the drain then?

        Linux "chooses" to not go there.
        Once again, Linux doesn't "choose" anything or go anywhere. Linux is just software contributed to by various people and organizations with very different motivations. Can you specify some specific person or organization that you think should have a go at the high-end DB server market with Linux (assuming for the sake of argument that Linux is technically up to the task)?

        Linux is open. There are bound to be other companies that want to push Linux into traditional high lucrative big bucks markets to snatch the millions.
        Can you name some then? Companies that have the hardware, storage, consulting and other services, and whatnot, but not their own proprietary OS to protect?

        -No, I dont want to earn money.
        Yes, that's how company boards work when they evaluate business decisions. If you truly believe that, perhaps you need to go back to school. Or perhaps a mental institution.

        Do you believe this yourself? Dont you know that IBM sold off their huge PC division, becuase the margins were to low?

        And HP's CEO Apotheker wanted to sell off the PC division recently, why? It had 4 billion in revenue, and the volume was huge, HP was the greatest PC seller. Answer: Apotheker said the margins were too low.

        High volume, low margin is not good business.
        You're (intentionally?) misunderstanding my point. I'm not saying that high volume, low margin is necessarily a good business model, what I said was that high volume, low margin entrants will take a market away from low volume, high margin incumbents. As you brought up the PC market, it offers an excellent example of this effect:

        - PCs become good enough (protected memory, preemptive multitasking OSes such as Windows NT and Linux, GUIs, and so on) and destroy the Unix workstation market, forcing the incumbents to retreat from the market

        - x86 servers with Windows or Linux destroy the low-end Unix server market, forcing the incumbents to retreat into higher-end servers.

        - Intel and AMD keep releasing better and better x86-64 processors, forcing the incumbent Unix vendors into a smaller and smaller niche at the high end. Eventually the market is not going to be profitable enough for all of HP, IBM and Oracle.

        Vendor lock-in is not a factor, as long as you stay with the same vendor.
        I bow to your superior logic.

        We have two explanations why companies dont buy big Linux SMP servers. Which one is more reasonable?

        1) Linux can handle everything, both SMP and HPC. The reason Linux does not snatch the high end lucrative market worth billions, is because it is is not profitable, it is vulnerable. But Linux could take that market, if Linux companies wanted to. But they dont want that. They have better things to do, than to become millionaires. No Linux company does want to become the next IBM or Oracle or HP or Google or Apple. They dont want that. Why? (I dont know, ask the Linux supporters here).

        2) Linux and Windows dearly wants to go into the high end lucrative market, but Linux and Windows does not cut it. That is the reason no one makes big SMP Linux servers. Even if you recompile Linux on an existing big 64 cpu SMP server such as HP Unix Superdome, the biggest Linux configuration is for 8 cpus (16 cpus if you make a cluster of two 8-cpu nPars)

        Which explanation is more reasonable you think?
        I don't think either of those two is reasonable.

        Agreed, he might be biased. But, he also has said that Oracle database is most common on Solaris. Even before he bought Sun, Solaris was the prefered platform for the Oracle DB.
        I thought that preferred platform had changed to Linux at some point? Obviously after Oracle bought Sun and got Solaris among all the rest, they changed their marketing message in order to milk the product for every $ possible.

        If Linux is as good as you say, if Linux can replace big SMP Unix servers, why dont some Linux companies do it?
        Because none of the Linux companies are vertically integrated IT behemoths with hardware, storage, consulting and so on which is necessary to play at the very high end?



        • #34
          Originally posted by kazetsukai View Post
          Sun wasn't too bad, its Oracle who has made it their hobby to get in the worlds' face after acquiring them. They hopped on a sinking ship and are going the way of the SCO. Friggin trolls.
          Oracle is a little better in one respect: they're using and supporting Linux. However, overall I don't like Oracle. They killed OpenSolaris and they're copying Red Hat. Some people say their current Solaris is nothing more than a launcher for Oracle's DB. Sun at least made some interesting open source projects.

          As for this guy who's going on and on about scalability:
          The guy doesn't believe SGI systems are SMP ones, even though SGI says so and even though they're in the "scale up" category.



          • #35
            practical experience ??

            OMG, is this guy real? Does he have ANY practical experience?

            I work with HP-UX on Superdome/rx8640 systems (google them). They are ccNUMA. I also run Oracle/SAP workloads on them. They are 90% I/O limited. There are a few scenarios that take some CPU power, but those are rare.

            Imagine a 16-socket (64-core) system with 256 GB of memory and 8 FC ports to a 4 Gb SAN connected to the largest disk arrays you can imagine, running Oracle/SAP full bore. The CPUs almost never cross 30% load, yet the system has performance problems because of I/O. NO OS would scale well on this kind of workload, since the limits are external to the server (SAN storage).
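            To put rough numbers on that I/O ceiling, here is a sketch (assuming roughly 400 MB/s of usable bandwidth per 4 Gb FC port after encoding overhead; the figures are illustrative, not measured on this box):

            ```python
            # Back-of-the-envelope for the box described above: 8 FC ports to a 4 Gb SAN.
            # Assumption: ~400 MB/s usable per 4 Gb FC port (after 8b/10b encoding overhead).
            fc_ports = 8
            usable_mb_per_port = 400

            aggregate_mb_s = fc_ports * usable_mb_per_port   # ~3200 MB/s total
            per_core_mb_s = aggregate_mb_s / 64              # shared by 64 cores
            print(f"~{aggregate_mb_s} MB/s aggregate, ~{per_core_mb_s:.0f} MB/s per core")
            # With only ~50 MB/s of storage bandwidth per core, the CPUs sit mostly
            # idle waiting on I/O, which matches the ~30% CPU load described above.
            ```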

            HPC is the other extreme: I/O is limited to loading the working set, and then the CPUs are fed purely from in-memory data until the job finishes and the results are flushed to storage.

            The Linux storage subsystem needs work in the efficiency area. It has some hard problems under heavy I/O load, not to mention device management and LVM (device mapper and multipathing are a tragic affair).



            • #36
              Originally posted by kraftman View Post
              No, you don't.
              You're doing this. You compared a system with double the amount of RAM and a faster database, and that system was much more expensive, so the hardware was better overall. Why do you keep doing this every time?
              And you have done what I wrote before. Is this fair?
              Yes, I do compare hardware that is roughly equal; the Solaris hardware was not better. I have explained this to you many times, but I will repeat it again:

              -Solaris 48 core, 2.6GHz 6-core Opteron 8435, PC2-5300 RAM, Sun X4640, 256GB RAM
              -Linux 48 core, 2.8GHz 6-core Opteron 8439, PC2-6400 RAM, HP DL785G6, 128GB RAM

              The HP server can also use 256GB RAM. But in that case, HP would need to use slower PC2-5300 DRAM sticks, which would have lowered the HP result. Thus, HP chose to use the faster PC2-6400 DRAM sticks for a reason. Here is proof:

              When you look at page 15 of the QuickSpecs PDF for the HP DL785 G6, you will find the following note:
              When only PC2-6400 DIMM modules are installed with a processor, then memory bus speeds for 4 or fewer, 6, or 8 DIMMs per processor will operate at PC2-6400, PC2-5300 and PC2-4200 respectively. All other processor and memory configurations will operate at PC2-5300 with 4 or fewer DIMMs and PC2-4200 with ...
              The largest memory flavor with PC2-6400 is "8 GB REG PC2-6400 2 x 4 GB". The DL785 has 64 DIMM slots. To keep the memory bus at PC2-6400 speed, you can only populate 32 of them with 4 GB DIMMs. 32 x 4 GB = 128 GB.
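              A quick worked check of that 128 GB figure (a sketch; the 8-socket, 8-slots-per-socket layout is inferred from the 48-core configuration and the 64 slots mentioned above):

              ```python
              # Worked check of the "32 x 4 = 128 GB" figure above.
              sockets = 8                      # 48 cores with 6-core Opterons -> 8 sockets
              slots_per_socket = 8             # 64 DIMM slots in total
              max_dimms_at_pc2_6400 = 4        # per the QuickSpecs note: >4 DIMMs per CPU drops the bus speed
              largest_pc2_6400_dimm_gb = 4     # "8 GB REG PC2-6400 2 x 4 GB" kit

              dimms = sockets * max_dimms_at_pc2_6400          # 32 DIMMs populated
              capacity_gb = dimms * largest_pc2_6400_dimm_gb   # 128 GB
              print(f"Max RAM while keeping PC2-6400 speed: {capacity_gb} GB")
              ```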

              These world-record benchmarks are very important, and vendors work on them for a long time. For instance, Oracle worked at least one year on the recent TPC-C world record. One year in advance, Larry Ellison said Oracle would present a "double digit" TPC-C world record, and one year after he said that, Oracle presented the TPC-C world record, of which Oracle is very proud.

              Are you suggesting that HP's benchmarking team did not try different configurations, did not try different databases, did not try recompiling the Linux kernel with different optimizations? That HP's specialist team is incompetent and did not work hard for the world record? That they just slapped on stock Linux, picked any database, and then proclaimed the HP result after running the SAP benchmark once?

              Thus, I compared a slightly slower Solaris server to a faster Linux server. I have listed both servers' specs for you many times; you know that Linux used faster CPUs and RAM, and still, even today, you say:
              Originally posted by kraftman View Post
              the [Solaris] hardware was better overall. Why do you keep doing this every time? Is this fair?
              Question 5) Do you think that PC2-5300 DRAM is faster than PC2-6400 DRAM? Do you think that a 2.6GHz 6-core Opteron model 8435 is faster than a 2.8GHz 6-core Opteron model 8439? Is it unfair to compare PC2-5300 vs PC2-6400?





              Great, but we're talking about [Westmere-EX] 40 cores compared to [Opteron] 48 cores. Do you have something like that?
              You showed a new Linux SAP benchmark which is faster, and you compared this new Linux Westmere-EX benchmark to the old Solaris Opteron benchmark. Linux uses only 40 cores (4 CPUs) and Solaris uses 48 cores (8 CPUs), and still Linux wins. You said it is a fair benchmark.

              Well, it is not fair. SAP benchmarks show that a Westmere-EX CPU is about 3x faster than the old Opteron. If you have 40 Westmere-EX cores (4 CPUs), then I need about 3x as many Opteron CPUs, that is 12 CPUs or 72 Opteron cores, to match that even in SAP.
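              A rough sketch of that normalization (the 3x per-CPU factor is the claim above, not a number measured here):

              ```python
              # Per-CPU normalization behind the "72 Opteron cores" figure above.
              # Assumption: one 10-core Westmere-EX CPU ~ 3x one 6-core Opteron in SAP (as claimed above).
              westmere_cpus = 40 // 10         # 40 cores, 10 cores per CPU -> 4 CPUs
              per_cpu_factor = 3.0

              equivalent_opteron_cpus = westmere_cpus * per_cpu_factor     # 12 CPUs
              equivalent_opteron_cores = equivalent_opteron_cpus * 6       # 72 cores
              print(f"~{equivalent_opteron_cores:.0f} Opteron cores to match 40 Westmere-EX cores")
              ```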

              But the Solaris server has only 48 cores, and thus the Westmere-EX result is expected to be much higher than the Solaris result. So you are comparing faster Linux hardware to slower Solaris hardware, just as you always have. It is not fair of you to compare the latest Linux Westmere-EX server to an old Solaris Opteron server. And as we have seen, on the same number of cores, Solaris is faster than Linux.





              It is well known much more expensive hardware uses better components.
              Not always. If you look at my Armani clothes, do you think that the wool and cotton are 50x better than other wool and cotton? It is a matter of branding. So you are not correct in this.

              When we talk about the enterprise market, what costs money is not performance. An enterprise server manufacturer selects maybe the top 10% of hardware components in terms of RELIABILITY. There is a lot of testing, and Oracle uses only the most reliable 10% of components. That is the reason exactly the same memory costs much more: it is more tested and more reliable. For instance, Apple RAM costs much more in an Apple store, but that does not mean that Apple's PC2-6400 is faster than other PC2-6400 RAM sticks. It is more tested and more reliable, but it is the same component.

              I have already explained that the reason IBM can charge millions of USD for mainframes is that they are much more reliable, not that they are faster. A mainframe has every component doubled or even tripled. Every calculation is done twice to see if something went wrong, etc. That is what costs money, not performance. CPU-wise, a mainframe is slower than an x86. But x86 is very buggy, and there are sysadmins who refuse to use x86 CPUs; they are too buggy, they say.

              The enterprise considers reliability more important than performance. Slow and safe is better than fast and unstable when we talk about enterprise companies that handle billions of USD in business every day. I have explained this many times. This is the same reason some sysadmins refuse to use ZFS: it is too young yet. It is only 10 years old, they say.





              Regarding my question 1), I asked why you say that Solaris is bloated, and you prove that Solaris is bloated like this:
              Its bloat is one of the reasons slowlaris loses in benchmarks. I explained to you about Linux. When it comes to slowlaris, while its highly optimized binaries are 30% slower compared to standard Linux ones, it suggests slowlaris is hugely bloated.
              Well, thank you for your opinion.

              Then you actually present a link! Marvelous that you do that.
              I don't FUD.
              OSDIR covers the entire spectrum of technology and brings its readers closer to the world of digital experience.



              Solaris developer says:
              "There are things in Solaris that are slower than other
              operating systems. We haven't spent a lot of time optimizing small
              process fork performance, and many of our very short cmds do
              more localization processing on startup than is actually necessary."

              So, even a person from "Solaris Kernel Performance" agrees with me.
              No, this is not correct. The Solaris developer does not agree with you. He does not say "Solaris is bloated"; he says that there are some things slower in Solaris than in other OSes. This is to be expected. Solaris cannot be faster at everything; it must be slower at some things. Solaris is faster at large scale, with many CPUs and disks, but probably slower on a single CPU with a single disk, etc. This does not prove that Solaris is bloated; it shows that Solaris is made for the enterprise, while Linux focuses on few CPUs and few disks, that is, the desktop.

              Your link is about finding a bug in Solaris:
              Solaris kernel dev:
              "Getting a core dump via gcore(1) of the running process would really help us to diagnose and correct the problem."
              So it is not about "Solaris is bloated"; it is about a bug in Solaris. I bet there are bugs in Linux too, but a single bug does not prove that Linux is bloated. A single bug is not proof of bloat.

              OTOH, Linus Torvalds himself has said that "Linux is bloated"; do you want to see that link? Andrew Morton has said that Linux code quality is low; do you want to see that link? I hope you consider Linus Torvalds and Andrew Morton to be credible? Or do you consider them trolls, FUDers and liars?

              Do you have any credible link to a Solaris dev saying that Solaris is bloated, of low quality, or something similar?






              Regarding my question 2), about why you always say that Oracle is killing Solaris: you prove you are correct with this:
              Indeed. They're going to kill the crap. btrfs will be used as the default one in Oracle's Linux, and btrfs can simply replace zfs. Why should they keep legacy slowlaris with 30% slower binaries when they can use faster Linux?
              Well, this is not a link. Do you have any links at all, or did you make this up?





              Regarding my question 3): you have said many times that Solaris is slow, and I have asked you to prove this on similar hardware. It need not be identical hardware, just similar. Your answer:
              That proves you don't accept some arguments, but at the same time you want others to accept yours, which are similar. I don't care about slowlaris records, because Linux has many records.
              Yes, of course Linux has many records: on big clusters, and when 3x faster hardware is compared against other OSes.

              But do you have any links on similar hardware? Any links at all?




              Regarding my question 4), do you FUD? I said, "You say again that Solaris is bloated. Again, prove this." Your answer:
              In short I'm saying truth about slowlaris.
              Well, if you are telling the truth, then you can show links. So please post those credible links.





              You said many times that I FUD, but you didn't prove it. Some people have already proven you FUD.
              Here is proof that you FUD. Here is a guy called Kraftman who says that you FUD. I hope you consider Kraftman to be credible, or do you say that he is a FUDer, so you will not trust him when he says you FUD?

              I also was FUDing sometimes (according to wikipedia)
              And here is another proof. You know that Solaris scales up to at least 106 CPUs, but you still refuse to admit that Solaris scales beyond 64 CPUs. If anyone asks you about Solaris, you would say "it only scales to 64 cpus", even though you know Solaris goes up to at least 106 CPUs. This is clearly FUD: lying and twisting the truth.

              "Solaris can scale only up to 64 physical CPUs...I also don't care I could buy a 106CPUs server years ago"





              It was proven you FUD.
              Question 6) Where was it proven? I have never seen any such proof. I have asked you many times but never seen any proof that I FUD and lie. Where are my lies? Once again, can you quote me where I lie? Any links?

              I have credible links showing that you FUD; now you can show credible links that I FUD.





              Thankfully we know SMP can be done on HPC systems. We also know the biggest SMP (NUMA, actually) systems are running Linux. This proves Bonwick did FUD, like one of the Linux devs has said.
              No, we don't know that SMP can be done on HPC systems. Why do you believe this? Any links? We do know this:
              HPC => SMP (HPC workloads can be run on SMP systems)
              But the reverse is not true:
              SMP => HPC (SMP workloads can not, in general, be run on HPC systems)

              Thus there is no equivalence; this does not hold:
              HPC <=> SMP
              because in that case the market would not distinguish between SMP and HPC. The market would never talk about HPC or SMP; it would only talk about big servers. But the market does make a distinction between HPC and SMP, so there is a difference between them.

              The SGI Altix servers are referred to as HPC systems; I have shown links on this and can show them again. Then a random guy came in here and claimed Altix is SMP, but he never showed any links. So no, Altix is still HPC.





              Then I claimed there are no big SMP Linux servers for sale on the market, and you answered:
              http://www.zdnet.co.uk/news/emerging...sors-39184546/
              We know you're wrong. It was a NUMA system and you learnt something new about this. Instead of Big Tux it's enough to mention SGI to show how scalable Linux is.
              Another time, so much text wasted.
              This is the same Big Tux, the HP Unix Superdome with 64 Itanium CPUs, that I talked about earlier. The same Linux server where a Linux nPar is at most 8 CPUs. Thus, you can only install Linux virtual machines on this Big Tux server with at most 8 CPUs, or 16 CPUs if you cluster two 8-CPU Linux nPars. You cannot use Linux with all 64 CPUs on this server.

              Of course HP has tried installing Linux using all 64 CPUs, but how well did 64-CPU Linux work? Not well at all. Out of 64 CPUs, HP only got the performance of about 26, which means a low 40% scaling efficiency. This is lower than the 87% CPU utilization that Linux had in SAP; Solaris had 99% CPU utilization in SAP.
              Performance improved by a factor of 26 when all 64 Itanium 2 processors were used.
              Maybe a low 40% scaling efficiency is the reason HP does not sell 64-CPU Linux? This could be why the Big Tux Linux server only allows 8 CPUs at most: as more CPUs are used, the scaling drops off, down to 40%. You cannot use Big Tux Linux with 64 CPUs. 40% proves that Linux scales badly on this Big Tux SMP server. Thank you for helping me, and thank you for that link.
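              For reference, the arithmetic behind that 40% figure (a quick sketch based on the speedup factor quoted above):

              ```python
              # Scaling efficiency implied by the quote above: a factor of 26 on 64 CPUs.
              cpus = 64
              speedup = 26

              efficiency = speedup / cpus
              print(f"Scaling efficiency: {efficiency:.0%}")   # ~41%
              ```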



              • #37
                Originally posted by jabl View Post
                But no, I don't think, and I've never claimed, that ext4 has some über-good scalability. The SGI Altix systems, FWIW, use XFS AFAIK. XFS has been designed to perform well on the workloads SGI customers run.
                And, as we know, XFS is not safe. One guy did an XFS fsck in 20 minutes on his 16 TB RAID, and that is bad. It would mean that XFS checked something like 13,000 MB/s, which is not possible. Thus, XFS fsck does not check all the data. Do not trust fsck.
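                The back-of-the-envelope behind that throughput number (a sketch, assuming the fsck would have to read the full 16 TB in those 20 minutes):

                ```python
                # Implied read rate if fsck really checked all 16 TB in 20 minutes.
                tb = 16
                minutes = 20

                mb = tb * 1_000_000              # 16 TB in MB (decimal units)
                seconds = minutes * 60
                throughput_mb_s = mb / seconds
                print(f"Implied read rate: ~{throughput_mb_s:,.0f} MB/s")   # ~13,300 MB/s
                ```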

                And other researchers have shown that XFS does not catch data corruption.





                And? Most developers, Linux or not, use desktops. Scalability comes from design, and validating the finished design by benchmarks on relevant systems. That doesn't require every developer to have refrigerator sized boxes sitting by their desks.
                The point is that Linux kernel devs do not have access to bigger SMP servers. That is the reason Linux does not handle big SMP servers well.



                Uh, what? Much as I prefer free software and Unix-like systems, claiming that Windows server is anything but a spectacular success makes one wonder which planet you live on. Here on planet earth, Windows is one of the most common server OS.
                Yes, you are correct that Windows is a common server OS. But I was talking about big servers, and there are no big Windows servers. You talk about desktops and small servers; I talk about big servers.




                It's an example of the same reasoning that you used to claim that Linux doesn't scale ("most Linux developers are not working on scalability"). Sounds ridiculous, doesn't it?
                No, it is not the same reasoning. Your logic is flawed.

                I am saying that Linux developers don't have access to big SMP servers, because there are no such servers on the market. I am not saying that Linux developers choose not to work on scalability; the reason is that they cannot, not that they do not want to.

                Ted Ts'o agrees with me on this. As we saw, he said most Linux devs have not had access to 32-core machines, which are considered exotic hardware. Thus I am proving my point, and your example about the graphics stack and scalability is totally irrelevant.





                No different from saying that companies in the HPC market should switch to the large DB server market because the margins are better, as you claimed.
                Do you think it is strange of me to say that companies should earn gazillions more money by switching to a more lucrative market? You claim that Altix HPC servers can do SMP very easily, so why doesn't SGI slap on stock Linux and earn a few billions? Is this a strange question from me? No, it is not strange. It is strange that SGI does not do it.

                If there is gold lying on the road, why doesn't anybody pick it up? Is it strange to ask that question?



                No sign of vulnerability? Why did Sun go down the drain then?
                The high-end, high-margin enterprise market is not vulnerable. The reason Sun went down the drain is that Sun lost that market! If Sun still had those big contracts and earned billions of USD, do you really think Sun would have gone down the drain?




                Once again, Linux doesn't "choose" anything or go anywhere. Linux is just software contributed to by various people and organizations with very different motivations. Can you specify some specific person or organization that you think should have a go at the high-end DB server market with Linux (assuming for the sake of argument that Linux is technically up to the task)?
                Are you kidding? Red Hat, SUSE, etc. claim Linux is for big enterprise and want to get the big bucks. Red Hat advertises its Linux as Enterprise, good for the most demanding tasks.

                But still, Red Hat has not gone for the big-bucks SMP market. Why?

                Can you name some then? Companies that have the hardware, storage, consulting and other services, and whatnot, but not their own proprietary OS to protect?
                See above.




                Yes, that's how company boards work when they evaluate business decisions. If you truly believe that, perhaps you need to go back to school. Or perhaps a mental institution.
                Well, if you believe company boards would not want a piece of a highly lucrative billion-dollar market, if you don't believe that companies want to become the next IBM or Oracle or Apple or Google, then maybe it is you who should go back to school?




                You're (intentionally?) misunderstanding my point. I'm not saying that high volume, low margin is necessarily a good business model, what I said was that high volume, low margin entrants will take a market away from low volume, high margin incumbents.
                To this I agree. I have never denied that Linux and Windows are eating from below.

                Maybe it is you who misunderstands me. I am claiming that if Windows could, Windows would love to eat into the high end too and earn billions of dollars. The same goes for Linux. But this does not happen. Why not? Because "they don't want to"? Or is it because they cannot, for technical reasons?

                The reason Windows and Linux don't go after the lucrative high-end market is that they cannot get any contracts, and therefore they don't earn money there. No one dares to trust Windows on stock exchanges anymore. There is no point in Windows trying that business anymore.




                I bow to your superior logic.
                I am trying to explain that if companies are afraid of vendor lock-in, they can migrate from the Oracle database on Solaris to the Oracle database on Linux. The same applies to IBM's DB2 database: you can migrate from DB2 on AIX to DB2 on Linux. It is quite easy to do if you stay with the same database. Thus, there is no reason to be afraid of OS vendor lock-in. There is database vendor lock-in.

                But in this discussion, that is not relevant. Because we are discussing OSes, not databases. And you can freely switch between OSes, as long as you stay with the same database.




                I don't think either of those two is reasonable.
                So which explanation do you think is reasonable? If Linux can do big SMP easily, as you claim, why don't Linux companies do it? What is your explanation?




                I thought that preferred platform had changed to Linux at some point? Obviously after Oracle bought Sun and got Solaris among all the rest, they changed their marketing message in order to milk the product for every $ possible.
                No, Oracle has not changed its message. A long time ago, Oracle said that Oracle Database is deployed more on Solaris than on any other OS. Oracle also said that Solaris is its premiere platform. I remember this very well, because Sun bragged a lot about it. Now Oracle just continues with the same message.




                Because none of the Linux companies are vertically integrated IT behemoths with hardware, storage, consulting and so on which is necessary to play at the very high end?
                This could be a possible explanation. But Red Hat, for instance, is not small. Red Hat has thousands of employees, and much of the important software is open source. Red Hat has the capacity to do this, but they don't. Why?




                Look, if Linux can do everything that mature Unix can do, why doesn't Linux do it? Just slap Linux onto an SGI Altix server and get 50x more performance for a fraction of the price? But no. Why?

                Something does not add up. Something is fishy, don't you see? Is it strange to ask this question?



                • #38
                  Originally posted by haplo602 View Post
                  OMG, is this guy real? Does he have ANY practical experience?

                  I work with HP-UX on Superdome/rx8640 systems (google them). They are ccNUMA. I also run Oracle/SAP workloads on them. They are 90% I/O limited. There are a few scenarios that take some CPU power, but those are rare.

                  Imagine a 16-socket (64-core) system with 256 GB of memory and 8 FC ports to a 4 Gb SAN connected to the largest disk arrays you can imagine, running Oracle/SAP full bore. The CPUs almost never cross 30% load, yet the system has performance problems because of I/O. NO OS would scale well on this kind of workload, since the limits are external to the server (SAN storage).

                  HPC is the other extreme: I/O is limited to loading the working set, and then the CPUs are fed purely from in-memory data until the job finishes and the results are flushed to storage.

                  The Linux storage subsystem needs work in the efficiency area. It has some hard problems under heavy I/O load, not to mention device management and LVM (device mapper and multipathing are a tragic affair).
                  OK, this is interesting. You claim that Linux has bad I/O, just as someone here claimed earlier? Maybe that could be the reason?

                  Here is a storage expert who also says that Linux has bad I/O and does not cut it:
                  I am frequently asked by potential customers with high I/O requirements if they can use Linux instead of AIX or Solaris. No one ever asks me about ...

                  For this, he was flamed. Of course.

                  Here is the follow-up article, answering the Linux flames:
                  My article three weeks ago on Linux file systems set off a firestorm unlike any other I've written in the decade I've been writing on storage and ...



                  Here is another link: he tries to see whether Linux can handle big filesystems, but no Linux vendor wants him to run this benchmark. Why? Do the vendors believe that Linux has bad I/O?
                  We failed. Jeff and I tried and tried to get the hardware to run the file system tests that we had written about. Originally, Jeff had all of the hardware ...



                  And that is the reason there are no big SMP Linux servers on the market? Because of bad I/O scaling? Does anyone have more info on this?
                  Last edited by kebabbert; 20 November 2011, 12:21 PM.



                  • #39
                    Originally posted by kebabbert View Post
                    Vendor lock-in is not a factor, as long as you stay with the same vendor.
                    Awesome logic is awesome. You are my hero.



                    • #40
                      Originally posted by cldfzn View Post
                      Awesome logic is awesome. You are my hero.
                      Again, I admit I wrote something fuzzy, and I clarified what I actually meant. And, again, I meant this:
                      As long as you stay with the same database, you can switch OS almost freely.
                      Thus, OS lock-in is not important to this discussion. You can run Oracle DB on Solaris or on Linux. You can run IBM DB2 on AIX or on Linux. As long as you stay with the same database, the OS is not a factor.

                      Maybe you did not read my clarifying post?

