Netbook Performance: Ubuntu vs. OpenSolaris


  • #21
    Originally posted by jollyd View Post
    As in every OS. Data corruption has occurred with Linux and every other OS from time to time when a driver is broken. The issue is when you can't take another version and compile it against your kernel, because the interfaces changed from one minor release to another.
    The point is that Linux is released every 3 months, so fixes usually come very quickly and patches are backported to previous releases (whether distros do this or the kernel devs).

    I never got 2 wifi cards (Linksys, to be exact) working under Debian GNU/Linux, although they were advertised as working for the same major version.
    Maybe it was patched and working under distribution X, but it won't work under Y.
    Sure, it works 90% of the time, but what is the potential issue for the other 10%, in particular when you can't trace interface changes without being an expert (which I'm not)?
    I just wanted to say that a single driver marked as stable can be broken on every OS.

    You can always ask at lkml. I have no knowledge about tracing etc. so I can't argue here.

    I think it's a matter of the granularity of the development model. The truth stands in the middle.
    Yes, well said, but this one is even mentioned in the FAQ. However, it can be different in current Solaris releases.

    I don't write bullshit; just figure that I didn't express my thought correctly. I dislike it when the parallel port driver is broken... then I upgrade and wifi stops working, then I upgrade and USB freezes with my USB key (and so on)...
    I spent many hours on such problems. I prefer losing a few microseconds on thread creation and avoiding spending 2 hours figuring out which combination of kernel version and driver version is optimal. Moreover, I had in mind that exporting statistics with kstat is nice.
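    For example (just a rough sketch, assuming libkstat's documented interface; the "unix:0:system_misc:nproc" statistic is only an illustrative choice), reading a kernel statistic on Solaris looks something like this:
    ```c
    /* Rough sketch: read one kernel statistic through libkstat on Solaris.
     * Build with: cc kstat_demo.c -lkstat
     * The module/name/statistic used here is only an example. */
    #include <stdio.h>
    #include <kstat.h>

    int main(void)
    {
        kstat_ctl_t *kc = kstat_open();          /* open the kstat framework */
        if (kc == NULL) {
            perror("kstat_open");
            return 1;
        }

        /* unix:0:system_misc holds assorted system-wide counters */
        kstat_t *ksp = kstat_lookup(kc, "unix", 0, "system_misc");
        if (ksp != NULL && kstat_read(kc, ksp, NULL) != -1) {
            kstat_named_t *kn = kstat_data_lookup(ksp, "nproc");
            if (kn != NULL)
                printf("nproc = %u\n", kn->value.ui32);
        }

        kstat_close(kc);
        return 0;
    }
    ```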
    The same happens on other OSes. It depends on the distro, because they have the last word on which kernel should be used (if there are known regressions they should choose a good one). The other point is that I never had such problems (as far as I remember, I never had a single problem due to a regression on Linux).
    As for swapping issues: my friends and I were stunned... we tested 5 kernel versions and compared with Mac OS X, FreeBSD and Solaris. Debian couldn't stand the test and even ssh was unavailable. At work, some nodes of the cluster had the same issue, and we had network connectivity problems that the administrators couldn't diagnose. An update solved the network problem but not the swapping issue.
    I admit it may be a Debian-specific problem.
    It depends on which Debian release was tested. It wouldn't be fair to compare a current BSD or Solaris release against an old Debian stable.

    On my laptop and desktop under Debian I have problems with USB mass storage: it couldn't handle more than one device without freezing.

    I gave a few examples of drivers that caused issues under Linux and no problems under BSD or Solaris.
    I didn't mean to judge the quality of the code, but rather to say that I fear the uncertainty of a driver working correctly at time $t$ and not at time $t + \delta t$.
    I'm writing about my experience, which may be different from yours.
    Yeah, I already said I never had such problems, so those are different experiences.

    Seems to me that Linux and GNU use a superset of POSIX.
    It's not clear to me whether default behaviour is the correct one.
    I think linking /bin/sh to /bin/bash was not a fortunate choice.
    It's probably a distro choice, but I don't know the details and I am happy with bash, so it's just a personal feeling.

    Agreed; don't take it as criticism (it isn't), but I'm expressing the fact that I'm satisfied with brandz.
    A positive for one is not a negative for the other.

    Best regards,

    a.
    Thanks for the good explanation. I should take your previous post as a distro-to-distro comparison rather than Solaris-to-Linux. I also didn't clear up many things in my previous post, but it seems you have already done this. Sorry for the many, many spelling mistakes.

    Please, any ideas for the next iteration?
    You forgot Macs
    Last edited by kraftman; 18 July 2009, 04:41 AM.



    • #22
      I know several people who switched from Linux to OpenSolaris, mainly because of ZFS. With other solutions, your data is at risk because of silent corruption. The problem is when your hardware silently corrupts the bits, without telling you and without noticing it itself. Then you need ECC functionality. ECC scans and corrects flipped bits in RAM. ZFS does the same thing on disk. And hardware RAID doesn't help against the problem of flipped bits, because HW RAID won't even notice that a bit has been flipped. You need ECC to detect that. No filesystem utilizes ECC, except ZFS. The larger the discs, the more probable it is that some bits get corrupted. Also, ZFS doesn't need fsck, and Linux fsck only checks the metadata; the data itself is not checked.
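
      To illustrate the idea (only a toy sketch in the spirit of ZFS; real ZFS keeps a Fletcher or SHA-256 checksum in the parent block pointer and can repair the block from a redundant copy, while this sketch only detects the damage): every read is verified against a checksum that was stored separately when the block was written.
      ```c
      /* Toy sketch of end-to-end block checksumming, ZFS-style detection only.
       * Real ZFS stores a Fletcher/SHA-256 checksum in the parent block pointer
       * and can self-heal from mirrors/RAID-Z; this just shows the detection. */
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      /* Simplified Fletcher-style checksum over 32-bit words. */
      static uint64_t checksum(const uint32_t *words, size_t nwords)
      {
          uint64_t a = 0, b = 0;
          for (size_t i = 0; i < nwords; i++) {
              a += words[i];
              b += a;
          }
          return (a << 32) ^ b;
      }

      int main(void)
      {
          uint32_t block[1024] = { 0 };
          strcpy((char *)block, "important data");

          uint64_t stored = checksum(block, 1024);   /* saved alongside the block pointer at write time */

          block[3] ^= 1;                             /* one silently flipped bit, e.g. bad disk, cable or RAM */

          if (checksum(block, 1024) != stored)
              puts("silent corruption detected on read");
          return 0;
      }
      ```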


      One of the best texts on why to use ZFS, on future filesystems, and on the new problems hyper-modern filesystems face. A new disc has 20% of its surface dedicated to error-correcting codes, and still there are errors!



      CERN investigates silent corruption and presents very interesting conclusions:



      It is time to abandon RAID-5, because the discs are so big now that very often bits will be silently flipped. The larger the RAID, the more flipped bits; eventually you will always see flipped bits. Only ZFS fixes this problem:



      ECC RAM

      "Electrical or magnetic interference inside a computer system can cause a single bit of DRAM to spontaneously flip to the opposite state."






      Other things I like about OpenSolaris are that it is more robust than Linux, it scales better, etc. Solaris is an OS for big computers and big loads. On the desktop it doesn't shine.

      For instance, you don't face code of varying quality in Solaris, something Linux kernel developer Andrew Morton complains about in Linux:

      Q: Is it your opinion that the quality of the kernel is in decline? Most developers seem to be pretty sanguine about the overall quality problem. Assuming there's a difference of opinion here, where do you think it comes from? How can we resolve it?

      A: "I used to think it was in decline, and I think that I might think that it still is. I see so many regressions which we never fix."

      That is why we see people switching from Linux to Solaris:





      Linux RAM overcommit is not a good strategy:
      Last week I learned something very interesting about the way Linux allocates and manages memory by default out of the box. In a way, Linux a...



      Linux doesn't scale well when used as a file server:
      I am frequently asked by potential customers with high I/O requirements if they can use Linux instead of AIX or Solaris. No one ever asks me about



      And the lack of a stable API/ABI is a bit of a pain. Linux is a moving target; your old drivers will not necessarily work. With Solaris it is a different thing: the API and ABI have been frozen since way back. Sun guarantees binary compatibility back to Solaris v2.6, and now Solaris is v5.10.



      • #23
        Other things I like about OpenSolaris are that it is more robust than Linux, it scales better, etc. Solaris is an OS for big computers and big loads. On the desktop it doesn't shine.
        You mean file systems, right? That's why btrfs is being developed, and in MySQL Solaris sucks a lot. When it comes to pure CPU scaling and big loads, nothing stands a chance against Linux. Read about RCU, or better, about hierarchical RCU, which is patented technology :> Linux is robust, and probably that's why you see higher memory usage here:

        http://opsmonkey.blogspot.com/2007/0...vercommit.html *

        Life has actually verified everything, and there's no Sun propaganda anymore.

        For instance, you don't face code of varying quality in Solaris, something Linux kernel developer Andrew Morton complains about in Linux:
        Wrong. Many Solaris features are/were just for advertisements (read about 'new' features which cause instability and performance penalties - it doesn't follow KISS like Linux does; some of the reasons why commercial Unixes died). Its code is a big pile of legacy bull :>

        Linux RAM overcommit is not a good strategy:
        Why? Maybe thanks to this, Linux creates threads much faster (*), and that's why it scales better than Solaris? I'm sure I read about this not so long ago and it was explained; I will try to find it. P.S. It's even mentioned in the link you provided - forked processes and threads are treated like processes - creation is incredibly fast.
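        A quick way to check this yourself is a rough benchmark like the sketch below (nothing scientific; the absolute numbers depend entirely on the machine and the threading library, so treat it as an illustration only):
        ```c
        /* Rough benchmark sketch: time the creation of N threads.
         * Build with: cc -O2 thr.c -lpthread (add -lrt on older systems
         * where clock_gettime lives in librt). */
        #include <pthread.h>
        #include <stdio.h>
        #include <time.h>

        #define N 1000

        static void *worker(void *arg) { return arg; }

        int main(void)
        {
            pthread_t tid[N];
            struct timespec t0, t1;

            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (int i = 0; i < N; i++)
                pthread_create(&tid[i], NULL, worker, NULL);
            clock_gettime(CLOCK_MONOTONIC, &t1);

            for (int i = 0; i < N; i++)
                pthread_join(tid[i], NULL);

            double us = (t1.tv_sec - t0.tv_sec) * 1e6 +
                        (t1.tv_nsec - t0.tv_nsec) / 1e3;
            printf("created %d threads in %.0f us (%.1f us/thread)\n", N, us, us / N);
            return 0;
        }
        ```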

        Linux doesn't scale well when used as a file server:
        First paragraph.

        A: "I used to think it was in decline, and I think that I might think that it still is. I see so many regressions which we never fix."
        Meaningless. We don't know how it is in Solaris, and what/where are those regressions? Old drivers, etc.? And the situation changes.

        And the lack of a stable API/ABI is a bit of a pain. Linux is a moving target; your old drivers will not necessarily work. With Solaris it is a different thing: the API and ABI have been frozen since way back. Sun guarantees binary compatibility back to Solaris v2.6, and now Solaris is v5.10.
        I think there's nothing to argue about; Linux "guarantees" compatibility with binaries from... '91 :>

        Linus talked about how nice it is to know that we can still run binaries from 1991
        http://lwn.net/Articles/298510/
        Last edited by kraftman; 20 July 2009, 04:44 PM.



        • #24
          Originally posted by kraftman View Post
          You mean file systems, right? That's why btrfs is being developed, and in MySQL Solaris sucks a lot. When it comes to pure CPU scaling and big loads, nothing stands a chance against Linux. Read about RCU, or better, about hierarchical RCU, which is patented technology :> Linux is robust, and probably that's why you see higher memory usage here:

          http://opsmonkey.blogspot.com/2007/0...vercommit.html *

          Life has actually verified everything, and there's no Sun propaganda anymore.

          Wrong. Many Solaris features are/were just for advertisements (read about 'new' features which cause instability and performance penalties - it doesn't follow KISS like Linux does). Its code is a big pile of legacy bull :>

          Why? Maybe thanks to this, Linux creates threads much faster (*), and that's why it scales better than Solaris? I'm sure I read about this not so long ago and it was explained; I will try to find it. P.S. It's even mentioned in the link you provided - forked processes and threads are treated like processes - creation is incredibly fast.

          First paragraph.

          Meaningless. We don't know how it is in Solaris, and what/where are those regressions? Old drivers, etc.? And the situation changes.

          I think there's nothing to argue about; Linux "guarantees" compatibility with binaries from... '91 :>

          http://lwn.net/Articles/298510/
          I don't really agree with you. You should read the comments at the bottom of your last link. People complain that Linux has bad compatibility; they don't agree with Linus saying '91. And there are also lots of articles talking about Linux's problem with an undefined and unstable ABI. I understand that you believe Linux has defined and frozen ABIs, but there are people who don't agree with you. Just google a bit for "unstable ABI Linux". For instance:


          "the incompatibility between different stable point versions of the kernel hampers the Driver on Demand concept. You could compile a driver for 2.6.5 and it would probably not work on 2.6.10 if you simply loaded the precompiled binary module; you would need to recompile the driver for each kernel version."

          We are talking about an unstable ABI here. Maybe Linux has "stable" APIs, I don't know. But if Linux has stable APIs, then they allow Linux to run old programs. Device drivers, however, require a stable ABI. The preferred state is both stable APIs and ABIs.
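
          To make this concrete (a minimal out-of-tree module sketch, assuming the usual kbuild workflow; the "hello" module is purely hypothetical): even a trivial driver like this has to be rebuilt against the headers of each kernel it should load on, because the resulting .ko carries the vermagic of that exact build.
          ```c
          /* hello.c - minimal out-of-tree Linux kernel module (illustration only).
           * Built with kbuild against one specific kernel tree, e.g.:
           *   make -C /lib/modules/$(uname -r)/build M=$PWD modules
           * The resulting hello.ko records that kernel's vermagic, so a different
           * kernel version will normally refuse to load it until it is rebuilt. */
          #include <linux/module.h>
          #include <linux/init.h>
          #include <linux/kernel.h>

          static int __init hello_init(void)
          {
              printk(KERN_INFO "hello: loaded\n");
              return 0;
          }

          static void __exit hello_exit(void)
          {
              printk(KERN_INFO "hello: unloaded\n");
          }

          module_init(hello_init);
          module_exit(hello_exit);
          MODULE_LICENSE("GPL");
          ```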





          Regarding Sun propaganda about bad Linux scaling, maybe you have read this link? Linux scaling experts say the talk about Linux scaling badly is only FUD from evil Unix vendors. Linux scales very well, they say.

          They talk about Linux scaling to 10,000 Intel processors and yadda yadda. But 10,000 Intel CPUs is just a cluster, a bunch of PCs. That is called horizontal scaling: you add a PC to the cluster and Linux scales (with a stripped-down, tailored kernel). On Big Iron, one machine with several CPUs, Linux scales badly - this is vertical scaling. The experts admit that in Linux v2.6 Linux will scale vertically up to... 16 CPUs. Whereas Solaris scales vertically to hundreds of CPUs today on Big Iron. You know, there is no Big Iron with 10,000 CPUs. When you talk about that many CPUs, you always talk about clusters. Which the scaling experts admit:
          "With the 2.6 kernel, the vertical scaling will improve to 16-way. However, the true Linux value is horizontal scaling."

          Just recently, in Linux v2.6.27, the kernel is no longer 250 times slower on some tasks on 64-CPU machines. Earlier, the kernel was 250 times slower. I don't know how many problems there still are that haven't been fixed.
          Summary of the changes and new features merged in the Linux Kernel during the 2.6.27 development

          "page fault speedup of 250x on a 64 way system have been measured"

          You know, it takes decades to scale well. Linux has only recently started on that. Solaris has been doing it for decades.




          Regarding my link where Andrew Morton complains about bad quality: you just refuse to accept that link and call it "meaningless", where Andrew talks about people needing to submit better code and test it. Well, maybe you know better than Andrew Morton; maybe he is wrong. And maybe you know better than these Linux kernel developers too:
          http://kerneltrap.org/Linux/Active_Merge_Windows


          "the source tree breaks every day, and it's becoming an extremely non-fun environment to work in. We need to slow down the merging, we need to review things more, we need people to test their [...] changes!"

          Regarding your not knowing what's in the Solaris code: well, why don't you take a look? It is open source, and available online. The gurus Kernighan and Ritchie and those people studied the Linux code and said it was not that good, in fact.



          Then you write:
          "Many Solaris features are/were just for advertisements (read about 'new' features which cause instability and performance penalties - it doesn't follow KISS like Linux does). Its code is a big pile of legacy bull." So the Solaris code is apparently bad, you say. At the same time you write:
          "We don't know how [the code] is in Solaris."
          You know the code is bad, but you don't know how the code is? It seems that I have misunderstood you.



          About Linux creating threads fast: that is not the same thing as scaling well. There are several Unix companies that switched to Linux and then had to switch back to Solaris, because Linux doesn't cut it under high loads. For instance this article:
          My article three weeks ago on Linux file systems set off a firestorm unlike any other I've written in the decade I've been writing on storage and





          Regarding my link about Linux RAM overcommit, you state it is a good thing because it allows Linux to create threads quickly. You are surely joking. No sane person would prefer to have random processes killed on a Linux server when memory runs out. What happens if a Linux process is very important and has been running a calculation for a few weeks? On Solaris, random processes are not killed as on Linux; instead Solaris refuses to allocate more memory. All Solaris processes are allowed to finish. Linux over-allocates more memory than is available, and when the memory is finally needed, random Linux processes are killed. That is hardly optimal. You are joking.
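
          You can see the difference in behaviour with a small sketch like the one below (only an illustration; the exact outcome depends on RAM, swap and the vm.overcommit_memory sysctl - with the default heuristic the allocation tends to 'succeed' and the OOM killer strikes later, while with strict accounting, vm.overcommit_memory=2, or on Solaris, malloc() fails up front):
          ```c
          /* Sketch: ask for far more memory than the machine has, then touch it.
           * Under Linux's default overcommit the malloc() typically succeeds and
           * a process is killed by the OOM killer once pages are actually written;
           * with vm.overcommit_memory=2 (or on Solaris) malloc() fails immediately. */
          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>

          int main(void)
          {
              size_t sz = (size_t)64 * 1024 * 1024 * 1024;   /* 64 GiB - adjust to taste */
              char *p = malloc(sz);

              if (p == NULL) {               /* strict accounting / Solaris-like behaviour */
                  puts("allocation refused up front");
                  return 1;
              }
              puts("allocation 'succeeded' - now touching the pages...");
              memset(p, 0xAA, sz);           /* with overcommit, the OOM killer may strike here */
              puts("survived");
              free(p);
              return 0;
          }
          ```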




          Regarding
          "Many Solaris features are/were just for advertisements (read about 'new' features which cause instability and performance penalties - it doesn't follow KISS like Linux does)"

          You surely know that ZFS has been called a "rampant layering violation" by Linux devs because ZFS simplified the file system architecture? And you have heard about the advantages of ZFS? And of DTrace? And Zones? And SMF? UltraSPARC Niagara, etc., etc.? For instance, DTrace:


          Bootnote
          -----------
          "Using DTrace, I instrumented every single assembly instruction in the function. What we found is that 5492 times to 1, there was a short circuit code path that was taken. We created a version of the function that had the short circuit case and then called the "real" function for other cases. This was completely inlinable and resulted in a 47 per cent performance gain.

          Certainly, one could argue that if you used a debugger or analyzer you may have been able to come to the same conclusion in time. But who would want to sit and step through a function instruction by instruction 5493 times? With DTrace, this took literally a ten second DTrace invocation, 2 minutes to craft the test case function, and 3 minutes to test. So in slightly over 5 minutes we had a 47 percent increase in performance."

          Or PHP + DTrace. If you are a developer you MUST read this.


          Do you really call these technologies "just advertisements" and "new features which cause instability"??





          But anyway. I understand that you believe Linux has stable ABIs, and that all my links where companies tell how Linux becomes unstable under high load - you don't accept them. And you also don't accept the links where Linux kernel devs discuss the declining quality. And I understand that when I post such links, you think they are only Sun propaganda. Those links don't exist, or were made by Sun. It is OK if you believe all this stuff is Sun propaganda. Let us stop there.



          • #25
            Originally posted by kebabbert View Post
            I don't really agree with you. You should read the comments at the bottom of your last link. People complain that Linux has bad compatibility; they don't agree with Linus saying '91. And there are also lots of articles talking about Linux's problem with an undefined and unstable ABI. I understand that you believe Linux has defined and frozen ABIs, but there are people who don't agree with you. Just google a bit for "unstable ABI Linux". For instance:


            "the incompatibility between different stable point versions of the kernel hampers the Driver on Demand concept. You could compile a driver for 2.6.5 and it would probably not work on 2.6.10 if you simply loaded the precompiled binary module; you would need to recompile the driver for each kernel version."

            We are talking about an unstable ABI here. Maybe Linux has "stable" APIs, I don't know. But if Linux has stable APIs, then they allow Linux to run old programs. Device drivers, however, require a stable ABI. The preferred state is both stable APIs and ABIs.
            That's why I put "guarantees" in quotes. Linux doesn't have a stable API/ABI. I think stable means old and crappy in this case. However, you can still run such old binaries, but don't ask me how, because I'm not interested in doing this.



            Regarding Sun propaganda about bad Linux scaling, maybe you have read this link? Linux scaling experts say the talk about Linux scaling badly is only FUD from evil Unix vendors. Linux scales very well, they say.

            They talk about Linux scaling to 10,000 Intel processors and yadda yadda. But 10,000 Intel CPUs is just a cluster, a bunch of PCs. That is called horizontal scaling: you add a PC to the cluster and Linux scales (with a stripped-down, tailored kernel). On Big Iron, one machine with several CPUs, Linux scales badly - this is vertical scaling. The experts admit that in Linux v2.6 Linux will scale vertically up to... 16 CPUs. Whereas Solaris scales vertically to hundreds of CPUs today on Big Iron. You know, there is no Big Iron with 10,000 CPUs. When you talk about that many CPUs, you always talk about clusters. Which the scaling experts admit:
            "With the 2.6 kernel, the vertical scaling will improve to 16-way. However, the true Linux value is horizontal scaling."
            As for RCU, it's good when it comes to clusters, but there are some other things which affect scalability, like my favorite: thread creation. I like it when someone kills his own arguments - that link is about the first 2.6 kernel (so 2.6.0?), which scaled up to 16 CPUs, and that kernel is very old, don't you think? Some older kernels weren't even preemptible, but everything has changed.

            Just recently, in Linux v2.6.27, the kernel is no longer 250 times slower on some tasks on 64-CPU machines. Earlier, the kernel was 250 times slower. I don't know how many problems there still are that haven't been fixed.
            Summary of the changes and new features merged in the Linux Kernel during the 2.6.27 development

            "page fault speedup of 250x on a 64 way system have been measured"
            Page faults are 250x 'faster' in this LINUX kernel (maybe thanks to RCU :>) than in the previous Linux one. However, that just improved scalability; it didn't make it 250 times better overall. Little difference.

            You know, it takes decades to scale well. Linux has only recently started on that. Solaris has been doing it for decades.
            Nope, Linux started on this quite long ago, and they build on other systems, as I mentioned before.


            Regarding my link where Andrew Morton complains about bad quality: you just refuse to accept that link and call it "meaningless", where Andrew talks about people needing to submit better code and test it. Well, maybe you know better than Andrew Morton; maybe he is wrong. And maybe you know better than these Linux kernel developers too:
            http://kerneltrap.org/Linux/Active_Merge_Windows
            We still don't know how it looks in the others. Quality is something much more important for Linux devs than for the Solaris guys (that's why they did some stupid advertisements), so it's meaningless and out of context.

            "the source tree breaks every day, and it's becoming an extremely non-fun environment to work in. We need to slow down the merging, we need to review things more, we need people to test their [...] changes!"
            Isn't this about RCs? And I think it's no longer current. Linus is usually satisfied.

            Regarding your not knowing what's in the Solaris code: well, why don't you take a look? It is open source, and available online. The gurus Kernighan and Ritchie and those people studied the Linux code and said it was not that good, in fact.
            I'm not a dev and I'm not interested in such opinions; Sun gave me some reasons not to believe in such things. 'Gurus' are corruptible :> Like that one from the GPL camp who betrayed it ;p


            Then you write:
            "Many Solaris features are/were just for advertisements (read about 'new' features which cause instability and performance penalties - it doesn't follow KISS like Linux does). Its code is a big pile of legacy bull." So the Solaris code is apparently bad, you say. At the same time you write:
            "We don't know how [the code] is in Solaris."
            You know the code is bad, but you don't know how the code is? It seems that I have misunderstood you.
            Yes, I think Solaris isn't that good. We don't know how it is [with regressions] in Solaris.

            About Linux creating threads fast: that is not the same thing as scaling well. There are several Unix companies that switched to Linux and then had to switch back to Solaris, because Linux doesn't cut it under high loads. For instance this article:
            http://www.enterprisestorageforum.co...le.php/3749926
            It's about file systems. I completely agree that ZFS is better, but if you talk about scaling here, think about the FS, not about the scheduler or the locking mechanisms. However, we don't know if this guy is right, and he may be unaware of GNU's malloc.



            Regarding my link about Linux RAM overcommit, you state it is a good thing because it allows Linux to create threads quickly. You are surely joking. No sane person would prefer to have random processes killed on a Linux server when memory runs out. What happens if a Linux process is very important and has been running a calculation for a few weeks? On Solaris, random processes are not killed as on Linux; instead Solaris refuses to allocate more memory. All Solaris processes are allowed to finish. Linux over-allocates more memory than is available, and when the memory is finally needed, random Linux processes are killed. That is hardly optimal. You are joking.
            I'm really not sure about this one, so when I find the link I mentioned before I will paste it here. It looks like a GNU malloc problem :> They usually make crap. Some exploits are gcc-related, and this crappy malloc... GNU/Linux? You've got to be kidding xd

            Regarding
            "Many Solaris features are/were just for advertisements (read about 'new' features which cause instability and performance penalties - it doesn't follow KISS like Linux does)"

            You surely know that ZFS has been called a "rampant layering violation" by Linux devs because ZFS simplified the file system architecture? And you have heard about the advantages of ZFS? And of DTrace? And Zones? And SMF? UltraSPARC Niagara, etc., etc.? For instance, DTrace:
            http://www.theregister.co.uk/2004/07...ace_user_take/
            Yeah, ZFS, but when it comes to KISS there are other things. I mentioned thread creation before, whose implementation is (or was, because maybe the Solaris guys have changed it) much simpler on Linux. Zones: as far as I know, Xen on Linux provides something like zones, but I'm not interested in such features. DTrace is another nice thing. About those CPUs, people say Linux runs better :> SMF: it's hard to name features the same way, so maybe there is something like it under a different name, or it's just not exciting.

            Do you really call these technologies "just advertisements" and "new features which cause instability"??
            I thought mainly about other 'features' like those related to virtualization and others.

            But anyway. I understand that you believe Linux has stable ABIs, and that all my links where companies tell how Linux becomes unstable under high load - you don't accept them. And you also don't accept the links where Linux kernel devs discuss the declining quality. And I understand that when I post such links, you think they are only Sun propaganda. Those links don't exist, or were made by Sun. It is OK if you believe all this stuff is Sun propaganda. Let us stop there.
            I said before that Linux doesn't have a stable ABI (as far as I know). There were many Sun-friendly companies, so it's propaganda to me. I wrote what I think about scaling. We can give many more 'proofs' etc., but it usually comes down to believing or not; OSes are too complicated to get a good picture of them. I also said before that a stable API/ABI can be crappy etc., but that's probably not a rule. You don't have to agree with anything I wrote, so I believe we can stop here.
            Last edited by kraftman; 22 July 2009, 03:20 PM.



            • #26
              wow nice info thx



              • #27
                Solaris runs forever

                With Solaris you can set up a machine and it will run for years with no manual intervention.

                Every Linux distribution I've ever seen comes with one daemon or another that will run out of control and chew up all the RAM or CPU after a few months of uptime.

                I have seen big CPU loads render a Linux machine completely unusable. It takes ages to get a prompt, if ever, and even more ages before it starts to act on what you type. I have had Solaris boxes loaded down in a similar manner, but I was able to get a responsive prompt and kill the offending process.

                I sure would not want Solaris on my desktop, though. I like to have recent versions of desktop software, and most stuff just will not compile on Solaris without tweaking.



                • #28
                  Originally posted by frantaylor View Post
                  With Solaris you can set up a machine and it will run for years with no manual intervention.

                  Every Linux distribution I've ever seen comes with one daemon or another that will run out of control and chew up all the RAM or CPU after a few months of uptime.

                  I have seen big CPU loads render a Linux machine completely unusable. It takes ages to get a prompt, if ever, and even more ages before it starts to act on what you type. I have had Solaris boxes loaded down in a similar manner, but I was able to get a responsive prompt and kill the offending process.
                  Bullshit :> There's a Linux machine that has been running for more than 12 years (I hope it still runs, because I last checked maybe a year or two ago). There are many more that run for years. As for the unresponsiveness, it may be a known I/O-related bug (and I'm not sure it isn't hardware-related, because only some configurations are affected; it may also be your messed-up config), if it takes more than 120 s. I could claim the opposite and give no proof, but why play such childish games? The fact is that Linux has replaced Solaris in many, many environments. Solaris lives on because of ZFS.

                  Solaris runs forever? XD
                  Last edited by kraftman; 30 July 2009, 07:24 AM.



                  • #29
                    Kraftman,
                    You know, the problem is not running for 12 years. MS-DOS machines could also run for 12 years. The problem is running under high load, with multiple users and programs, for long periods. That is the problem. Any OS could run for 20 years if you just don't run any programs on it. Under high load, Linux crumbles and gets unstable, whereas Solaris does not.



                    As I've said, Solaris has been doing this stuff for decades, whereas Linux has not. The first version of Solaris, 30 years ago, was called SunOS. It was not that good, did not have good code, and only scaled well to 8-16 CPUs, just like Linux today. Then Sun scrapped SunOS and did it anew, i.e. Solaris. Solaris is version 2.0, and it takes decades to scale well. Scaling well is not something you do in a few years; it takes decades. Look at Windows: MS has tried to make Windows scale well for 20 years and still hasn't succeeded. Linux scaled to 4-8 CPUs only recently, in v2.4. Now it is at v2.6.30 or so. It is impossible to go from bad scaling to good scaling in a few years. You have to rewrite all the locking mechanisms, etc. Solaris now scales to several hundred CPUs in one big machine. Linux scales to thousands of nodes in a cluster, but not on one big machine. As I showed, Linux suffered when trying to scale to 64 CPUs recently, in 2.6.27, where it was 250 times slower on a certain task. I bet there are other things that haven't been fixed yet. Seriously, it takes decades to scale well. It isn't something you do on a coffee break. Maybe you should learn some programming?



                    And about your weird remark:
                    "Yes, I think Solaris isn't so good. We don't know how [the regressions are] it's in Solaris."

                    How can you say that Solaris is not good, when you now nothing about the code? You say, we dont know which regressions are there. Maybe there are no regressions at all (not likely)? But still you say things. It is like, "I dont like that restaurant's food, but I dont know how the food is"



                    And your other statement:
                    "Quality is something much more important for Linux devs then for Solaris guys"

                    That is weird. You know that Solaris is restrictive with accepting code from anyone. Whereas Linux accepts code from anyone. And Ive posted links showing that the Linux kernel devs complain about the bad code, the source tree breaks, etc. And how do you react to those links proving I am right, and you are wrong??? I have given you proof. Proofs not from SUN, but from Linux devs.
                    Last edited by kebabbert; 30 July 2009, 07:53 AM.



                    • #30
                      Originally posted by kebabbert View Post
                      Kraftman,
                      You know, the problem is not running for 12 years. MS-DOS machines could also run for 12 years. The problem is running under high load, with multiple users and programs, for long periods. That is the problem. Any OS could run for 20 years if you just don't run any programs on it. Under high load, Linux crumbles and gets unstable, whereas Solaris does not.
                      I know what you mean and I can see that. However, I think the opposite: under high load Solaris crumbles and gets unstable, whereas Linux does not. :>

                      As I've said, Solaris has been doing this stuff for decades, whereas Linux has not. The first version of Solaris, 30 years ago, was called SunOS. It was not that good, did not have good code, and only scaled well to 8-16 CPUs, just like Linux today.
                      Today's Linux kernel is 2.6.30, not 2.6.0 like in your "proof", man.

                      Then Sun scrapped SunOS and did it anew, i.e. Solaris. Solaris is version 2.0, and it takes decades to scale well. Scaling well is not something you do in a few years; it takes decades. Look at Windows: MS has tried to make Windows scale well for 20 years and still hasn't succeeded. Linux scaled to 4-8 CPUs only recently, in v2.4. Now it is at v2.6.30 or so. It is impossible to go from bad scaling to good scaling in a few years. You have to rewrite all the locking mechanisms, etc. Solaris now scales to several hundred CPUs in one big machine. Linux scales to thousands of nodes in a cluster, but not on one big machine. As I showed, Linux suffered when trying to scale to 64 CPUs recently, in 2.6.27, where it was 250 times slower on a certain task. I bet there are other things that haven't been fixed yet. Seriously, it takes decades to scale well. It isn't something you do on a coffee break. Maybe you should learn some programming?
                      Prove that this is impossible. You still don't understand what I already wrote about scalability (250 times faster, but at what? :>). If Linux is much younger than Solaris, why does it scale far better on clusters while Solaris doesn't (why not use the same argument when it comes to one big machine?)? You're still killing your own arguments. It took decades, and Linux isn't reinventing the wheel. Linux takes what's best from others and adds some things of its own (Linus said this, didn't he?).

                      And about your weird remark:
                      "Yes, I think Solaris isn't so good. We don't know how [the regressions are] it's in Solaris."

                      How can you say that Solaris is not good, when you now nothing about the code? You say, we dont know which regressions are there. Maybe there are no regressions at all (not likely)? But still you say things.
                      I don't need to know the code to say this. In opposite how can you say that Solaris is good, when you know nothing about the code? And maybe there are many regressions? (don't know).

                      It is like, "I dont like that restaurant's food, but I dont know how the food is"
                      Nope, it's like: I don't like that restaurant's food, but I don't know how the kitchen is.

                      And your other statement:
                      "Quality is something much more important for Linux devs then for Solaris guys"

                      That is weird. You know that Solaris is restrictive with accepting code from anyone.
                      I don't know this (but if you mean accepting code from some users...). They already lost, so it can't have such quality like Linux.

                      Whereas Linux accepts code from anyone. And I've posted links showing that the Linux kernel devs complain about bad code, the source tree breaking, etc. And how do you react to those links proving that I am right and you are wrong??? I have given you proof. Proof not from Sun, but from Linux devs.
                      Not true. Xen isn't merged (KISS philosophy? at least not in Xen's current form...), Ksplice isn't merged, grsecurity isn't merged (however, I'm not sure they wanted it, but as far as I remember...). I'm sure there are other examples. I said what I think about those links (or rather, about those guys).

                      P.S. I take everything about scalability with a grain of salt, because it's possible your scenario is real, or mine, or we're both wrong.
                      Last edited by kraftman; 31 July 2009, 04:35 AM.

