Oracle Has Yet To Clarify Solaris 11 Kernel Source


  • #21
    Originally posted by kraftman:
    You are terribly mistaken, as usual. Slowlaris scales badly compared to Linux. Linux had advanced scaling techniques first; commercial systems copied some of them later. Linux scales the crap out of slowlaris and nearly everyone knows this - that's one of the reasons why Oracle is abandoning slowlaris. Wow, slowlaris will see only 16,384 CPUs. That's a very small number compared to Linux - RHEL can see 64,000. It seems slowlaris is even more legacy than I thought. When it comes to RCU, the Linux implementation is innovative, and things like DTrace and ZFS aren't. ZFS is just one file system among many, and DTrace is just one tool among many. Are you aware how old the idea of a file system is? Do you know when the first file system was created? I can provide you a list of Linux techs that everybody wants, but you have to provide a list of innovative slowlaris techs first. btrfs is a completely different file system from zfs, so no, it's not a zfs wannabe.
    As I have asked you many times, when you post your claims, it would be good if you could back them up. Otherwise people will just think you are posting propaganda and FUD. Don't you agree?


    Now you have again claimed that:
    1) Oracle is abandoning Solaris. Where is the link? You have claimed this many times.


    2) "Linux scales crap out of slowlaris and nearly everyone knows this" - link please. Or is it just FUD from you? Sure, maybe RHEL can see 64.000 cpus, but that is just a big cluster as I have proved to you. It is not an SMP server, it is just a big cluster. There are no big SMP Linux servers. What is the biggest Linux SMP server today? Is it still 8 cpus? The Big Tux server you posted, is not sold, and as you proved Linux scaled very bad on 64 cpus, that is the reason no one sells Big Tux servers - because Linux has problems scaling on as few as 64 cpus.


    3) RCU, from IBM, is innovative, you claim. Well, that is good. But as we can see, you don't need RCU to scale better than Linux. Solaris has no RCU, and still Solaris scales better than Linux. So how can RCU be innovative then? (For readers who don't know the technique, there is a small sketch of the RCU pattern at the end of this post.)


    4) "I can provide you a list of Linux techs that everybody wants, but you have to provide list of innovative slowlaris techs first."
    I have already showed a list of innovative Solaris tech. DTrace, ZFS, Zones, Crossbow, etc. All those have Linux copies. For instance the DTrace is called Systemtap. The ZFS copy is called BTRFS. The Zones copy is called....The Crossbow copy is called Open vSwitch.

    Now it is your turn, you go ahead and show us a list of innovative and unique Linux tech that everybody has copied or ported.
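
    Since RCU keeps coming up in this thread, here is a minimal sketch of the read-copy-update pattern itself, for readers who want to see it. It is written against the userspace liburcu library (an assumption: liburcu and its default urcu.h flavor are installed) purely to illustrate the technique - it is not the Linux kernel's in-kernel RCU implementation.

    /* Minimal RCU sketch using userspace liburcu (liburcu.org).
     * Illustration of the pattern only - NOT the kernel implementation.
     * Build (assuming liburcu is installed):
     *   gcc rcu_sketch.c -o rcu_sketch -lurcu -lpthread
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <urcu.h>

    struct config { int value; };

    static struct config *live_cfg;   /* shared pointer protected by RCU */
    static volatile int done;

    static void *reader(void *arg)
    {
        rcu_register_thread();        /* every RCU reader registers once */
        while (!done) {
            rcu_read_lock();          /* read side: cheap, never blocks the writer */
            struct config *c = rcu_dereference(live_cfg);
            printf("reader sees value %d\n", c->value);
            rcu_read_unlock();
            usleep(1000);
        }
        rcu_unregister_thread();
        return NULL;
    }

    int main(void)
    {
        rcu_register_thread();
        live_cfg = calloc(1, sizeof(*live_cfg));

        pthread_t t;
        pthread_create(&t, NULL, reader, NULL);

        for (int i = 1; i <= 5; i++) {       /* writer: replace, never modify in place */
            struct config *fresh = malloc(sizeof(*fresh));
            fresh->value = i;
            struct config *old = live_cfg;
            rcu_assign_pointer(live_cfg, fresh); /* publish the new version */
            synchronize_rcu();               /* wait for pre-existing readers to finish */
            free(old);                       /* now safe to reclaim the old version */
            usleep(5000);
        }
        done = 1;
        pthread_join(t, NULL);
        free(live_cfg);
        rcu_unregister_thread();
        return 0;
    }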



    • #22
      Originally posted by russofris:
      There is little doubt in my mind that the Linux kernel is currently the most advanced and mature kernel in existence. When compared to Solaris, Linux features better hardware compatibility, scaling, memory management, scheduling, and stability on commodity enterprise hardware. It's growing fast, and will likely become the standard kernel for the remainder of this age of mankind.
      I don't agree with you, as there are numerous links that do not support your claims. Let us study your claims:



      1) "Linux features better hardware compatiblity"
      This is true and you are right on this.



      2) "scaling"
      Here you need to be careful. There are two kinds of scaling.
      -Horizontal scaling. It is scaling on a cluster, i.e. a network with a bunch of PCs. For instance, Google uses Horizontal scaling, Google use Linux on 900.000(?) cpus. Only way of having 900.000 cpus, is if you have a network. These are sometimes called HPC servers. They are often used for numerical calculations, where you have many PCs on a network. The Top500 list, all consists of HPC servers (a large network). These problems are called "embarasingly parallell" and the more PCs you add to the network, the faster your calculations will be because these numerical problems are easy to parallellize. HPC servers.

      - Vertical scaling. This is scaling on a big server, i.e. a big machine. These are called SMP servers, and they are not a cluster: they are a single big machine. They can have as many as 64 CPUs. Sun had an SMP server some years ago that had 144 CPUs! IBM has a big 32-CPU Unix SMP server today, called the IBM P795, and it costs a shitload. HP's biggest Unix server today is a 32-CPU machine called the Superdome 2.

      Both IBM and HP have had 64-CPU servers. They were extremely expensive.

      So when people say that Linux scales well, they are always referring to horizontal scaling: HPC servers. And this is true, Linux scales well on a large network. But if we look at vertical scaling, an SMP server, Linux scales very badly. I think the largest Linux SMP server today has 8 CPUs? Or are there any bigger Linux servers yet?

      But on the other hand, the old Unixes have scaled to 64 CPUs for many years. Linux is still on 8(?) CPUs today. Thus, Linux scales extremely badly on SMP servers.
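
      To make the "embarrassingly parallel" point concrete, here is a minimal sketch in C. It uses pthreads purely as a stand-in for cluster nodes: every worker computes an independent chunk, and the only shared step is a trivial merge at the end - which is exactly why adding more nodes keeps helping for this class of problem.

      /* Minimal sketch of an "embarrassingly parallel" numerical job.
       * Each worker handles an independent chunk; nothing is shared
       * until a cheap final merge. On an HPC cluster each chunk would
       * go to a separate node. Build: gcc par_sum.c -lpthread
       */
      #include <pthread.h>
      #include <stdio.h>

      #define NWORKERS 4
      #define N 1000000L            /* divisible by NWORKERS */

      static double partial[NWORKERS];

      static void *worker(void *arg)
      {
          long id = (long)arg;
          long lo = id * (N / NWORKERS), hi = lo + N / NWORKERS;
          double sum = 0.0;
          for (long i = lo; i < hi; i++)
              sum += 1.0 / (1.0 + (double)i); /* independent per-element work */
          partial[id] = sum;                  /* no communication during the run */
          return NULL;
      }

      int main(void)
      {
          pthread_t t[NWORKERS];
          for (long i = 0; i < NWORKERS; i++)
              pthread_create(&t[i], NULL, worker, (void *)i);

          double total = 0.0;
          for (long i = 0; i < NWORKERS; i++) {
              pthread_join(t[i], NULL);
              total += partial[i];            /* the only shared step: a cheap merge */
          }
          printf("sum = %f\n", total);
          return 0;
      }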



      3) "memory management"
      Is extremely bad on Linux. Linux allows every process to get as much RAM as they want. This means RAM can be over provisioned, thus processes can allocate more RAM than the server has. When processes start to use all the RAM they asked for, RAM will be tight. Then Linux starts to kill processes on random. This decreases stability on Linux. Important processes may be killed:
      Last week I learned something very interesting about the way Linux allocates and manages memory by default out of the box. In a way, Linux a...

      On Solaris, RAM cannot be overallocated. Processes cannot use more RAM than is available; thus, processes will not be killed.
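
      Here is a minimal sketch of the overcommit behaviour just described, assuming a Linux box. Only run it on a machine you don't mind stressing.

      /* Toy demonstration of Linux memory overcommit. Depending on
       * vm.overcommit_memory (0 = heuristic, 1 = always, 2 = strict),
       * malloc() may hand out far more memory than the machine has;
       * pages only consume RAM once touched, and if RAM and swap run
       * out, the kernel's OOM killer picks a victim process. Mode 2 is
       * the strict accounting closest to the Solaris behaviour above.
       * Run with care on a test machine. Build: gcc overcommit.c
       */
      #include <stdio.h>
      #include <stdlib.h>

      int main(void)
      {
          size_t huge = (size_t)64 << 30;    /* ask for 64 GiB */
          char *p = malloc(huge);
          if (p == NULL) {
              puts("refused up front (strict or heuristic accounting)");
              return 1;
          }
          puts("malloc() succeeded - nothing is actually committed yet");

          /* Touching the pages is what consumes real memory; on an
           * overcommitted machine this loop can summon the OOM killer. */
          for (size_t off = 0; off < huge; off += 4096)
              p[off] = 1;

          puts("all pages touched");
          free(p);
          return 0;
      }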



      4) "scheduling"
      Solaris has had O(1) scheduler for decades. You can change scheduler on the fly. You dont need to reboot the system.
      Also, there is still argument if Linux patch by Con Kolivas is better than normal Linux scheduler, they have problems finding out. Thus, Linux lags behind here too.
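
      As a rough, portable illustration of changing a process's scheduling policy at runtime (this uses the POSIX call available on both Linux and Solaris, not Solaris's priocntl(1) interface itself):

      /* Moving a running process between scheduling policies, live,
       * with no reboot. Real-time policies need root.
       * Build: gcc sched_sketch.c
       */
      #include <errno.h>
      #include <sched.h>
      #include <stdio.h>
      #include <string.h>

      int main(void)
      {
          struct sched_param sp = { .sched_priority = 10 };

          printf("current policy: %d\n", sched_getscheduler(0));

          /* Move this process into the fixed-priority FIFO class. */
          if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
              fprintf(stderr, "SCHED_FIFO failed: %s (try as root)\n",
                      strerror(errno));
              return 1;
          }
          printf("now SCHED_FIFO, priority %d\n", sp.sched_priority);

          /* ...and back to the default time-sharing class, still no reboot. */
          sp.sched_priority = 0;
          sched_setscheduler(0, SCHED_OTHER, &sp);
          printf("back to policy %d\n", sched_getscheduler(0));
          return 0;
      }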



      5) "stability"
      This is really strange claim. Enterprise Unix have always had much better RAS than Linux. The reliability of Unix is much higher than Linux. On my big finance company where I work, the sysadmins don't see Linux as stable. Mainframes are most stable, and OpenVMS they say. Then comes Enterprise Unix in terms of stability. Last comes Windows and Linux.

      There are many sysadmins on the internet complaining about unstable Linux. Sure, Linux might be stable in your home, but it is a different thing when you run a big server with a lot of load, on distributed systems. It is like saying "NTFS is good for my 1 TB disk, surely it must be good for a 1 PB system?". There are scaling problems under heavy load. If you have light load, then every OS is stable, including Windows.



      • #23
        Originally posted by kebabbert:
        I don't agree with you, as there are numerous links that do not support your claims. Let us study your claims:

        2) "scaling"
        Here you need to be careful. There are two kinds of scaling.
        -Horizontal scaling. It is scaling on a cluster, i.e. a network with a bunch of PCs. For instance, Google uses Horizontal scaling, Google use Linux on 900.000(?) cpus. Only way of having 900.000 cpus, is if you have a network. These are sometimes called HPC servers. They are often used for numerical calculations, where you have many PCs on a network. The Top500 list, all consists of HPC servers (a large network). These problems are called "embarasingly parallell" and the more PCs you add to the network, the faster your calculations will be because these numerical problems are easy to parallellize. HPC servers.

        -Vertical scaling. It is scaling on a big server, i.e. a big machine. They are called SMP servers, and are not a cluster. They are a single big machine. They can have as many as 64 cpus. Sun had a SMP server some years ago that had 144 cpus! IBM has a big Unix 32 cpu SMP server today, called IBM P795 and it is shitload expensive. HP's biggest Unix server today, has a 32 cpu server today, called Superdome 2

        Both IBM and HP has had 64 cpu servers. They where extremely expensive.

        So when people say that Linux scales well, they are always refering to Horizontal scaling. HPC servers. And this is true, Linux scales well on a large network. But if we look at Vertical scaling, a SMP server, Linux scales very bad. I think the largest Linux SMP server today has 8 cpus? Or are there any bigger Linux servers yet?

        But on the other hand, the old Unixes has scaled to 64 cpus for many years. Linux is still on 8(?) cpus today. Thus, Linux scales extremely bad on SMP servers.
        This nonsense again? Do you honestly believe you're being an effective advocate for Solaris by being deliberately obtuse? I gave you a primer on the various multiprocessor terms, the history of shared-memory system evolution, and which companies were the drivers in this area several months ago. I got the impression that I had initially at least partially gotten through to you, but I was clearly mistaken - you are either unable to take in factual information if it contradicts the misinformation in your head that you believe supports your bias, OR you are pretending not to have learned because you genuinely believe continuing to reiterate your misinformation is actually an effective way to do Solaris advocacy. Neither option puts you in a very good light.

        I'm not going to go through the whole NUMA vs. "true" SMP vs. distributed memory clusters again - I've already done that, and you're welcome to go back and re-read it if you need a reminder. I will, however, point out that you are just making an absolute fool of yourself with statements like "I think the largest Linux SMP server today has 8 CPUs? Or are there any bigger Linux servers yet?". For Moore's sake, you get more cores than that on a SINGLE modern Xeon E7 chip. Regardless of the fact that Westmere-EX servers are ccNUMA (like EVERY modern "big" multicore server, including the Oracle M9000 in dual-cabinet configurations), "mainstream" server CPUs today rarely come with _fewer_ than 8 cores in a single package. What the hell do you think large companies are running Linux on these days?

        As I've already explained, "real" SMP vs. NUMA is pretty clouded these days, since most NUMA systems are advertised as "SMP", but just to make a point - it is _easier_ to make an OS scale well on an SMP system than on a NUMA system, since they behave largely the same, but NUMA adds the complexity of having to deal with variable memory latencies, cache locality, etc. A kernel that scales well on NUMA systems will scale well on a "true" SMP system as well, since an SMP system will pretty much behave like a NUMA system with only one node. And to take a current example, one NUMA node can have up to 10 cores/20 threads on an E7 Xeon. I very rarely see a Linux system with _fewer_ than 24 cores these days, but again our customers are mainly large US businesses and government agencies, so what I see is definitely biased towards the high end. 32 & 40 cores (64/80 threads) are more common for Intel-based servers, 24 & 48 cores on the AMD side. Both Solaris and Linux scale well on both "real" SMP and NUMA systems.

        And the 144-CPU SPARC server was the E25K, which was a NUMA design. So is the HP Superdome. There really is not much benefit in "pure" SMP systems these days - they simply cannot scale much bigger for architectural reasons, so pretty much every modern large shared-memory multiprocessor design being produced is a NUMA system.

        I don't even know why I bother to repeat this information to you since you seem to intentionally "forget" it and regress to your pre-2003 notions of Linux scalability and willful ignorance of modern server architectures in general.

        For what it's worth, I believe the last "true SMP" (that is, non-NUMA) mainstream servers were the Dunnington-based Xeon systems introduced back in 2008, which went up to 24-core configurations. IBM sold a 48-core solution, but that, again, was based around a proprietary NUMA interconnect between two 24-core nodes. All current x86 multisocket server designs are NUMA, even though they are marketed as "SMP". AMD's server CPUs actually have multiple NUMA nodes on a single package from the 6100 series onwards, meaning even single-socket configurations are NUMA designs.
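
        For anyone who wants to poke at this on their own hardware, here is a minimal sketch against libnuma (an assumption: the numactl/libnuma development package is installed on a Linux box) showing the node topology and the node-local allocation being discussed.

        /* Minimal sketch of the NUMA topology argued about above. Prints
         * the node layout the kernel sees and binds one allocation to
         * node 0 - the locality bookkeeping that any "SMP-on-paper,
         * NUMA-in-practice" server forces on the OS.
         * Build (assuming libnuma is installed): gcc numa_sketch.c -lnuma
         */
        #include <numa.h>
        #include <stdio.h>

        int main(void)
        {
            if (numa_available() == -1) {
                puts("kernel reports no NUMA support");
                return 1;
            }

            int maxnode = numa_max_node();
            printf("NUMA nodes: %d\n", maxnode + 1);
            for (int n = 0; n <= maxnode; n++)
                printf("  node %d: %lld MiB of memory\n",
                       n, numa_node_size64(n, NULL) >> 20);

            /* Memory pinned to node 0: CPUs on node 0 get local (fast)
             * access, CPUs on other nodes pay the remote-latency penalty. */
            void *buf = numa_alloc_onnode(1 << 20, 0);
            if (buf != NULL) {
                puts("1 MiB allocated on node 0");
                numa_free(buf, 1 << 20);
            }
            return 0;
        }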



        • #24
          Originally posted by kebabbert:
          I don't agree with you, as there are numerous links that do not support your claims. Let us blah:
          1) Blah Blah Blah
          2) Blah Blah Blah
          3) A crock of blah
          4) Nonsensical blah
          You're totally right. You should stick with Unix.

          My statements stand for everyone else. There is little doubt in my mind that the Linux kernel is currently the most advanced and mature kernel in existence. When compared to Solaris, Linux features better hardware compatibility, scaling, memory management, scheduling, and stability on commodity enterprise hardware. It's growing fast, and will likely become the standard kernel for the remainder of this age of mankind.



          • #25
            Originally posted by TheOrqwithVagrant:
            This nonsense again? Do you honestly believe you're being an effective advocate for Solaris by being deliberately obtuse? I gave you a primer on the various multiprocessor terms, the history of shared-memory system evolution, and which companies were the drivers in this area several months ago. I got the impression that I had initially at least partially gotten through to you, but I was clearly mistaken - you are either unable to take in factual information if it contradicts the misinformation in your head that you believe supports your bias, OR you are pretending not to have learned because you genuinely believe continuing to reiterate your misinformation is actually an effective way to do Solaris advocacy. Neither option puts you in a very good light.
            Well, my memories are a bit different. As I remember it, we talked about the Oracle M9000 server with 64 CPUs. It had a latency of... 500 ns or so? And then we compared it to a big Linux server with 2048 CPUs which was advertised as SMP, and it had a worst-case latency of... a very high number. Thus, it is not SMP. Solutions with very good numbers on local CPUs but extremely bad worst-case numbers are not really interesting, because that is an SMP cluster. A cluster has very good numbers within a node, but bad numbers to nodes far away. An SMP server has OK numbers in every node, no matter how far apart the nodes are: the worst-case numbers are bounded, and it is difficult to construct such a server with a bounded worst case. Anyone can build an HPC cluster; that is no challenge.

            I also showed that those big Linux servers that are advertised as SMP, such as the Altix server with 2048 CPUs, are basically an HPC cluster running some kind of software that fools Linux into believing it is SMP. I also said there are no Linux servers bigger than 64 CPUs. I showed this, and you agreed, because you said those big 2048-CPU Linux servers were NUMA. And NUMA systems are clusters:

            "One can view NUMA as a very tightly coupled form of cluster computing."

            Thus, my recollection is totally different. No need to get upset?

            You conceded Linux servers are SMP clusters (you said they are NUMA). Are there no Linux servers bigger than 8 CPUs on the market even today? Then you start to talk about the number of cores, and you mix up the concepts. Of course each CPU can have 8-10 cores. But I am not talking about how many cores; I am talking about how many sockets. Oracle and IBM have 32-64 CPUs, each CPU sporting many cores, just like x86 CPUs. So why are there only 8-socket Linux servers, while there are 32-64 CPU Unix servers? And please don't mix up cores with CPUs again. My question is: if Linux scales that well, why are there no 64-CPU Linux servers on the market? Or even 128 CPUs? Or 256 CPUs? No, the biggest Linux servers have 8(?) sockets. Why? (A small sketch below shows why cores and sockets are so easily conflated.)
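
            Since cores, threads and sockets keep getting mixed up in this thread, here is a minimal, portable sketch of what an OS actually reports as a "CPU count" - logical processors, not sockets (an assumption: a POSIX system; socket counts need other tools, e.g. psrinfo -p on Solaris or /proc/cpuinfo on Linux).

            /* What sysconf() calls "processors" are logical CPUs:
             * sockets x cores x hardware threads. Build: gcc cpus_sketch.c
             */
            #include <stdio.h>
            #include <unistd.h>

            int main(void)
            {
                long configured = sysconf(_SC_NPROCESSORS_CONF);
                long online     = sysconf(_SC_NPROCESSORS_ONLN);

                /* Neither number says anything about socket count. */
                printf("logical CPUs configured: %ld\n", configured);
                printf("logical CPUs online:     %ld\n", online);
                return 0;
            }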

            So, let us revive the old thread where we discussed this. Please give me a link to a post in the old thread, and I will revive that thread and answer your latest post there. Let us stop discussing scalability in this thread.



            Back to topic: where is the list of innovative Linux tech that all OSes have ported or copied? There is none. In fact, all of Linux is a copy of Unix. There is nothing innovative in this copy. Everything is a copy. Sure, there are some improvements in Linux, but no innovations that I know of. RCU is not needed to achieve scalability. On the other hand, DTrace is new and innovative, and most (all?) major OSes have ported or copied it. I have never heard of any Linux tech that has been hyped, copied, and ported as much. Can someone show us a list of such Linux tech?

            There is no such list? Everything in Linux is copied from the original, Unix? No innovations in Linux at all?



            • #26
              Originally posted by russofris:
              You're totally right. You should stick with Unix.

              My statements stand for everyone else. There is little doubt in my mind that the Linux kernel is currently the most advanced and mature kernel in existence. When compared to Solaris, Linux features better hardware compatibility, scaling, memory management, scheduling, and stability on commodity enterprise hardware. It's growing fast, and will likely become the standard kernel for the remainder of this age of mankind.
              Cool. Do you have any proof or links for this? For instance, for better stability?



              • #27
                Originally posted by geearf:
                I hope you do realize that by calling it "Slowlaris" you lose any sort of credibility, and any further argument in your block of text becomes pointless.
                Not at all. I know this troll and you don't, so I know how to speak to trolls.



                • #28
                  Originally posted by kebabbert:
                  I am not claiming that KDE is copying from CDE; however, that is very likely, because KDE is from 1996 and CDE is from 1993.



                  However, I am claiming that KDE is not "innovative". Sure, it is fine and good. But not many OSes want it, nor have ported it. KDE is not a crucial piece of tech that is unique and new compared to everything else. DTrace is, as is ZFS.
                  You were claiming graphical desktops aren't innovative, and I'm claiming file systems and tracing tools aren't innovative. It's that simple. KDE is much more wanted than ZFS or DTrace - how many slowlaris users are there? Nearly none. There's far more KDE usage and interest than ZFS and DTrace combined.

                  Maybe you didn't know, but Solaris is older than Linux. It is more probable that Solaris had tracing instruments long before Linux; how else could Solaris be developed?
                  ZFS and DTrace are younger than Solaris, and, following your logic, the older Solaris tracing instruments were copies of other, older tools.

                  If DTrace is a copy of Linux tools, can you prove it? Where are your links? If you cannot prove it, then people might take this as propaganda from you. Don't you agree that you should post links? I mean, there is nothing wrong with posting propaganda or your own wishes, but then you should be clear that it is propaganda, and not present it as fact. Right?
                  You spread propaganda and you haven't shown proof.

                  Well, here the SystemTap team says they copied from DTrace:

                  The SystemTap team talked a lot about DTrace in early meetings - how it functions, its architecture - and then suddenly all mentions of DTrace were deleted. Just look at my link and see for yourself. This proves that the SystemTap team studied DTrace in order to copy functionality. Now, can you prove that the DTrace team copied from Linux?
                  I don't see proof that the SystemTap team copied anything from DTrace; sadly, this proves nothing. Btw, SystemTap is just one of the Linux tracing tools. There are older ones, like kprobes and DProbes.



                  • #29
                    Originally posted by kebabbert:
                    As I have asked you many times, when you post your claims, it would be good if you could back them up. Otherwise people will just think you are posting propaganda and FUD. Don't you agree?
                    While you're not backing up yours, I see no reason to do more than you do. However, I'm building my arguments on logic.

                    Now you have again claimed that:
                    1) Oracle is abandoning Solaris. Where is the link? You have claimed this many times.
                    It's the logical thing to do; I don't have to provide links to claim that. Based on logic, everything suggests they're abandoning it. Not only is RHEL able to scale up to 64,000 CPUs while Solaris will only scale to 16,000 in three years (right?), putting it years behind Linux; DTrace is also being ported to Linux, and since Linux has its own profiling tools as well, it will have a huge advantage in this case; and btrfs is starting to be used by Oracle and Novell, so even at its current stage they consider it stable. Their decision to push it forward is probably related to ZFS not being able to compete in the long run - it lost its main developer, and the people who keep working on it don't have the skills to keep it interesting in the long run. All of the Solaris advantages are being moved to Linux by the company that owns Solaris. It seems Oracle wants to kill Solaris earlier than I thought.


                    2) "Linux scales crap out of slowlaris and nearly everyone knows this" - link please. Or is it just FUD from you? Sure, maybe RHEL can see 64.000 cpus, but that is just a big cluster as I have proved to you. It is not an SMP server, it is just a big cluster. There are no big SMP Linux servers. What is the biggest Linux SMP server today? Is it still 8 cpus? The Big Tux server you posted, is not sold, and as you proved Linux scaled very bad on 64 cpus, that is the reason no one sells Big Tux servers - because Linux has problems scaling on as few as 64 cpus.
                    The biggest SMP servers run Linux. SGI Altix 4096CPUs on a single system image. Even saw slowlaris on such a big monster? When comes to clusters Linux is the best, too of course. The Big Tux scaled linearly, so in the best possible way thus you're lying and spreading FUD.

                    3) RCU, from IBM, is innovative, you claim. Well, that is good. But as we can see, you don't need RCU to scale better than Linux. Solaris has no RCU, and still Solaris scales better than Linux. So how can RCU be innovative then?
                    Everything shows Linux scales better than Solaris, so it seems Solaris indeed needs Linux tech to scale as well as Linux. You never proved Solaris scales better, so I'm waiting for proof. It is you who first claimed that Solaris scales better, so it's you who has to provide proof first.

                    4) "I can provide you a list of Linux techs that everybody wants, but you have to provide list of innovative slowlaris techs first."
                    I have already showed a list of innovative Solaris tech. DTrace, ZFS, Zones, Crossbow, etc. All those have Linux copies. For instance the DTrace is called Systemtap. The ZFS copy is called BTRFS. The Zones copy is called....The Crossbow copy is called Open vSwitch.
                    The things you mentioned aren't innovative and they're just copies of other tools and techs, so even if some of them were introduced later in Linux it rather means they're copies of tools that aren't from Solaris. How can someone copy something from closed source system? So, if someone is copying from others it's Solaris copying from Open Source systems.

                    Now it is your turn: go ahead and show us a list of innovative and unique Linux tech that everybody has copied or ported.
                    No, because you have failed.



                    • #30
                      Technology Academy Finland has today declared two prominent innovators, Linus Torvalds and Dr Shinya Yamanaka, laureates of the 2012 Millennium Technology Prize, the prominent award for technological innovation. The laureates, who follow in the footsteps of past winners such as World Wide Web creator Sir Tim Berners-Lee, will be celebrated at a ceremony in Helsinki, Finland, on Wednesday 13 June 2012, when the winner of the Grand Prize will be announced. The prize pool exceeds EUR 1 million.

