
Oracle Has Yet To Clarify Solaris 11 Kernel Source


  • #31
    Originally posted by kraftman View Post
    No, by any means. I know the troll and you don't, so I know how to speak to trolls.
    I hope you know how to talk to ogres, fairies and elves too then!



    • #32
      Originally posted by kebabbert View Post
      Cool. Do you have any proof or links on this? For instance, better stability?
      Yes. I have the largest (publicly known) monolithic OLTP instance in existence: RHEL, Tomcat, Oracle, Hitachi, ESX. We transitioned from Win/HPUX in 2002-2005 due to stability and scaling issues. Sun was evaluated as a replacement, but was dropped due to stability and performance issues.

      If you've ever made a transaction on a tier2 NO (including being provisioned), you've used linux.

      F



      • #33
        I'm gonna hate myself for doing this but..

        http://www.sap.com/solutions/benchma...er.epx?num=200 -- sort by SAPS. RHEL is about 15 places from the top (you'll notice the top spots are all IBM's "slow" AIX mainframes). The machine used is an HP ProLiant with 80 cores/160 threads. If you go a bit further down you'll see RHEL on a similar machine but with 40 cores/80 threads. The scaling is about 85%, or so.
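The ~85% figure can be sanity-checked with a quick calculation: scaling efficiency is the actual throughput gain divided by the ideal (linear) gain. A minimal sketch -- the SAPS numbers below are hypothetical placeholders, the real ones are on the SAP page linked above:

```python
def scaling_efficiency(saps_small, cores_small, saps_big, cores_big):
    """Throughput gain relative to the ideal (linear) gain when scaling up."""
    speedup = saps_big / saps_small
    ideal = cores_big / cores_small
    return speedup / ideal

# Hypothetical SAPS figures for the 40-core and 80-core ProLiant results;
# the real numbers are on the SAP benchmark page linked above.
eff = scaling_efficiency(saps_small=100_000, cores_small=40,
                         saps_big=170_000, cores_big=80)
print(f"{eff:.0%}")  # doubling the cores gave 1.7x the throughput -> 85%
```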

        Here's the thread where TheOrqwithVagran was teaching you about large compute systems: http://phoronix.com/forums/showthrea...lity-etc/page6

        I hope Oracle is paying you well.



        • #34
          Originally posted by liam View Post
          I'm gonna hate myself for doing this but..

          http://www.sap.com/solutions/benchma...er.epx?num=200 -- sort by SAPS. RHEL is about 15 places from the top (you'll notice the top spots are all IBM's "slow" AIX mainframes). The machine used is an HP ProLiant with 80 cores/160 threads. If you go a bit further down you'll see RHEL on a similar machine but with 40 cores/80 threads. The scaling is about 85%, or so.

          Here's the thread where TheOrqwithVagran was teaching you about large compute systems: http://phoronix.com/forums/showthrea...lity-etc/page6

          I hope Oracle is paying you well.
          Just to note: utilization is hardly relevant to scaling in SAP benchmarks. Some people explained this in a different thread, but it's nearly impossible to find. However, it's easier to find the SAP benchmark methodology and see this for yourself.



          • #35
            Originally posted by geearf View Post
            I hope you know how to talk to ogres, fairies and elves too then!
            Yes, their brains are so different that there's no way to convince them. That's why I'm not trying to convince The One.



            • #36
              Originally posted by liam View Post
              I hope Oracle is paying you well.
              I seriously doubt it, IIRC 'kebabbert' received some Sun promotional gear from their Swedish branch at a Christmas party many years ago as a token of appreciation. I remember this because the party was held in Barkarby, which is very close to where I used to live (Viksjö). He is some uber Solaris fanboy and I see nothing wrong with that, except, as with all fanboys, there's no room for objective thought, which makes it pointless to discuss facts with them. Oracle probably doesn't know he exists as they really don't operate on the 'grassroots' level AFAIK.



              • #37
                Originally posted by XorEaxEax View Post
                I seriously doubt it, IIRC 'kebabbert' received some Sun promotional gear from their Swedish branch at a Christmas party many years ago as a token of appreciation. I remember this because the party was held in Barkarby, which is very close to where I used to live (Viksjö). He is some uber Solaris fanboy and I see nothing wrong with that, except, as with all fanboys, there's no room for objective thought, which makes it pointless to discuss facts with them. Oracle probably doesn't know he exists as they really don't operate on the 'grassroots' level AFAIK.
                FWIW, Oracle now baits shops with OEL licenses. It's tempting, as the cost is much lower than RHEL, and it's fairly similar. It's unfortunate that the cost for us to roll out a new platform is greater than the 5 year savings gained by switching to their clone.

                Has anyone here had experience escalating issues to Oracle for OEL? How did it turn out? Are you able to compare it to RHEL support?

                F



                • #38
                  Originally posted by kebabbert View Post
                  Well, my memories are a bit different.
                  This was not a spoken conversation; it was a thread right here on Phoronix. No need to rely on your "memories" - and since your memory is clearly faulty, perhaps it would be a good idea to go back and actually read what I said, rather than relying on them? Looks like your brain needs to switch to ZFS for its storage - you're obviously not detecting silent corruption of data.

                  Fortunately, this is all backed up to safe offline storage and can be fetched from here, easily overwriting the corrupt data in your head with an accurate recording of what was said ->



                  Originally posted by kebabbert View Post
                  As I remembered, we talked about the Oracle M9000 server with 64 cpus.
                  I wouldn't say we "talked" about anything. I made a fairly large post early in the thread, containing mostly terminology definitions and clarifications, as well as some history about SMP and multiprocessor support in UNIX-family operating systems since the late 80's and on. I did mention the M9000 because it is SUN/Oracle's biggest SMP server, and as far as I can tell, just about the last remaining "large" non-NUMA shared memory multiprocessor server put on the market. I also made a shorter follow-up post later in the thread to clarify some things you were wondering about. I did not say anything about its memory latencies - that was someone called "jabl", much later in the thread.

                  Originally posted by kebabbert View Post
                  It had a latency of... 500ns or so?
                  Let's have a look at what jabl actually said, just for fun:

                  Originally posted by jabl View Post
                  On a 64 socket M9000, you have 532 ns for accessing the memory furthest away from the core, which is, I suppose reasonable for a system of that size. For comparison, the worst case latency on a 256 socket Altix UV is about twice that (see above), which again is quite reasonable since there's bound to be a few more router hops with that many sockets (and different network topology). But look at the local memory latency: 437 ns! Ouch. Simply, ouch. Again, for comparison, on the Altix UV local memory latency is 75 ns, which is a relatively small penalty compared to a simple 2S x86 machine without any directory overhead and, obviously, a pretty small snoop broadcast domain. So we see that the M9000 manages to have a relatively small NUMA factor not by having some super-awesome technology making remote memory access fast, but mostly by having terrible local memory latency. Not exactly something to brag about, eh.
                  Quite different from your "recollection", isn't it? I was quite surprised reading this, I remember - this means it's fairly safe to assume that for an Altix UV that's "sized" similar to a fully decked out M9000 (that is, a pretty small single-rack Altix) the "worst case" remote node latency is likely considerably LESS than even the _best case_ latency in an M9000... which makes the M9000 look _shockingly_ bad in this regard, and really highlights why no one is really bothering with "classic" SMP systems anymore. In the M9000's defense, its core architecture is about 6 years old at this point... but still.
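Plugging jabl's quoted latencies into the usual NUMA-factor calculation (worst-case remote latency divided by local latency) makes the point concrete. A quick sketch -- the Altix UV remote figure is "about twice" 532 ns per jabl's post, so ~1064 ns is an approximation:

```python
def numa_factor(local_ns, remote_ns):
    """Worst-case remote memory latency divided by local memory latency."""
    return remote_ns / local_ns

# Latencies quoted from jabl's post above.
m9000 = numa_factor(local_ns=437, remote_ns=532)    # 64-socket M9000
altix = numa_factor(local_ns=75, remote_ns=1064)    # 256-socket Altix UV (approx.)
# The M9000's factor looks "small" only because its local latency is so bad.
print(f"M9000: {m9000:.2f}, Altix UV: {altix:.2f}")  # M9000: 1.22, Altix UV: 14.19
```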

                  Originally posted by kebabbert View Post
                  And then we compared to a big Linux server with 2048 cpus which was advertised as SMP, and it had a worst case latency of... a very high number.
                  So much wrong here... No one has claimed the Altix systems are SMP; they are NUMA. And as for latencies, we just covered that - not a pretty picture for the "true SMP" M9000, really.

                  Originally posted by kebabbert View Post
                  Thus, it is not SMP. To have solutions with very good numbers in local cpus, but extremely bad worst case numbers are not really interesting, because that is a SMP cluster. A cluster have very good numbers in a node, but bad numbers in nodes far away. A SMP server have ok numbers in every node, no matter how far away the nodes are. The worst case numbers are limited, and it is difficult to construct a such server with limited down side. Anyone can build a HPC cluster, no challenge.
                  No one has been talking about clusters. NUMA systems are not clusters, SMP systems are not clusters. When using "clusters" in discussions on this topic, the commonly accepted meaning will be a distributed memory system with nodes connected by a network using standard networking protocols, and applications will have to be written using an API like PVM or MPI which can split the workload into parallel chunks that can run independently on these nodes. Nowadays, thanks to hardware virtualization functions in modern CPUs, you can "emulate" a shared memory system on top of a cluster - which is what ScaleMP does - but bringing that into the discussion has about as much to do with NUMA vs SMP as mixing the performance of qemu's SPARC emulation mode into a comparison of the x86 vs. SPARC instruction set architectures.
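The distinction is easy to see in practice: a NUMA machine is a single OS image that exposes its memory nodes through sysfs, whereas a cluster is separate OS images on a network. A minimal sketch, assuming the standard Linux sysfs layout (`/sys/devices/system/node`); the sample listing stands in for a hypothetical 2-node box:

```python
import re

def numa_node_ids(entries):
    """Extract NUMA node ids from a sysfs directory listing."""
    return sorted(int(m.group(1)) for e in entries
                  if (m := re.fullmatch(r"node(\d+)", e)))

# On a real Linux machine you would pass os.listdir("/sys/devices/system/node");
# here a sample listing stands in for a hypothetical 2-node system.
sample = ["node0", "node1", "online", "possible", "has_cpu"]
print(numa_node_ids(sample))  # [0, 1] -- one OS image, two memory nodes
```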

                  Originally posted by kebabbert View Post
                  I also showed that those big Linux servers that are advertised as SMP such as the Altix server with 2048 cpus, are basically a HPC cluster running some kind of software that fools Linux to believe it is SMP. I also said, there are no bigger Linux servers than 64 cpus. I showed this and you agreed on this because you said those big 2048 cpu Linux servers were NUMA. And NUMA are clusters:

                  "One can view NUMA as a very tightly coupled form of cluster computing."
                  You showed nothing but your utter inability to understand the technical details of the very topic you're discussing. You yourself mentioned the 144-CPU R25k as an example of a "large SMP server", but by the definition you're now pushing, that was "just a cluster", since it's a ccNUMA design. For more fun, take an Oracle Netra SPARC T4-2; a single 4U rackmount server with 2 T4 CPUs. Is this server "just a cluster" or not? If you're going to call ccNUMA systems clusters, then Netra SPARC T4-2 is a "cluster". And for even more fun, take a single AMD Opteron 6172 CPU and hold it in your hand. What you're holding is a 2-node NUMA system. Is that processor "just a cluster"? Do you see how utterly absurd your attempts to classify ccNUMA systems as "clusters" are, now?

                  Originally posted by kebabbert View Post
                  Thus, my recollection is totally different. No need to get upset?
                  Your recollection is almost entirely inaccurate, as everyone who follows my link to the original thread can clearly see.

                  Originally posted by kebabbert View Post

                  You confessed Linux servers are SMP clusters (you said they are NUMA).
                  Nice choice of word there, "confessed", and of course an outright lie. What I said is that most servers that are "advertised" as SMP today are in fact NUMA. This goes for "mainstream" x86 architecture servers, most SPARC servers, the HP Superdome systems, some of the IBM pSeries systems, and so forth. Some of these systems are offered with Linux as an optional supported OS, and could thereby be classified as "Linux servers". Which of course brings us to the point that there really is no such thing as a "Linux server", since Linux will install on pretty much any server currently on the market. I suppose for the sake of argument we can define a "Linux server" as either 1. a server which is offered with Linux as a certified, supported option by the server vendor (the definition mentioned above) or 2. servers which Linux is _most commonly_ run on, that is, "mainstream" servers that have Linux as their OS. Either way, your next statement is completely nonsensical for any definition of "Linux server".

                  Originally posted by kebabbert View Post
                  Are there no bigger Linux servers than 8 cpus on the market even today?
                  And in addition to being nonsensical, it blatantly ignores the fact that, just to humor you, I _did_ mention the 24-core Dunnington Xeon systems from 2008 as the last "mainstream" non-NUMA SMP systems on the market. So again, people who read this only need to scroll up a couple of posts to witness just how you either willfully or pathologically ignore facts that are put right in front of you.

                  Originally posted by kebabbert View Post
                  Then you start to talk about number of cores, and you mix the conceptions. Of course, each cpu can have 8-10 cores. But I am not talking about how many cores, I am talking about how many sockets. Oracle and IBM has 32-64 cpus, each cpu sporting many cores just as x86 cpus. So, why are there only 8-socket Linux servers, but there are 32-64 cpu Unix servers? And please dont mix cores with cpus now, again. My question is, if Linux scales that well, why are there no 64 cpu Linux servers on the market? Or even 128 cpus? Or 256 cpus? No, the biggest linux servers are 8(?) sockets. Why?
                  This is the first time you bring "sockets" into the discussion at all, and by doing so you're the one who is mixing concepts. A "CPU" is a Central Processing Unit; for the discussion of SMP vs NUMA, this means a _core_, nothing else. A "processor" is a vaguer word at this point, and mainly due to the introduction of "multicore processors" it has taken on the meaning of "processor package". Either way, "sockets" have VERY little bearing on anything, since all a socket actually does is seat the ceramic package that contains an arbitrary number of CPUs.

                  Sure, the M9000 has 32 sockets in its non-NUMA single rack configuration, but each processor in those 32 sockets is currently at most a 4-core CPU, as opposed to 8 cores for a SPARC T4, 10 cores on a Westmere-EX and 16 cores for an Opteron 6200-series, for example. The larger pSeries are NUMA systems, so if you're going to include the 64-socket configurations of Oracle or IBM large scale server offerings, then you immediately invite the 256-socket and larger Altix systems to the party, and those ARE indisputably "Linux servers", since that is in fact the ONLY OS offered for them.

                  And of course, this is what you should be doing; the distinction between NUMA and "true SMP" is pretty much gone these days, when a single processor package is internally NUMA and NUMA systems of the same "size" as the "true SMP" M9000 have better worst-case latencies than the M9000's _best case_ latency. Either way, this has absolutely NOTHING to do with operating systems at all anymore; this is purely a comparison of the various processor and server architectures on the market today.
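The sockets/cores/threads terminology can be pinned down with simple arithmetic; a sketch using the core counts mentioned above (the threads-per-core values are assumptions for illustration):

```python
def logical_cpus(sockets, cores_per_socket, threads_per_core):
    """Logical CPUs the OS scheduler sees: sockets x cores x hardware threads."""
    return sockets * cores_per_socket * threads_per_core

# 32-socket M9000 with 4-core processors, assuming 2 hardware threads per core:
print(logical_cpus(32, 4, 2))    # 256
# Hypothetical 8-socket Westmere-EX box, 10 cores/socket, with HyperThreading:
print(logical_cpus(8, 10, 2))    # 160
```

So "8 sockets" and "64 CPUs" can easily describe machines of comparable size, which is why conflating the two terms muddles the comparison.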

                  Originally posted by kebabbert View Post
                  So, let us revive the old thread were we discussed this. Please give me a link to a post in the old thread, and I will revive that thread and answer to your latest post there. Let us stop discussing scalability in this thread.
                  Link was provided above; I find it quite telling that you're either too lazy or not competent enough to dig out the post from your own posting history. Anyway, I'm not really seeing this as a "discussion"; I'm merely trying to stop you from making technically and factually incorrect statements and misusing terminology, because no productive discussion or debate can be had unless all parties to the discussion actually understand the terminology and definitions of the topic under debate. The debate itself - "linux vs solaris" - does not particularly interest me. They're both excellent *nixes at this point, and while both have their flaws, there's nothing I wouldn't trust either OS with, as long as the solution was designed by a competent systems architect and is in the hands of an equally competent administrator.

                  In the previous discussion you made the mistake of immediately classifying me as a "linux proponent", presumably because I corrected you on a number of things you commonly use in your anti-linux posts. I'm not a "proponent" of anything except accurate fact-based decision making when it comes to OS choice, proper use of terminology in discussions, and factually correct statements in such discussions. I'm a Senior Software Engineer at a large US software company, and we are not by any means a "Linux company" (we're many times bigger than the biggest "Linux company", which would be Red Hat), although we have in the last few years been increasingly using Linux "under the hood" in various products, and more importantly, our customers - the majority of which are HUGE corporations and government agencies - are increasingly using Linux.

                  Before my current job, I was a *nix consultant and worked with Linux, Solaris and AIX. I've written open-source kernel-level code that is being used and shipped in commercial products from various large companies, including SUN/Oracle. I work with large multiprocessor systems _every day_, and in the last year I have had quite a few instances of troubleshooting situations where software designs were made without anticipating how fast "large" shared memory multicore systems would become mainstream, or the fact that every new enterprise server coming out is now a NUMA design. This is why I get incredibly annoyed when I see your ill-informed, terminology-mangling posts, and why I post responses to them despite the fact that it's proven about as productive as trying to hammer facts into the head of a moon-hoax believer.

                  Originally posted by kebabbert View Post
                  Back to topic, where is a list of innovative Linux tech that all OSes have ported or copied? There are none. In fact, entire Linux is a copy of Unix. There are nothing innovative in this copy. Everything is a copy. Sure, there are some improvements in Linux. But no innovations that I know of. The RCU is not needed to achieve scalability. On the other hand, DTrace is new and innovative, and most(all?) major OSes have ported or copied it. I have never heard of any Linux tech that has been hyped as much, and copied and ported. Can someone show us a list of such Linux tech? There are no such list? Everything in Linux is copied from the original: Unix? No innovations in Linux at all?
                  This could actually be an interesting discussion. There's definitely a point to be made for a certain lack of "conceptual" innovations in the "linux world" - innovations in Linux have tended to be more in the low-level code; clever and inventive ways to implement features that originated in other UNIXes. However, DTrace isn't a very good example of a "Solaris innovation". DTrace really is not all that "innovative" - it is, however, the most comprehensive, well-designed, and usable implementation of what it does, which is why it has become so popular. DTrace took a lot of inspiration from DProbes (originally developed for OS/2 - now THERE was an innovative OS! - but ported and further developed on Linux), which in turn built on OS/2's dtrace (yep, same name, even...). The creators of DTrace give due credit to these predecessors in their 2004 USENIX paper, which you can read here, if you're interested -> http://static.usenix.org/event/useni.../cantrill.html

