OpenIndiana 2018.10 Released With MATE 1.20 Desktop, GCC 8 & Python 3.5 Support


  • #11
    Originally posted by aht0 View Post
    Better vertical scaling
    I like Illumos, but are there semi-modern benchmarks (from the last ~3 years) that show Illumos as the clear victor in vertical scaling?

    Comment


    • #12
      Originally posted by OMTDesign View Post

      That's interesting to hear. Are you using the latest version with MATE? Does OpenIndiana include the mate-tweak package in its repos?
      At the moment I'm using awesome-wm from Hipster's IPS repo, though I've used MATE, Notion, FVWM and Enlightenment (always from OI's repo), plus fluxbox, icewm, XFCE4, WindowMaker and others (from Joyent's pkgsrc binary repo for Illumos). I don't know about mate-tweak, but you can obviously change the GTK2/3 theme, titlebars, icons and fonts from the MATE Control Center.

      Best regards

      Comment


      • #13
        Originally posted by Mifune View Post
        OpenIndiana has been my main desktop platform for years
        Just curious, what desktop hardware are you using?

        Comment


        • #14
          Originally posted by Space Heater View Post
          Just curious, what desktop hardware are you using?
          Two laptops, working flawlessly, including Wi-Fi, S3 suspend/resume, and battery life even better than on Linux:
          Toshiba Portégé R600 (Intel Centrino 2, Core 2 Duo SU9400)
          Dell Latitude E6320 (Intel vPro, 2nd-gen Core i5)

          + Sun Blade 1500 (UltraSPARC IIi) running Tribblix

          As you can see, it's all pretty old stuff, but that's what I happen to own and what I use as a daily driver. Illumos can run well on newer hardware (especially some Lenovo, HP, Toshiba, Dell and Samsung laptops up to 2015 or so), particularly after the driver updates carried out in 2017, but it's increasingly rare to see new models being supported. Generally speaking, supported hardware includes:

          - Most Intel graphics up to Haswell (plus some Broadwell); all Nvidia GPUs up to the 7xx series except Optimus (support up to the 9xx series is experimental); no AMD Radeon support beyond the non-accelerated vgatext driver
          - USB 3.0
          - Most SSDs and some NVMe controllers
          - Several older Wi-Fi chipsets, especially Atheros, Intel, Ralink and Realtek, plus some old Broadcom chips via NDIS drivers

          UEFI boot is another recent addition.



          Comment


          • #15
            Originally posted by Space Heater View Post

            I like Illumos, but are there semi-modern benchmarks (from the last ~3 years) that show Illumos as the clear victor in vertical scaling?
            I think the latest serious piece of writing on the topic is Systems Performance: Enterprise and the Cloud by Brendan Gregg, which dates from 2013. It's interesting to note, though, that Gregg himself abandoned Illumos and moved from Joyent to Netflix in 2014, presumably to work on Linux and FreeBSD; since then he has written a great deal on Linux performance, which he now claims to be unrivaled, and he's spreading FUD about Illumos all over the Internet.

            Comment


            • #16
              Originally posted by Michael_S View Post
              Serious (not snarky) question: what is the sales pitch for OpenIndiana vs Linux or one of the *BSD projects? What does it do better?
              Solaris has high-quality code and is better engineered. Linux has quite a few problems with its code:
              https://en.wikipedia.org/wiki/Critic...el_performance

              Solaris has also been running on large 32- and 64-CPU servers for decades, so Solaris scales far better. It is more stable and faster on large workloads.

              Until last year, the largest Linux server had 8 CPUs. This means that no one could optimize Linux for 16-CPU servers, because they did not exist.

              Clarification: when I talk about scalability, I mean scale-up. Scale-up is a single large server with as many as 16, 32 or even 64 CPUs. This arena has always belonged to mainframes or large RISC servers. These servers are very, very expensive. For instance, the IBM P595 with 32 CPUs used for the old TPC-C record cost $35 million. No typo. Large business workloads (SAP, OLTP databases, etc.) can only be run on scale-up servers. One large SAP installation can cost several hundred million dollars, so there is a lot of money in the large scale-up server market segment. Everybody tries to get there, including SGI.

              Scale-out scalability is something else: it is a large cluster. Linux has always excelled at scale-out clusters, such as the SGI UV 3000 server with tens of thousands of cores, the SGI Altix, ScaleMP servers with tens of thousands of cores, or supercomputers, which are basically a bunch of PCs on a fast network. These clusters are cheap. They can also only run HPC number-crunching workloads; business workloads such as SAP cannot be run efficiently on clusters. If you look at customer use cases for large Linux servers such as the UV 3000, it is always about HPC number-crunching analysis running for weeks on end.

              The reason scale-out servers cannot run business workloads is that business code branches heavily. Business servers serve thousands of clients simultaneously: one is doing accounting, another is running payroll, and so on. There is a lot of work going on on the business server, and all of that different data never fits into the CPU cache, so the server needs to go out to RAM all the time. (x86 has awful RAM throughput; RISC servers are twice as fast or more.) RAM latency is around 100 ns, which corresponds to a 10 MHz CPU. That is slow. And if the data is on another node in the network, the latency is worse, maybe 500 ns(?) to fetch the data, which means the data is processed at the speed of a 2 MHz CPU. Therefore business workloads cannot run on clusters; all the data must stay on the same server, that is, at most 32 or 64 CPUs. With more CPUs the latency becomes too bad, because of the CPU topology.
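
              To make that back-of-envelope arithmetic concrete, here is a tiny illustrative Python sketch (purely a worst-case assumption that every operation stalls on exactly one memory access, nothing more):

              # Back-of-envelope: if every operation waits on one memory access,
              # the effective instruction rate is simply 1 / latency.
              def effective_clock_hz(latency_ns: float) -> float:
                  return 1.0 / (latency_ns * 1e-9)

              for label, latency_ns in [("local RAM", 100), ("remote node", 500)]:
                  mhz = effective_clock_hz(latency_ns) / 1e6
                  print(f"{label}: {latency_ns} ns -> ~{mhz:.0f} MHz effective")
              # local RAM: 100 ns -> ~10 MHz effective
              # remote node: 500 ns -> ~2 MHz effective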

              OTOH, clusters running HPC workloads are different. They typically run an equation solver over the same grid, over and over again, iterating in time. This means all the data fits into the CPU cache. There is not much communication going on between the nodes, and no heavy branching in the code, so HPC workloads can run separately on each node at full speed.
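
              For a feel of the kind of kernel described above, here is a minimal Jacobi-style relaxation sketch in Python (the grid size, boundary condition and iteration count are made up for illustration): every sweep touches the same small array, so the working set stays cache-resident and there is essentially no branching.

              import numpy as np

              # Toy 256x256 grid; small enough that the working set stays in cache.
              grid = np.zeros((256, 256))
              grid[0, :] = 1.0  # fixed "hot" boundary along one edge

              # Jacobi-style relaxation: sweep the same grid over and over, iterating in time.
              for step in range(1000):
                  grid[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                                             grid[1:-1, :-2] + grid[1:-1, 2:])

              print(grid[1:5, 128])  # values slowly diffusing in from the hot boundary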

              Comment


              • #17
                Originally posted by pavlerson View Post
                Clarification: when I talk about scalability, I mean scale-up. Scale-up is a single large server with as many as 16, 32 or even 64 CPUs. This arena has always belonged to mainframes or large RISC servers. These servers are very, very expensive. For instance, the IBM P595 with 32 CPUs used for the old TPC-C record cost $35 million. No typo. Large business workloads (SAP, OLTP databases, etc.) can only be run on scale-up servers. One large SAP installation can cost several hundred million dollars, so there is a lot of money in the large scale-up server market segment. Everybody tries to get there, including SGI.

                Scale-out scalability is something else: it is a large cluster. Linux has always excelled at scale-out clusters, such as the SGI UV 3000 server with tens of thousands of cores, the SGI Altix, ScaleMP servers with tens of thousands of cores, or supercomputers, which are basically a bunch of PCs on a fast network. These clusters are cheap. They can also only run HPC number-crunching workloads; business workloads such as SAP cannot be run efficiently on clusters. If you look at customer use cases for large Linux servers such as the UV 3000, it is always about HPC number-crunching analysis running for weeks on end.
                This "beast" look more like a mainframe to me
                Last edited by onicsis; 10-25-2018, 01:14 PM.

                Comment


                • #18
                  Originally posted by onicsis View Post
                  This "beast" looks more like a mainframe to me.
                  It's not.
                  Preface
                  This IBM® Redpaper is a comprehensive guide describing the IBM Power 595 (9119-FHA) enterprise-class IBM Power Systems server.
                  http://www.redbooks.ibm.com/redpapers/pdfs/redp4440.pdf

                  Further reading reveals it's a system with up to 8 CPUs, tops. Look at page 3 of that same PDF file.
                  Last edited by aht0; 10-28-2018, 12:06 PM.

                  Comment
