
Ubuntu 24.04 Helping Achieve Greater Performance On Intel Xeon Scalable Emerald Rapids


  • Ubuntu 24.04 Helping Achieve Greater Performance On Intel Xeon Scalable Emerald Rapids

    Phoronix: Ubuntu 24.04 Helping Achieve Greater Performance On Intel Xeon Scalable Emerald Rapids

    While Ubuntu 24.04 LTS won't be officially out until the back half of April, here is an early look at how Intel Xeon Scalable "Emerald Rapids" performance compares right now to Ubuntu 23.10 and the current Ubuntu 22.04 LTS series in a variety of benchmarks. As largely expected with the software updates, the new Ubuntu 24.04 LTS will help achieve greater server/HPC performance on recent Intel processors.


  • #2
    It amazes me how much performance is left in CPUs that is not being utilized.

    You move to a new kernel, use a slightly updated compiler, and you end up with tangible, and in some cases significant, performance improvements.

    This tells me that there are way too many lazy and/or incompetent programmers out there.

    Back in the day, when 1 MB of RAM was considered huge, it was common for programmers to optimize the hell out of their software using handcrafted assembly; in fact, you would find QBASIC programs that contained assembly.

    We can see what properly coded libraries and software can do for performance with the Memcached benchmarks; it's like a completely different computer.

    Comment


    • #3
      Originally posted by sophisticles View Post
      This tells me that there are way too many lazy and/or incompetent programmers out there
      This tells me you have no clue what you're writing about. Just kidding! You're completely right. This explains why Windows is such slow crap.

      Comment


      • #4
        Originally posted by Volta View Post
        This tells me you have no clue what you're writing about. Just kidding! You're completely right. This explains why Windows is such slow crap.
        I will grant you that some Windows versions do feel slow.

        Going back to Win 2k, a base install would run like stink on a monkey, but with SP1 it would feel a lot slower. SP2 would feel better, and by SP4 it felt similar to what a base install felt like.

        Windows has a lot of overhead: it has a HAL that everything, including the kernel, goes through to access the bare metal; you have the various APIs; you have all the services that run in the background and are enabled by default; and you probably have some unoptimized code, maybe for easier code maintenance, maybe at the behest of Intel in order to promote a forced upgrade cycle.

        It really makes little difference, though, because if the third-party applications were properly optimized, then the underlying OS would make little difference.

        x264 is a perfect example: lots of hand-coded assembly, and it runs like greased lightning on any OS with the faster presets.

        VP9, on the other hand, is painfully slow no matter what OS you use.

        I also blame the x86 architecture; I have talked to people with significant experience programming for both ARM and x86, and they vastly prefer coding for ARM.

        Comment


        • #5
          Not every programmer is a god-tier speed demon, and the 0.01% that are can't be expected to make all of the things. Compilers, while doing their best optimising normal code, just about always leave something on the table that the god-tier could've hand-optimised. That's fine.

          Comment


          • #6
            Originally posted by geerge View Post
            Not every programmer is a god-tier speed demon, and the 0.01% that are can't be expected to make all of the things. Compilers, while doing their best optimising normal code, just about always leave something on the table that the god-tier could've hand-optimised. That's fine.
            It's more than knowing how to optimize code; it's about being able to think.

            Here's a simple example that most people can easily understand: say I ask a Python programmer to write a program that adds the first billion numbers as integers and returns the total processing time and the answer.

            Many programmers might simply put together a for or while loop that iterates through all the numbers and accumulates them in a variable.

            Not bad; on my system such a loop takes about 30 seconds to complete.

            A more experienced programmer may decide to use the threading module or the multiprocessing module and end up with a loop that finishes in about 24 seconds.

            Someone else may look at it and say "what are these guys doing?", use OpenCL, OpenGL, or HLSL to run it on the GPU, and have it finish in under a minute.

            Someone else may say this is such a waste of code; just use NumPy, which makes use of SSE in the background, and the process will finish in about 0.3 seconds.

            Someone else may come along and say just use a decorator and get it to run in under 0.3 seconds.

            And then you have the people who paid attention in math class, decide to channel their inner Gauss, and simply say:

            sum = n/2 × (n + 1)

            And you end up with the answer instantaneously, or at least so fast that the computer can't register a measurable execution time.
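
            Roughly, the loop, NumPy, and Gauss approaches might look something like the sketch below (timings obviously vary by machine, the chunk size is an arbitrary choice, and the decorator option presumably means a JIT such as Numba's @njit, which isn't shown):

            ```python
            import time
            import numpy as np

            N = 1_000_000_000  # sum the first billion integers, 1..N

            def naive_sum(n):
                # Plain Python loop: the ~30 second version.
                total = 0
                for i in range(1, n + 1):
                    total += i
                return total

            def numpy_sum(n, chunk=10_000_000):
                # Vectorized with NumPy (SIMD under the hood); chunked so we
                # never materialize all billion int64 values (~8 GB) at once.
                total = 0
                for start in range(1, n + 1, chunk):
                    stop = min(start + chunk, n + 1)
                    total += int(np.arange(start, stop, dtype=np.int64).sum())
                return total

            def gauss_sum(n):
                # Closed form: n(n + 1)/2, effectively instantaneous.
                return n * (n + 1) // 2

            for fn in (gauss_sum, numpy_sum):  # add naive_sum if you have ~30 s to spare
                t0 = time.perf_counter()
                result = fn(N)
                print(f"{fn.__name__}: {result} in {time.perf_counter() - t0:.3f} s")
            ```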

            Comment


            • #7
              Originally posted by sophisticles View Post

              It's more than knowing how to optimize code; it's about being able to think.

              Here's a simple example that most people can easily understand: say I ask a Python programmer to write a program that adds the first billion numbers as integers and returns the total processing time and the answer.

              Many programmers might simply put together a for or while loop that iterates through all the numbers and accumulates them in a variable.

              Not bad; on my system such a loop takes about 30 seconds to complete.

              A more experienced programmer may decide to use the threading module or the multiprocessing module and end up with a loop that finishes in about 24 seconds.

              Someone else may look at it and say "what are these guys doing?", use OpenCL, OpenGL, or HLSL to run it on the GPU, and have it finish in under a minute.

              Someone else may say this is such a waste of code; just use NumPy, which makes use of SSE in the background, and the process will finish in about 0.3 seconds.

              Someone else may come along and say just use a decorator and get it to run in under 0.3 seconds.

              And then you have the people who paid attention in math class, decide to channel their inner Gauss, and simply say:

              sum = n/2 × (n + 1)

              And you end up with the answer instantaneously, or at least so fast that the computer can't register a measurable execution time.
              God-tier obviously includes using the correct algorithms for the job. Sure, your average programmer with a little common sense can come up with n(n+1)/2 in a vacuum, and maybe even the compiler can transform trivial examples, but aside from low-hanging fruit the average programmer does not have a complete enough skill set.

              Comment


              • #8
                Originally posted by geerge View Post
                God-tier obviously includes using the correct algorithms for the job. Sure, your average programmer with a little common sense can come up with n(n+1)/2 in a vacuum, and maybe even the compiler can transform trivial examples, but aside from low-hanging fruit the average programmer does not have a complete enough skill set.
                I agree with you on this 100%. Back in the very early days of personal computers, long before there was an internet or modems of any kind, when DARPA was still in its early stages, computer programming was done by engineers and scientists. Even computer games were made by engineers as a hobby.

                As computers became a commodity, all sorts of half-assed schools sprang up that taught computer programming, schools like DeVry, and the reality is that you can teach most people how to program, but you can't teach them how to think, especially creatively or abstractly.

                We are at a point now where Google Summer of Code features projects where the vast majority of code is submitted by high school kids.

                If I were running a project, I would not accept code from anyone unless I could verify that they had, at a minimum, a bachelor's degree in computer science from a state college, with a strong background in mathematics and algorithms, and they had better have a good understanding of low-level programming.

                I'm tired of the mentality that if it compiles, it ships, no matter how slow or buggy it is.

                Comment


                • #9
                  Originally posted by sophisticles View Post

                  I will grant you that some Windows versions do feel slow.

                  Going back to Win 2k, a base install would run like stink on a monkey, but with SP1 it would feel a lot slower. SP2 would feel better, and by SP4 it felt similar to what a base install felt like.

                  Windows has a lot of overhead: it has a HAL that everything, including the kernel, goes through to access the bare metal; you have the various APIs; you have all the services that run in the background and are enabled by default; and you probably have some unoptimized code, maybe for easier code maintenance, maybe at the behest of Intel in order to promote a forced upgrade cycle.

                  I also blame the x86 architecture; I have talked to people with significant experience programming for both ARM and x86, and they vastly prefer coding for ARM.
                  No, Windows has always been slow, even before their (not so new) virtualization. Take their I/O performance: it's slow because of NTFS and memory management. They were playing with an NTFS replacement, but they haven't succeeded so far.

                  Originally posted by sophisticles View Post
                  It really makes little difference, though, because if the third-party applications were properly optimized, then the underlying OS would make little difference.

                  x264 is a perfect example: lots of hand-coded assembly, and it runs like greased lightning on any OS with the faster presets.

                  VP9, on the other hand, is painfully slow no matter what OS you use.
                  Application developers have to deal with many problems, so introducing the newest CPU optimizations isn't among their priorities. You may say progress is slow, but that's the evolution of software. Sometimes they don't even have access to the latest hardware.
                  Last edited by Volta; 10 March 2024, 09:11 AM.

                  Comment
