Intel MPX Support Removed From GCC 9

  • Intel MPX Support Removed From GCC 9

    Phoronix: Intel MPX Support Removed From GCC 9

    Support for Intel Memory Protection Extensions (MPX) is now pretty much dead on Linux...
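
    For context, below is a minimal sketch of the class of bug MPX instrumentation was meant to catch. The build line uses the flags GCC 5 through 8 exposed for MPX (-mmpx and -fcheck-pointer-bounds); the file name and the rest of the example are purely illustrative, and actually trapping also required MPX-capable hardware plus the libmpx runtime.

    /* Illustrative out-of-bounds write of the kind MPX bounds checking
     * was designed to trap. Assuming GCC 5-8 with MPX support, it would
     * have been built roughly as:
     *     gcc -mmpx -fcheck-pointer-bounds mpx_demo.c
     * On MPX-capable hardware the store below raises a #BR bound-range
     * exception instead of silently corrupting adjacent memory. */
    #include <stdio.h>

    int main(void)
    {
        int buf[4] = {0, 1, 2, 3};
        volatile int i = 4;   /* one past the end; volatile keeps the
                                 compiler from optimizing the bug away */

        buf[i] = 42;          /* out-of-bounds write: MPX would trap here */
        printf("buf[0] = %d\n", buf[0]);
        return 0;
    }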


  • #2
    Pity that didn't work out... For me it was the most exciting CPU development since the i960MX with tagged memory.

    • #3
      Originally posted by pegasus:
      Pity that didn't work out... For me it was the most exciting CPU development since the i960MX with tagged memory.
      Not a pity at all to me. In an era where CPUs are reaching their limits, what we need is to focus more on pure optimization, not on safety-at-the-expense-of-performance features in the CPU core.

      GPUs are doing exactly this with Vulkan and the "new" low-level APIs, because it's only logical, and they haven't been stagnant the way CPU performance has, so the case for it is even weaker there, and yet they still do it. It baffles me why people tolerate shit these days being way slower than it was even 15 years ago due to software bloat.

      • #4
        CPUs have been fast enough for the past 20 years or so, and the only major improvements we've seen have been in power usage. I'm saying this as someone who has been running HPC workloads for the past decade.

        Security is hard, and the majority of people are unable to implement it properly. So it would be of tremendous value to average Joe the customer if his devices offered security primitives (such as MPX) that were actually used by his OS and apps. But security is so hard that Joe the customer cannot even comprehend its value, so it's easier to just go for "moar gigahertz!!!11oneeleven". And as a consequence all the rest of us have to suffer.

        Or we can run OpenBSD on desktops, which some of us do.

        • #5
          It's not fast enough until most (all?) applications start in less than 100 ms whenever the CPU is the bottleneck. Just because people tolerate shit performance doesn't mean CPUs are "fast enough". Once upon a time we had PCs that booted instantly with MS-DOS or its derivatives, and web pages that rendered instantly (excluding network traffic, which depends on the connection); we call them "ancient" now. Is this supposed to be progress, when today everything feels sluggish in comparison?

          I know that the operating system was so much smaller back then, but guess what? Hardware has typically improved far faster than the "features" we add to our software. GPUs and GPU-intensive applications are proof enough, since they've actually made "progress". So why is CPU-bound software so much slower these days -- relatively speaking?

          And of course, there's always power efficiency too -- anything that's faster (in terms of software) is also more power efficient.

          • #6
            Originally posted by Weasel:
            I know that the operating system was so much smaller back then, but guess what? Hardware has typically improved far faster than the "features" we add to our software. GPUs and GPU-intensive applications are proof enough, since they've actually made "progress". So why is CPU-bound software so much slower these days -- relatively speaking?
            Because CPU hardware has been improving at a steady pace of around 5% per generation (in optimistic benchmarks) for the last decade, while software bloat has kept growing at the same pace it did when CPU performance doubled every other year.

            This is especially apparent with bullshit company-grade software, where managers think that programmers should be paid less than janitors.
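
            To put rough numbers on that gap (purely illustrative, assuming one generation per year): 5% per generation compounds to only about 1.6x over a decade, whereas doubling every other year would have been 32x over the same decade. A quick sketch:

            /* Rough compounding arithmetic for the gap described above.
             * The rates come from the comment; "one generation per year"
             * is an assumption for illustration only. */
            #include <math.h>
            #include <stdio.h>

            int main(void)
            {
                double years = 10.0;
                double slow = pow(1.05, years);      /* ~5% per generation -> ~1.63x */
                double fast = pow(2.0, years / 2.0); /* 2x every 2 years   -> 32x    */

                printf("5%% per year over %.0f years: %.2fx\n", years, slow);
                printf("2x every 2 years over %.0f years: %.0fx\n", years, fast);
                return 0;
            }
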
            Last edited by starshipeleven; 08 June 2018, 09:03 AM.

            • #7
              You are
              1) confusing performance with latency, and
              2) barking up the wrong tree.
              CPU performance has very little to do with "applications start in less than 100 ms". That's more the domain of the software stack used by those applications. If you want that kind of experience, check out menuetos.net, kolibrios.org and similar projects. Also, plain Xorg with its apps (xterm, xclock, xbill, etc.) is plenty fast. Want visual bling? That will cost you.

              Also, performance these days is measured in programmer productivity, since programmers are the one component of the whole chain that you can't easily scale. This is the primary motivation for "more bloated" software stacks. You know, hardware is cheap, people are not.

              Btw, for properly done web sites I can recommend these guys: http://www.aptivate.org

              And see also https://www.google.com/search?q=cpu+...erformance+gap ... For homework, put together a roofline model (https://en.wikipedia.org/wiki/Roofline_model) for a 486 and a modern i7, and figure out which one is *better suited* (not faster) to your problem.
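
              For reference, the roofline bound is just the minimum of the compute ceiling and the memory ceiling. A minimal sketch follows; the machine numbers are placeholders you would replace with the actual peak FLOP rate and memory bandwidth of the 486 or i7 in question.

              /* Minimal roofline-model sketch: attainable performance is capped
               * by peak compute or by memory bandwidth times arithmetic
               * intensity, whichever is lower. All inputs are illustrative. */
              #include <stdio.h>

              static double attainable_gflops(double peak_gflops,
                                              double peak_gbytes_per_s,
                                              double flops_per_byte)
              {
                  double memory_bound = peak_gbytes_per_s * flops_per_byte;
                  return memory_bound < peak_gflops ? memory_bound : peak_gflops;
              }

              int main(void)
              {
                  /* Hypothetical machine: 100 GFLOP/s peak, 20 GB/s bandwidth,
                   * running a kernel that does 0.25 FLOP per byte moved. */
                  printf("%.1f GFLOP/s attainable\n",
                         attainable_gflops(100.0, 20.0, 0.25));
                  return 0;
              }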

              • #8
                Originally posted by pegasus:
                Also, performance these days is measured in programmer productivity
                And that's exactly the problem.

                Originally posted by pegasus:
                You know, hardware is cheap, people are not.
                You know, hardware is paid for BY USERS, and their software is run BY USERS; programmers are paid by the company or whatever.

                I couldn't care less about their internal software stack; there, it can be as bloated and slow as they want. I'm talking about what gets distributed. This logic is absurd, and it's the reason the stack these days is so dreadfully awful.

                • #9
                  Originally posted by Weasel:
                  And that's exactly the problem.

                  While you are at it, can I also ask you to solve hunger in Africa, world inequality and world peace? Thanks.

                  Users "vote" with their wallets, that's why we have what we have now. You can chose to vote smart and simply stop using apps that you don't like for *whatever* reason. Like I did with my switch to OpenBSD. Simple nuc with 6W cpu is plenty fast for all I need it for.

                  • #10
                    Originally posted by Weasel:
                    GPUs are doing exactly this with Vulkan and the "new" low-level APIs, because it's only logical, and they haven't been stagnant the way CPU performance has, so the case for it is even weaker there, and yet they still do it. It baffles me why people tolerate shit these days being way slower than it was even 15 years ago due to software bloat.
                    I see your point, but it isn't that simple. Unlike CPUs, with GPUs you can just keep tacking on more cores and you'll keep getting more performance out of them for just about any application. As a result, die shrinks are proportionately more beneficial to GPUs too.

                    CPUs are stagnating because improvements are much more limited. Clock speeds are pretty much as high as they're going to get. Adding more cores doesn't improve the performance of single-threaded tasks. Adding bigger caches or extending pipelines improves the performance of some tasks while slowing down others. Adding more instruction sets offers performance improvements, but only for niche calculations, and only if the application was built to use them; many devs don't touch fancy instruction sets because of broken compatibility, either with other CPU architectures or with previous generations (see the dispatch sketch below). Extending features like Hyper-Threading (where, for example, we have 2 threads per core) helps with multitasking, but can slow down multi-threaded processes.

                    The problem with x86 is that there's no one-size-fits-all route to significant performance improvements. There's nothing left to change or optimize without causing regressions somewhere, whether in performance, efficiency, compatibility, or cost.
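
                    On the instruction-set point above, the usual workaround for broken compatibility is runtime dispatch, sketched below with GCC's __builtin_cpu_supports; the two kernel functions are placeholders, not real SIMD code.

                    /* Minimal runtime-dispatch sketch: pick a code path based on
                     * what the CPU actually supports, so one binary still runs on
                     * older machines. Both "kernels" are illustrative stubs. */
                    #include <stdio.h>

                    static void scalar_kernel(void) { puts("portable scalar path"); }
                    static void avx2_kernel(void)   { puts("AVX2 path"); }

                    int main(void)
                    {
                        __builtin_cpu_init();   /* set up CPU feature data (GCC, x86) */
                        if (__builtin_cpu_supports("avx2"))
                            avx2_kernel();
                        else
                            scalar_kernel();
                        return 0;
                    }
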
                    Last edited by schmidtbag; 08 June 2018, 10:32 AM.
