Intel MPX Support Removed From GCC 9

  • #11
    In the end, the world is much more complex than simple linear "more is more" relations.



    • #12
      Originally posted by Weasel View Post
      It's not fast enough until most (all?) applications start in less than 100 ms, insofar as the CPU is the bottleneck. Just because people tolerate shit performance doesn't mean it's "fast enough". Once upon a time we had PCs that booted instantly with MS-DOS or derivatives, and web pages that rendered instantly (excluding the network traffic, which depends on the connection); we call them "ancient" now. This is supposed to be progress, when today just feels sluggish in comparison?
      You make a good point about web browsers, but who is at fault here? In part I have to blame users, for it is bandwidth that the user pays for that is being abused. If more people rejected this waste, all of a sudden you would find far less complex and bloated sites. Not using ad-blocking and tracking-stopping software is part of the problem.
      I know that the operating system was so much smaller back then, but guess what? Hardware has typically improved far faster than the "features" we add to our software. GPUs and GPU-intensive applications are proof enough, since they've actually made "progress". So why is CPU-bound software so much slower these days, relatively speaking?
      In many ways today's hardware is fast enough for legacy software. You seldom hear people complain about text editor or spreadsheet speed. Where hardware falls down for mainstream users is in the implementation and execution of new technology.
      And of course, there's always power efficiency too -- anything that's faster (in terms of software) is also more power efficient.
      That isn't always true. More efficient code paths save power, and generally that means faster code, but the two don't always coincide.



      • #13
        Originally posted by schmidtbag View Post
        I see your point, but it isn't that simple. Unlike CPUs, you can just keep tacking on more cores with GPUs and you'll keep getting more performance out of it for just about any application.
        Well, only for a GPU-optimized app that is highly parallel and still has parallelization potential. Even routines running on GPUs have limits on how much hardware they can exploit.
        As a result, die shrinks are proportionately more beneficial to GPUs too. CPUs are stagnating because improvements are much more limiting. Clock speeds are pretty much as high as they're going to get.
        Actually, I expect to see a refocus on clock rates. As we hit dimensional walls, CPU designers will need higher clock rates to move forward. All it will take is success in implementing a few new technologies. If we are lucky, we may see 10 GHz CPUs by 2020.
        Adding more cores doesn't improve the performance of single-threaded tasks. Adding bigger caches or extending pipelines improves the performance of some tasks, while slowing down others.
        I still don't understand the focus on single-thread performance in these discussions. Apparently people don't remember the days of single-core machines with external floating-point units and crappy GPU cards. Multitasking operating systems, threaded apps, and the cores to run those apps have done more to create the kinds of computers users want than anything else.

        That doesn't mean single-core performance isn't important in niche fields. Rather, what is being pointed out here is that single-core performance doesn't deliver what most users want.
        Adding more instruction sets offers performance improvements, but only for niche calculations, and only if the application was built to use them; many devs don't use fancy instruction sets because of broken backward compatibility, either with other CPU architectures or with previous generations. Extending features like Hyper-Threading (where, for example, we have 2 threads per core) helps with multi-tasking, but will slow down multi-threaded processes.
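The "only if the application was built to use them" point above is usually handled with runtime dispatch: probe what the CPU advertises, then pick a code path. A minimal sketch of that pattern; the flag names and kernel names are illustrative, not from any real library:

```python
# Hypothetical sketch of the runtime-dispatch pattern that lets an app
# use newer instruction sets without breaking older CPUs: inspect the
# advertised feature flags and fall back gracefully.
def pick_kernel(cpu_flags):
    """Return the best code path for a set of advertised CPU features."""
    if "avx2" in cpu_flags:
        return "avx2_kernel"    # newest extension checked first
    if "sse2" in cpu_flags:
        return "sse2_kernel"
    return "scalar_kernel"      # safe fallback for anything older

print(pick_kernel({"avx2", "sse2"}))  # avx2_kernel
print(pick_kernel({"sse2", "fpu"}))   # sse2_kernel
print(pick_kernel(set()))             # scalar_kernel
```

Real toolchains offer this as a built-in (e.g. compiler function multi-versioning), but the shape of the decision is the same.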
        Interestingly, the one company that actively drives users and developers to new technology, Apple, is the one company that gets the most arrows shot at it in these forums. I see this theme reinforced every year in WWDC videos, where they actively tell developers to use the APIs to exploit the latest hardware, especially hardware yet to arrive. At the same time, Apple is awfully careful about how far back they go with hardware support in new OS releases. Effectively, they obsolete computers that don't have the hardware required by the new software releases. Some see that as evil, but as you so rightfully point out, a lot of performance potential gets left on the table.

        For some it might even be worthwhile to become an Apple developer just to understand how Apple improves performance and manages power at the same time. WWDC videos can be very enlightening even if you are not Apple-centric.
        The problem with x86 is there's no one-size-fits-all solution to significant performance improvements. There's nothing left to change/optimize without causing regressions (whether that be in performance, efficiency, broken compatibility, or cost).
        Well, I have to disagree here. Intel has certainly lost its leadership role in moving x86 forward, but that doesn't mean that somebody else couldn't step up to the plate.

        However, you highlight one problem in the user community, and that is the obsession with backward compatibility. Until AMD or Intel says enough and scraps all of the legacy crap in their dies, the obsession with compatibility, often for software decades old, will hold x86 back.

        Again, contrast this with Apple and WWDC. They are effectively demanding that all App Store apps for ARM be 64-bit. This is driving developers forward and getting them off the legacy mindset. It also benefits Apple, as eventually they won't have to ship software for legacy apps. I also have this suspicion that Apple has one other goal: to pull all hardware support for ARM's 32-bit instruction set from its CPU dies.

        This has all sorts of benefits for Apple if they do it. Contrast this with Intel, which supports modes and functions that haven't been used for decades. This comes back to the idea that a company or organization can create a mindset where using the latest technologies in a processor is a good thing.

        Not surprisingly, the Linux community has, in my opinion, struggled greatly with an excessive focus on legacy hardware. It is a bit hilarious that mainstream distros still have 32-bit support, much less 32-bit spins. I can't even remember when 64-bit AMD hardware first came out, but it has been a very, very long time. As you noted above, it took years to take advantage of such hardware and each architectural addition after that.

        By the way, I just typed all of this on my iPhone 4, which is still working, so I don't want people to think that I commonly blow money on the latest tech. However, the phone needs to be replaced soon, and frankly I will buy as much phone tech as I can afford, because I know that tech will be leveraged by Apple and others.



        • #14
          Originally posted by wizard69 View Post
          Actually, I expect to see a refocus on clock rates. As we hit dimensional walls, CPU designers will need higher clock rates to move forward. All it will take is success in implementing a few new technologies. If we are lucky, we may see 10 GHz CPUs by 2020.
          Easier said than done. You can't just bend physics to your will in such a manner. In an extremely generalized sense, the smaller the transistors, the lower your maximum frequency gets: higher frequencies need higher voltage, and higher voltage promotes quantum tunneling, which shrinking the transistors exacerbates further. So sure, if we were still making 45nm parts, 10 GHz could probably be done, but those parts would also be super inefficient.
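The voltage/frequency coupling described above is why clock pushes are so costly: first-order CMOS dynamic power is P = C·V²·f, so raising frequency while also raising voltage makes power grow much faster than the clock. A quick sketch; the capacitance and voltage figures are made up for illustration, not measurements of any real part:

```python
def dynamic_power(c_farads, v_volts, f_hz):
    """First-order CMOS dynamic power: P = C * V^2 * f (watts)."""
    return c_farads * v_volts ** 2 * f_hz

# Illustrative numbers only:
base = dynamic_power(1e-9, 1.0, 4e9)     # 4 GHz at 1.0 V
pushed = dynamic_power(1e-9, 1.4, 10e9)  # 10 GHz, assuming ~1.4 V is needed
print(round(pushed / base, 1))           # ~4.9x the power for 2.5x the clock
```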
          I still don't understand the focus on single-thread performance in these discussions. Apparently people don't remember the days of single-core machines with external floating-point units and crappy GPU cards. Multitasking operating systems, threaded apps, and the cores to run those apps have done more to create the kinds of computers users want than anything else.
          The focus on single-threaded performance is extremely relevant, because practically no CPU-bound tasks are parallel. Even when you look at a game that uses 4 cores, it isn't doing that by splitting its workload among each core; rather, each core has a predetermined task (one for game logic, one for particle physics, one for rendering, etc.). Single-threaded CPU tasks are going to stay for a long while, and frankly, they should. Not all tasks benefit from more threads.
          As for things like external FPUs, that's different - they're still part of the same singular pipeline. There's a big difference between offloading a single task to another piece of hardware vs splitting a single task into multiple pieces of hardware that process it simultaneously.

          The reason why servers have so many cores (and benefit from it) is because they run dozens to hundreds of individual processes at a time.
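The limit on what extra cores buy a mostly serial workload is Amdahl's law: if a fraction p of a task parallelizes, the speedup on n cores is 1 / ((1 - p) + p/n). A sketch with a made-up parallel fraction:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of the work scales."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / cores)

# A task that is 75% parallel tops out at 4x no matter how many cores:
for n in (2, 4, 16, 1024):
    print(n, round(amdahl_speedup(0.75, n), 2))
```

The serial 25% dominates almost immediately, which is why servers juggling hundreds of independent processes benefit from core counts that desktop workloads cannot use.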
          That doesn't mean single-core performance isn't important in niche fields. Rather, what is being pointed out here is that single-core performance doesn't deliver what most users want.
          Right, but my point is there's not a whole lot that can be done about this from a hardware manufacturer's perspective. This leads me to your next point:
          At the same time, Apple is awfully careful about how far back they go with hardware support in new OS releases. Effectively, they obsolete computers that don't have the hardware required by the new software releases.
          I agree - I actually praise Apple for how they manage to optimize their hardware so effectively. I don't think it is up to Intel or AMD or whomever to improve hardware performance. I feel devs are getting lazy and software development is getting far too abstracted and inefficient. Clear Linux or old consoles like PS2 are solid examples of how much better software can perform just by writing better code.
          For some it might even be worthwhile to become an Apple developer just to understand how Apple improves performance and manages power at the same time. WWDC videos can be very enlightening even if you are not Apple-centric.
          Well, being a closed-off ecosystem makes things a hell of a lot easier. There are a lot of variables (both literal and figurative) that developers can remove to improve performance.
          However, you highlight one problem in the user community, and that is the obsession with backward compatibility. Until AMD or Intel says enough and scraps all of the legacy crap in their dies, the obsession with compatibility, often for software decades old, will hold x86 back.
          I 100% agree, but it doesn't make financial sense to break compatibility. People got all up in arms over compatibility issues with Vista, or when Apple decided to drop floppy drives.
          Not surprisingly, the Linux community has, in my opinion, struggled greatly with an excessive focus on legacy hardware. It is a bit hilarious that mainstream distros still have 32-bit support, much less 32-bit spins. I can't even remember when 64-bit AMD hardware first came out, but it has been a very, very long time. As you noted above, it took years to take advantage of such hardware and each architectural addition after that.
          I don't totally disagree, but for the most part, that legacy support isn't holding back much. The problem is when modern software revolves around the needs of legacy stuff - that's when I want changes made.



          • #15
            Originally posted by wizard69 View Post
            Interestingly, the one company that actively drives users and developers to new technology, Apple, is the one company that gets the most arrows shot at it in these forums. I see this theme reinforced every year in WWDC videos, where they actively tell developers to use the APIs to exploit the latest hardware, especially hardware yet to arrive. At the same time, Apple is awfully careful about how far back they go with hardware support in new OS releases. Effectively, they obsolete computers that don't have the hardware required by the new software releases. Some see that as evil, but as you so rightfully point out, a lot of performance potential gets left on the table.

            For some it might even be worthwhile to become an Apple developer just to understand how Apple improves performance and manages power at the same time. WWDC videos can be very enlightening even if you are not Apple-centric.
            You've gotta be kidding me.

            Macs have been consistently performing the worst in almost every benchmark on this site, and you're here fanboying over Apple's performance decisions? Like, for real.

            Let's take the latest "offering": Apple deprecating (and in the future dropping) OpenGL. Explain to me how the fuck this increases performance in the slightest, when a library that's not used is simply never loaded in the first place? All it does is make any application that depends on it fail to run. Something that fails to run is a performance boost?

            Or are you actually complaining about the (comparatively tiny) disk space it takes up when it's never used? If you're relying on such modern hardware, surely a few MiB of disk space aren't such a big deal, considering 90% of the OS's disk space is probably taken up by other bloated data.

            Besides stability, there's only one thing more important than performance, and that is (backwards) compatibility.

            Apple's decisions have nothing to do with performance. I mean, they are, after all, using the worst major compiler in terms of optimizations. Performance is not their priority one bit, so why fanboy over it?


            Either way, you completely missed the point, which was to stop relying on hardware improvements that may never come and instead focus on software improvements. Freeing up some disk space that is used for "compatibility purposes" and "never loaded" is not a "software improvement" in the slightest.



            • #16
              Originally posted by wizard69 View Post
              Actually, I expect to see a refocus on clock rates. As we hit dimensional walls, CPU designers will need higher clock rates to move forward. All it will take is success in implementing a few new technologies. If we are lucky, we may see 10 GHz CPUs by 2020.
              https://www.eetimes.com/document.asp?doc_id=1180862

              Really, going way faster than 10 GHz has been possible for quite some time; 210 GHz has been done for radio. The biggest issue for a CPU is the number of transistors and the total heat generated at that speed.

              5nm is planned to enter production in 2020 and 3nm in 2022, so the silicon limit is coming up quite fast. Intel at its current pace is a full node behind the leaders, so it will be more like 2024 before Intel hits the silicon limit. The 3nm plants now under construction are it for silicon going smaller; after that, for silicon, it's structure optimization. Even moving to carbon there are at most two more steps, 1nm and 0.5nm. The carbon limit is the complete limit, as there is nothing else that can make structures smaller.
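To see why a few nanometres is treated as a hard wall: silicon's cubic unit cell is about 0.543 nm on a side, so the node figures above are only a handful of unit cells wide (with the caveat that modern node names are marketing labels rather than literal feature sizes). A rough back-of-the-envelope sketch:

```python
SILICON_LATTICE_NM = 0.543  # edge length of silicon's cubic unit cell

def lattice_cells(feature_nm):
    """How many silicon unit cells fit across a feature of this width."""
    return feature_nm / SILICON_LATTICE_NM

# Taking the node names at face value for illustration:
for node in (5, 3, 1, 0.5):
    print(f"{node} nm ~ {lattice_cells(node):.1f} unit cells across")
```

At "3nm" that is roughly five and a half unit cells, and below 1nm less than a single cell, which is why shrinking past that point stops being a matter of engineering and becomes a matter of atoms.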

              Something to consider: making a high-performing chip has always been a race down the nanometres. When the limit is hit, things will get interesting for RISC-V and other open-design silicon. The focus will also shift to cost, including plant cost.



              • #17
                Originally posted by wizard69 View Post
                Actually, I expect to see a refocus on clock rates. As we hit dimensional walls, CPU designers will need higher clock rates to move forward. All it will take is success in implementing a few new technologies. If we are lucky, we may see 10 GHz CPUs by 2020.
                I put that on the shelf next to clean fusion power generators. Really, optical computing technology is nowhere near able to pull that off this decade.



                • #18
                  Originally posted by oiaohm View Post
                  Really, going way faster than 10 GHz has been possible for quite some time; 210 GHz has been done for radio. The biggest issue for a CPU is the number of transistors and the total heat generated at that speed.
                  He is talking about optical computing. https://www.quora.com/Will-10GHz-pro...ssible-by-2020

                  It's vastly less power-hungry than electronics and has fewer issues with shrinking, since it's based on light rather than electrons.

                  It's a decent contender to replace electronics in the future: as electronics reach their physical limits and stagnate, optical computing can eventually catch up, since its physical limits are far beyond those of electronics.

                  But 2020 is a complete bullshit estimate.
                  Last edited by starshipeleven; 06-08-2018, 07:32 PM.



                  • #19
                    Originally posted by Weasel View Post
                    Either way, you completely missed the point, which was to stop relying on hardware improvements that may never come and instead focus on software improvements. Freeing up some disk space that is used for "compatibility purposes" and "never loaded" is not a "software improvement" in the slightest.
                    Not an Apple fan but I'm playing devil's advocate a bit here.

                    I think that making such a massive breaking change and forcing applications to use Metal (or MoltenVK) would end up forcing software improvement to some extent, as it replaces an older, less efficient API with a far more efficient one. Even if developers use a framework it would be better, as there are frameworks specific to one use case or another, while OpenGL was a one-size-fits-all kind of thing.

                    Killing off older software that can't adapt is also going to be good, as that software is likely not optimized with modern software and hardware in mind.

                    Of course this won't guarantee that new software won't be written like shit, but it should give the tree a big whack, so to speak.

                    Making the move the way they did is a very, very Apple thing to do: a gigantic "fuck you" to everyone, causing massive breakage and whining, with people left out on unsupported hardware/software. But I strongly suspect that their motives do have performance and software improvement in them. Maybe not 100% of the reason, but at least a good 40%, I think.
                    Last edited by starshipeleven; 06-08-2018, 07:32 PM.



                    • #20
                      Originally posted by Weasel View Post
                      You know, hardware is paid for BY USERS, and their software is run BY USERS, while programmers are paid by the company or whatever.
                      The issue is that the cost of actually doing a significantly better job of software development would increase the cost of the software too much, which would mean too high a price for it to sell to consumers.

                      Users, meanwhile, can spread the cost of their hardware over years and over many different programs, and can also sell their gear used and get back some of that value.

                      Software developers can't: they either sell their software around a price that THE MARKET determines is "fair", or they don't sell it at all and need to go find another job.

