In the end, the world is much more complex than a simple linear "more is more" relation.
Intel MPX Support Removed From GCC 9
Originally posted by Weasel View Post: It's not fast enough until most (all?) applications start in less than 100ms whenever the CPU is the bottleneck. Just because people tolerate shit performance doesn't mean it's "fast enough". Once upon a time we had PCs that booted instantly with MS-DOS or derivatives, and web pages that rendered instantly (excluding the network traffic, which depends on the connection); we call them "ancient" now. Is this supposed to be progress when today feels sluggish in comparison?
I know the operating system was so much smaller back then, but guess what? Hardware has typically improved far faster than the "features" we add to our software. GPUs and GPU-intensive applications are proof enough, since they've actually made progress. So why is CPU-bound software so much slower these days, relatively speaking?
And of course there's always power efficiency too: anything that's faster (in terms of software) is also more power efficient.
Comment
Originally posted by schmidtbag View Post: I see your point, but it isn't that simple. Unlike with CPUs, you can just keep tacking on more cores with GPUs and you'll keep getting more performance out of them for just about any application.
As a result, die shrinks are proportionately more beneficial to GPUs too. CPUs are stagnating because the remaining improvements are much more limited. Clock speeds are pretty much as high as they're going to get.
Adding more cores doesn't improve the performance of single-threaded tasks. Adding bigger caches or extending pipelines improves the performance of some tasks while slowing down others.
That doesn't mean single-core performance isn't important in niche fields. Rather, what is being pointed out here is that single-core performance doesn't deliver what most users want.
Adding more instruction sets offers performance improvements, but only for niche calculations, and only if the application was built to use them; many devs don't use fancy instruction sets because they break backward compatibility, whether with other CPU architectures or with previous generations. Extending features like Hyper-Threading (where, for example, we have 2 threads per core) helps with multitasking, but can slow down multi-threaded processes.
Interestingly, the one company that actively drives users and developers to new technology, Apple, is the one company that gets the most arrows shot at it in these forums. I see this theme reinforced every year in WWDC videos, where they actively tell developers to use the APIs to exploit the latest hardware, especially hardware yet to arrive. At the same time, Apple is awfully careful about how far back they go with hardware support in new OS releases. Effectively they obsolete computers that don't have the hardware required by the new software releases. Some see that as evil, but as you so rightfully point out, a lot of performance potential gets left on the table.
For some it might even be worthwhile to become an Apple developer to understand how Apple improves performance and manages power at the same time. WWDC videos can be very enlightening even if you are not Apple-centric.
The problem with x86 is there's no one-size-fits-all solution to significant performance improvements. There's nothing left to change/optimize without causing regressions (whether that be in performance, efficiency, broken compatibility, or cost).
However, you highlight one problem in the user community, and that is the obsession with backward compatibility. Until AMD or Intel says enough and scraps all of the legacy crap in their dies, the obsession with compatibility, often for software decades old, will hold x86 back.
Again, contrast this with Apple and WWDC. They are effectively demanding that all App Store apps for ARM be 64-bit. This is driving developers forward and getting them off the legacy mindset. It also benefits Apple, as eventually they won't have to ship software for legacy apps. I also suspect Apple has one other goal: to pull all hardware support for ARM's 32-bit instruction set from its CPU dies.
This has all sorts of benefits for Apple if they do it. Contrast it with Intel, which supports modes and functions unused for decades. It comes back to the idea that a company or organization can create a mindset where use of the latest technologies in a processor is a good thing.
Not surprisingly, the Linux community has, in my opinion, struggled greatly with an excessive focus on legacy hardware. It is a bit hilarious that mainstream distros still have 32-bit support, much less 32-bit spins. I can't even remember when 64-bit AMD hardware first came out, but it has been a very, very long time. As you noted above, it took years to take advantage of such hardware, and of each architectural addition after that.
By the way, I just typed all of this on my iPhone 4, which is still working. I don't want people to think that I commonly blow money on the latest tech. However, the phone needs to be replaced soon, and frankly I will buy as much phone tech as I can afford, because I know that tech will be leveraged by Apple and others.
Comment
Originally posted by wizard69 View Post: Actually I expect to see a refocus on clock rates. As we hit dimensional walls, CPU designers will need higher clock rates to move forward. All it will take is success implementing a few new technologies. If we are lucky we may see 10 GHz CPUs by 2020.
I still don't understand the focus on single-thread performance in these discussions. Apparently people don't remember the days of single-core machines with external floating-point units and crappy GPU cards. Multitasking operating systems, threaded apps, and cores to run those apps have done more to create the kinds of computers users want than anything else.
As for things like external FPUs, that's different: they were still part of the same single pipeline. There's a big difference between offloading a single task to another piece of hardware and splitting a single task into multiple pieces of hardware that process it simultaneously.
The reason why servers have so many cores (and benefit from it) is because they run dozens to hundreds of individual processes at a time.
Comment
Originally posted by wizard69 View Post: Interestingly, the one company that actively drives users and developers to new technology, Apple, is the one company that gets the most arrows shot at it in these forums. I see this theme reinforced every year in WWDC videos, where they actively tell developers to use the APIs to exploit the latest hardware, especially hardware yet to arrive. At the same time, Apple is awfully careful about how far back they go with hardware support in new OS releases. Effectively they obsolete computers that don't have the hardware required by the new software releases. Some see that as evil, but as you so rightfully point out, a lot of performance potential gets left on the table.
For some it might even be worthwhile to become an Apple developer to understand how Apple improves performance and manages power at the same time. WWDC videos can be very enlightening even if you are not Apple-centric.
Macs have consistently performed the worst in almost every benchmark on this site, and you're here to fanboy about Apple's performance decisions? Like, for real.
Let's take the latest "offering": Apple deprecating (and eventually dropping) OpenGL. Explain to me how the fuck that increases performance in the slightest, when a library that's not used is simply never loaded in the first place? All it does is make any application that depends on it fail to run. Something that fails to run is a performance boost?
Or are you actually complaining about the (comparatively tiny) disk space it takes up when it's never used? If you're so modern, relying on such modern hardware, surely a few MiB of disk space isn't a big deal, considering 90% of the OS's disk space is probably taken up by other bloated data.
Besides stability, there's only one thing more important than performance, and that is (backwards) compatibility.
Apple's decisions have nothing to do with performance. They are, after all, even using the worst major compiler in terms of optimizations. Performance is not their priority one bit, so why fanboy over it?
Either way, you completely missed the point, which was to stop relying on hardware improvements that may never come and instead focus on software improvements. Freeing up some disk space that is used for "compatibility purposes" and "never loaded" is not a "software improvement" in the slightest.
Comment
Originally posted by wizard69 View Post: Actually I expect to see a refocus on clock rates. As we hit dimensional walls, CPU designers will need higher clock rates to move forward. All it will take is success implementing a few new technologies. If we are lucky we may see 10 GHz CPUs by 2020.
Really, going way faster than 10 GHz has been possible for quite some time; 210 GHz has been done for radio. The biggest issues for a CPU are the number of transistors and the total heat generated at that speed.
5nm is planned to enter production in 2020 and 3nm in 2022, so the silicon limit is coming up quite fast. Intel is currently a full node behind the leaders, so it will be around 2024 before Intel hits the silicon limit. The 3nm plants now under construction are about it for making silicon smaller; after that, it's structure optimization. Even with carbon there are at most two more steps, 1nm and 0.5nm, and the carbon limit is the absolute end of going smaller, as there is nothing else that can make structures smaller.
Something to consider: making a high-performing chip has always been a race down the nanometers. When the limit is hit, things will get interesting for RISC-V and other open silicon designs, and the focus will also shift to fab cost.
Comment
Originally posted by oiaohm View Post: Really, going way faster than 10 GHz has been possible for quite some time; 210 GHz has been done for radio. The biggest issues for a CPU are the number of transistors and the total heat generated at that speed.
Optical computing is vastly less power-hungry than electronics and has fewer issues with shrinking, as it is based on light rather than electrons.
It's a decent contender to replace electronics in the future: as electronics reach their physical limits and stagnate, optical computing can eventually catch up, since its physical limits are far beyond those.
But 2020 is a complete bullshit estimate. Last edited by starshipeleven; 08 June 2018, 07:32 PM.
Comment
Originally posted by Weasel View Post: Either way, you completely missed the point, which was to stop relying on hardware improvements that may never come and instead focus on software improvements. Freeing up some disk space that is used for "compatibility purposes" and "never loaded" is not a "software improvement" in the slightest.
I think that making such a massive breaking change and forcing applications to use Metal (or MoltenVK) would end up forcing software improvement to some extent, as it replaces an older and less efficient API with a far more efficient one. Even if applications go through a framework it would be better, as there are frameworks specific to one use case or another, while OpenGL was a one-size-fits-all kind of thing.
Killing off older software that can't adapt is also going to be good, as it's likely not optimized with modern software and hardware in mind.
Of course this won't guarantee that new software won't be written like shit, but it should give the tree a big whack, so to speak.
Making the move the way they did is a very, very Apple thing to do, a gigantic "fuck you" to everyone, causing massive breakage and whining and leaving people with unsupported hardware and software, but I strongly suspect that their motives do include performance and software improvement. Maybe not 100% of the reason, but at least a good 40%, I think. Last edited by starshipeleven; 08 June 2018, 07:32 PM.
Comment
Originally posted by Weasel View Post: You know, hardware is paid for BY USERS, and software is run BY USERS; programmers are paid by the company or whatever.
Users can spread the cost of their hardware over years and over many different programs, and can also sell their stuff used and get back some of that value.
Software developers can't: they either sell their software around a price that THE MARKET determines is "fair", or they don't sell it at all and need to find another job.
Comment