A Kernel Maintainer's Prediction On The CPU Architecture Landscape For 2030
-
Originally posted by ALRBP: That said, silicon computers will definitely, and I believe quickly, hit their own physical limitations. As their feature sizes approach the atomic scale, integrated-circuit components will stop becoming more compact with each generation. At that point there will still be room for optimization, but without a transition to a base material other than silicon (which will have its own limits, and whose adoption could seriously reshape the market), the growth of computing power will slow down and stop.
-
Originally posted by Weasel: He's either a clown, or trolling.
For a long time, I have been involved in the "word-length growth" of processors at a more technical level than most, starting with Intel's 4004/4040 and the Japanese 4-bit CPUs, up to where we are now.
There was always upward pressure early on so that, aside from the need to address more memory, more instructions could be accommodated in one word; an 8-bit field for the instruction register allows for 2**8, or 256, basic instructions (let's forget the pedantic, artificial definition of "word"). That "problem" was solved completely once a 32-bit word could contain not only the op-code for somewhat more than 256 instructions (how many basic instructions does one really need? How much memory do you HAVE to have?), but also any references to the machine's registers or memory locations that might be dictated or implied by the instruction (a quick sanity check: 2**32 byte addresses = 4 GiB, with 1 KiB = 1024 B).
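To make the field arithmetic concrete, here is a minimal sketch; the instruction format, field widths, and opcode value are all invented for illustration, not taken from any real ISA:

```python
# Hypothetical fixed 32-bit instruction format:
# [opcode:8][rd:5][rs1:5][rs2:5][unused:9]
# An 8-bit opcode gives 2**8 = 256 possible basic instructions,
# and each 5-bit field can name one of 2**5 = 32 registers.

def encode(opcode, rd, rs1, rs2):
    """Pack the four fields into a single 32-bit word."""
    assert 0 <= opcode < 2**8 and all(0 <= r < 2**5 for r in (rd, rs1, rs2))
    return (opcode << 24) | (rd << 19) | (rs1 << 14) | (rs2 << 9)

def decode(word):
    """Unpack (opcode, rd, rs1, rs2) from a 32-bit word."""
    return ((word >> 24) & 0xFF, (word >> 19) & 0x1F,
            (word >> 14) & 0x1F, (word >> 9) & 0x1F)

word = encode(0x2A, 3, 1, 2)
print(decode(word))  # (42, 3, 1, 2)
```

Even with three register references packed in, nine bits are left over, which is the poster's point: one 32-bit word is already roomy enough for an opcode plus its operand references.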
The higher resolution and conversion speed promised by a 128-bit word length in analog-digital-analog applications is not a valid reason either: almost all A/D and D/A applications I have ever encountered are inherently limited to 32 bits, and usually much less, by the physical constraints of the processes themselves.
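A quick back-of-the-envelope supports this: the standard quantization-noise formula puts an ideal N-bit converter's dynamic range at roughly 6.02*N + 1.76 dB, so a true 32-bit converter would demand nearly 200 dB of analog dynamic range, far beyond what real noise floors allow:

```python
# Ideal dynamic range of an N-bit converter (standard quantization-noise
# formula, ideal converter only; real parts fall short of this).
def ideal_dynamic_range_db(bits):
    return 6.02 * bits + 1.76

# 16 bits is ~98 dB, 24 bits ~146 dB, 32 bits ~194 dB.
for bits in (16, 24, 32):
    print(bits, round(ideal_dynamic_range_db(bits), 1))
```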
I tend to lean more towards the view that the pressure to move to 128-bit processors is due to the "mountain-climber's syndrome" than anything else: "...because it's there...". Or the "hacker's syndrome": "...because I can...". But I would really like to get another well-thought-out opinion (actually, a lot of them) on this subject.
-
Originally posted by skeevy420: One thing I don't think was considered is all the lost trust in x86
Originally posted by skeevy420: ...for desktop users, the only thing x86 has going for it anymore is that it plays games better than the rest. But thanks to Intel, x86 isn't much of a trusted platform these days, so POWER has the potential to pull in x86 users that don't want to go to ARM.
-
Originally posted by Weasel: He's either a clown, or trolling.
-
Originally posted by s_j_newbury: I expect we'll be in an economic depression for the next decade at least, until a new economic system emerges, adapted to operate within the converging bio-physical constraints we're now hitting.
To my mind, this will mean a lot of repurposing of existing tech, with much less focus on new consumer hardware due to unaffordability, while the high end will look to the best-value performance options, which may provide an opportunity for RISC-V and POWER with the "consumer subsidy" removed.
-
I don't think that was much of a sharp-minded prediction at all.
He didn't touch the more interesting subjects.
Oh well...
-
One thing I don't think was considered is all the lost trust in x86, because, let's face it, for desktop users the only thing x86 has going for it anymore is that it plays games better than the rest. But thanks to Intel, x86 isn't much of a trusted platform these days, so POWER has the potential to pull in x86 users that don't want to go to ARM. It'll be interesting to see what happens once Wine and Hangover become "gamer ready" on ARM and POWER, because those are the two architectures most primed to take x86's desktop/workstation spot.
For servers, the architecture question is more moot, since we're talking about Linux, which is, for the most part, architecture-agnostic and runs the same everywhere. They'll pick an architecture based on being good at either low-power or high-speed computing, depending on their needs... which makes me wonder:
Since modern x86 CPUs are said to be RISC-like at their core with additional features and whatnot added on, I wonder why AMD or Intel don't move to RISC-V or (Open)POWER and then figure out how to add all the x86 stuff on top, preferably in a modular, dual-CPU-like way, so we could remove the hardware security hole if or when we don't need it. Imagine: instead of having to buy entire new systems every couple of years, we could get by with buying a newer instruction-set module.
How much of your 2010-2014 hardware is still perfectly viable outside of not having AVX8675309? I feel like a plug-in interface for CPU instructions could reduce a lot of computing waste, and if they go that route, an open CPU platform is the way to go to prevent Intel speculative-execution shenanigans that allow hackers to dump manure trucks behind wind-tunnel fans. Do we really want that much shit to hit the fan again?
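The "instruction-set module" idea above can be sketched in software terms as runtime dispatch on whatever extensions the host reports; the extension name, function names, and feature set below are all made up for illustration:

```python
# Illustrative sketch: pick the best implementation of an operation based on
# which (hypothetical) ISA extensions the host advertises, falling back to a
# baseline path when the fancy module isn't plugged in.

def sum_squares_baseline(xs):
    """Portable fallback path."""
    return sum(x * x for x in xs)

def sum_squares_vectorized(xs):
    """Stand-in for a wide-vector path; same result, would use the
    hypothetical 'vector128' extension on real hardware."""
    return sum(x * x for x in xs)

def pick_kernel(features):
    """Choose an implementation from a set of reported ISA extensions."""
    if "vector128" in features:  # hypothetical extension name
        return sum_squares_vectorized
    return sum_squares_baseline

kernel = pick_kernel({"base", "vector128"})
print(kernel([1, 2, 3]))  # 14
```

This is essentially what feature-dispatching math libraries already do in software; the post's proposal is to make the same substitution possible at the hardware level.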