A Kernel Maintainer's Prediction On The CPU Architecture Landscape For 2030

  • Neuro-Chef
    replied
    Originally posted by pal666 View Post
    you have plenty of room to grow in the third dimension
    Heat dissipation will be glad to hear that...

  • Neuro-Chef
    replied
    Originally posted by skeevy420 View Post
    One thing I don't think was considered is all the lost trust in x86, because, let's face it, for desktop users the only thing x86 has going for it anymore is that it plays games better than the rest.
    x86 is also more generic in terms of booting and standards. This ARM SoC device tree thing is a PITA.

  • Neuro-Chef
    replied
    Originally posted by c117152 View Post
    More importantly, as the x86 patents dry up there will be new x86 players
    No. Intel and AMD take care to generate enough new patents to make using the old, patent-free parts pointless.

  • Space Heater
    replied
    Originally posted by Weasel View Post
    He's either a clown, or trolling.
    He's talking about the 128-bit CHERI RISC-V ISA (CHERI = Capability Hardware Enhanced RISC Instructions), which uses the extra bits for capabilities. They are not doing this simply to further increase the virtual address space.

    From their FAQ:

    we explored and developed a 128-bit compressed capability format employing fat-pointer compression techniques. This approach exploits redundancy between the two 64-bit virtual addresses representing bounds and the 64-bit pointer itself. The CHERI-128 approach retains strong C-language compatibility (e.g., out-of-bounds pointers) and retains our required security properties (e.g., monotonicity), while also achieving good microarchitectural performance (i.e., avoiding multi-cycle delays for key operations). 128-bit capabilities substantially reduce the data-cache overhead of CHERI for pointer-intensive workloads. Support for 128-bit capabilities can be found in recent versions of our CHERI FPGA prototype and also QEMU-CHERI. Our 2019 IEEE TC paper on CHERI Concentrate documents our approach in detail.
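    For a rough sense of the compression idea, here is a hypothetical C sketch; it is not the real CHERI-128 encoding (see the CHERI Concentrate paper for that), just an illustration that the bounds are stored compressed relative to the address instead of as two full 64-bit values:

        #include <stdint.h>

        /* A naive "fat pointer" carries the address plus two full
         * 64-bit bounds, costing 192 bits per pointer: */
        struct naive_capability {
            uint64_t address;  /* the pointer itself */
            uint64_t base;     /* full lower bound   */
            uint64_t top;      /* full upper bound   */
        };

        /* Because base and top normally lie close to the address, a
         * compressed capability keeps the full 64-bit address and packs
         * the bounds as a small floating-point-like offset pair, plus
         * permission bits, into a second 64-bit word (the layout of
         * "meta" here is purely illustrative): */
        struct compressed_capability {
            uint64_t address;  /* full virtual address                 */
            uint64_t meta;     /* compressed bounds + permissions etc. */
        };

        _Static_assert(sizeof(struct compressed_capability) == 16,
                       "fits the 128-bit capability budget");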

  • c117152
    replied
    Originally posted by programmerjake View Post

    I think c117152 may have meant x86_64 specifically.
    Yup. The SSE stuff especially is only a few months away from expiring.

  • ALRBP
    replied
    Originally posted by s_j_newbury View Post

    It's interesting that you interpreted that by way of social and technological material constraints, i.e. pandemic lockdown and the end of Moore's Law, which are definitely real issues, potentially significant, and which do very much tie into what I was meaning. My point was about the subject of Biophysical Economics, "Limits to Growth", and ecological collapse. The computer industry will have to adapt or die, as we all will; this is only drawn into tighter focus by the COVID-19 pandemic and particularly its effect on the energy industry. As you correctly state, quarantine doesn't directly prevent consumer spending, and has been a boon for online commerce, but that's not sustainable and isn't a model we can adopt to solve our many predicaments.
    I understand your point, but for me the computer industry will not have that much difficulty with "Limits to Growth" and ecological collapse. Silicon is abundant and energy consumption is limited. As for the pandemic, sectors like air transport will be much more affected by ecological issues than computing. A crisis in some sectors could even mean more profit for the computing industry, given its ability to provide worldwide communication with low energy consumption (compared to physical transport).

    Now, I am absolutely not confident in any prediction on that subject, including mine. I think that the range of possible futures for humanity, even only a few decades ahead, goes from nuclear fusion removing the energy constraint and education eliminating authoritarian regimes, to a general resource drought and a deadly nuclear war caused by some populist dictator.

  • programmerjake
    replied
    Originally posted by pal666 View Post
    you have a vivid imagination. x86 has existed since the late seventies; there was plenty of time for patents to expire
    I think c117152 may have meant x86_64 specifically.

  • s_j_newbury
    replied
    Originally posted by ALRBP View Post

    The computer industry stands apart when it comes to the "bio-physical constraints we're now hitting". Quarantine does not prevent people from consuming virtual things (quite the opposite), and growth in computing power is driven by size reduction, not by increased energy consumption. That said, silicon computers will definitely, and I believe quickly, hit their own physical limitations. With their size getting closer to the atomic scale, integrated circuit components will stop becoming more compact with each generation. At that point there will still be room for optimization, but without a transition to a base material other than silicon (which will also have its limits), a transition that could lead to serious changes in the market, the growth of computing power will slow down and stop.
    It's interesting that you interpreted that by way of social and technological material constraints, i.e. pandemic lockdown and the end of Moore's Law, which are definitely real issues, potentially significant, and which do very much tie into what I was meaning. My point was about the subject of Biophysical Economics, "Limits to Growth", and ecological collapse. The computer industry will have to adapt or die, as we all will; this is only drawn into tighter focus by the COVID-19 pandemic and particularly its effect on the energy industry. As you correctly state, quarantine doesn't directly prevent consumer spending, and has been a boon for online commerce, but that's not sustainable and isn't a model we can adopt to solve our many predicaments.

  • bridgman
    replied
    Originally posted by skeevy420 View Post
    Since modern x86 CPUs are said to be RISC-like at their core with additional features and whatnot added on, I wonder why AMD or Intel don't move on to RISC-V or (Open)Power and then figure out how to add all the x86 stuff on top, preferably in a modular, dual-CPU-like way, so we can remove the hardware security hole if or when we don't need it. Imagine, instead of having to buy entire new systems every couple of years, we could get by with buying a newer instruction set module.

    How much of your 2010-2014 hardware is still perfectly viable outside of not having AVX8675309? I feel like a plug-in interface for CPU instructions could reduce a lot of computing waste and, if they go that route, an open CPU platform is the way to go to prevent Intel SpecEx shenanigans that allow hackers to dump manure trucks behind wind-tunnel fans. Do we really want that much shit to hit the fan again?
    Essentially all of the security holes identified so far are related to the execution part of the CPU (i.e. the already-RISC part) and not to the x86 decode part, AFAIK, so stripping off the x86 decode front end would not do much for security.
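    For illustration, the classic Spectre v1 gadget shows why the decode front end is not the issue: the leak comes from speculative execution of ordinary branches and loads, which behaves the same whether the instructions were decoded from x86, Arm, or RISC-V. A minimal C sketch of the well-known pattern (names and sizes are illustrative):

        #include <stddef.h>
        #include <stdint.h>

        /* Spectre v1 (bounds-check bypass). Nothing here depends on the
         * x86 decoder: the branch predictor speculates past the bounds
         * check and the speculative load leaves a cache footprint, all
         * inside the "already-RISC" execution engine. */
        uint8_t array1[16];
        uint8_t array2[256 * 64];

        uint8_t victim(size_t x, size_t array1_size) {
            if (x < array1_size) {              /* may be predicted taken */
                return array2[array1[x] * 64];  /* speculative load encodes
                                                   array1[x] in the cache even
                                                   when x is out of bounds */
            }
            return 0;
        }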

  • ALRBP
    replied
    Originally posted by pal666 View Post
    you have plenty of room to grow in the third dimension
    Yes, but this implies increased energy and materials cost, unlike miniaturization. Also, and this is probably the main issue, more thermal dissipation and all the constraints that come with it. A high-end CPU already dissipates about a quarter of a small electric heater's power (>200 W); a few years ago, I lived in a (well-insulated) small apartment where I shut down the main room's heater because my computer (AMD FX CPU) heated the room enough.
    Last edited by ALRBP; 31 August 2020, 11:16 AM.
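    As a back-of-the-envelope illustration of the thermal objection (the numbers are rough assumptions, not measurements): stacking logic dies multiplies the power dissipated per unit of footprint, while the surface the heatsink can pull heat through stays the same.

        #include <stdio.h>

        /* Assumed figures: a ~200 W die with a ~2 cm^2 footprint,
         * stacked N layers high. The heatsink still only sees the same
         * ~2 cm^2 top surface, so the power density it must extract
         * grows roughly linearly with the layer count. */
        int main(void) {
            const double watts_per_die = 200.0;  /* assumed per layer */
            const double footprint_cm2 = 2.0;    /* assumed footprint */
            for (int layers = 1; layers <= 4; layers++) {
                double density = layers * watts_per_die / footprint_cm2;
                printf("%d layer(s): %.0f W/cm^2 through the same surface\n",
                       layers, density);
            }
            return 0;
        }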
