Intel Publishes "X86-S" Specification For 64-bit Only Architecture

  • #21
    This is a great idea, I hope Intel pursues it!
    A good way to modernize x86: shed the old legacy cruft and make it a cleaner, leaner architecture that is more competitive with modern ISAs, letting Intel and AMD keep making new products with better performance, higher energy efficiency, and better security.

    Originally posted by cl333r View Post
    How about Intel skips this intermediate step and goes straight to ARM or RISC-V.
    That wouldn't happen. I would love for Intel to make an ARM or RISC-V CPU, but migrating from x86 to ARM or RISC-V would be a major undertaking. It is much easier for Intel to add 64-bit instructions alongside the 32-bit ones, as they did, and then later remove the 32-bit instructions.

    Originally posted by hajj_3 View Post
    too late. Risc-V 64bit is going to eat x86 for breakfast.
    Nope. RISC-V is immature and poorly optimized, and it performs poorly: SiFive's "high-performance" $500 boards perform worse than a cheap $35 Raspberry Pi.

    Also, consumer ISVs and IHVs like Dell, HP, Microsoft, etc. are not interested in RISC-V. The only companies interested in RISC-V are hyperscalers like Alibaba that see it as a way to cut costs, and companies like WD that want to replace ARM microcontrollers with RISC-V to cut costs, since it is royalty-free.

    Originally posted by discordian View Post
    The only thing keeping intel/x86 relevant for decades has been backward compatibility. Intel's 32-bit arch failed (iAPX 432), Intel's 64-bit failed (Itanium); only IBM's decision, against their engineers, to pick x86 made them something.
    No, it is about making a smooth transition.
    Intel cannot just launch a new CPU with an entirely different architecture; that would fail, because short-term backward compatibility is essential for a smooth transition. What they can do is add 64-bit instructions (which they did decades ago), contribute 32-bit emulation code to the Linux kernel, get Microsoft to implement 32-bit emulation in Windows, and then launch a new CPU without the old 32-bit instructions and related cruft.

    For end users, everything continues to work as usual (thanks to emulation), and nobody even notices the transition because it is so well planned and smoothly executed.
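
    Linux's binfmt_misc already does this kind of routing in-kernel for foreign-architecture binaries. As a minimal user-space sketch of the same idea (the emulator path /usr/bin/x86_32-emu is hypothetical; the ELF constants are the standard ones from <elf.h>), a launcher could check the ELF class byte and either exec the binary natively or hand it to the emulator:

    Code:
    /* Sketch: run 64-bit ELF binaries natively, hand 32-bit ones to an
     * emulator. The emulator path is hypothetical; the ELF constants
     * are standard (<elf.h>). */
    #include <elf.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <binary> [args...]\n", argv[0]);
            return 1;
        }

        FILE *f = fopen(argv[1], "rb");
        if (!f) {
            perror(argv[1]);
            return 1;
        }
        unsigned char ident[EI_NIDENT];
        if (fread(ident, 1, EI_NIDENT, f) != EI_NIDENT) {
            fprintf(stderr, "%s: short read\n", argv[1]);
            fclose(f);
            return 1;
        }
        fclose(f);

        if (memcmp(ident, ELFMAG, SELFMAG) != 0) {
            fprintf(stderr, "%s: not an ELF binary\n", argv[1]);
            return 1;
        }

        if (ident[EI_CLASS] == ELFCLASS64) {
            execv(argv[1], &argv[1]);          /* 64-bit: run natively */
        } else if (ident[EI_CLASS] == ELFCLASS32) {
            argv[0] = "/usr/bin/x86_32-emu";   /* hypothetical emulator */
            execv(argv[0], argv);              /* binary becomes its argv[1] */
        } else {
            fprintf(stderr, "%s: unknown ELF class\n", argv[1]);
        }
        return 1;                              /* execv only returns on failure */
    }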

    Comment


    • #22
      Originally posted by cl333r View Post
      How about Intel skips this intermediate step and goes straight to ARM or RISC-V.
      That would falsely assume that either ARM or RISC-V is objectively better than x86.

      Comment


      • #23
        Overall I think this is fine if it helps reduce transistor count. Emulation shouldn't be a big deal since you're stepping down; besides, there aren't really many 32-bit binaries demanding enough for emulation to become a bottleneck. ARM can emulate x86 with pretty decent performance despite being a very different architecture that lacks many of its instructions.
        Kinda gets me thinking, though: since hybrid CPUs exist, what if E-cores were designed with 32-bit compatibility in mind? Then you'd get a hybrid architecture.
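
        Pinning 32-bit work to such cores would be ordinary affinity plumbing. A minimal sketch, assuming (hypothetically) that cores 0-3 are the 32-bit-capable E-cores; a real system would have to discover that from CPUID or sysfs rather than hard-coding it:

        Code:
        /* Sketch: pin this process to hypothetical 32-bit-capable E-cores.
         * Cores 0-3 are an assumption for illustration only. */
        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>

        int main(void)
        {
            cpu_set_t set;
            CPU_ZERO(&set);
            for (int cpu = 0; cpu <= 3; cpu++)   /* assumed compat-capable cores */
                CPU_SET(cpu, &set);

            if (sched_setaffinity(0, sizeof(set), &set) != 0) {  /* 0 = self */
                perror("sched_setaffinity");
                return 1;
            }
            /* The scheduler now keeps this process (and any 32-bit code
             * it execs) on the legacy-capable cores. */
            return 0;
        }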

        Originally posted by scottishduck View Post
        IA64 redemption arc
        Came here for this.
        Originally posted by discordian View Post
        The only thing keeping intel/x86 relevant for decades has been backward compatibility.
        Eh, not quite. There are 2 reasons x86 survived as long as it did:
        1. Because there are shockingly still a lot of 32-bit W7 users out there (MS really should've enforced W7 as 64-bit only)
        2. Because Intel themselves continued to make 32-bit CPUs into the 2010s.
        Last edited by schmidtbag; 20 May 2023, 08:50 AM.

        Comment


        • #24
          There was a rumor a few years ago that Intel "will throw away some old SIMD and old hardware remainders"; maybe it's still in the plans too.
          According to our sources – the same who reported the development of Zen two days before the official AMD public announcement – Intel is studying a new uArch in order to replace the current x86 uArchs in the Desktop and Enterprise market. TigerLake (2019) will be the last evolution step of this Core generation, started with Sandy Bridge (developed by the Haifa team). We can say that Haswell, Skylake and Cannonlake are only main...

          Comment


          • #25
            I honestly don't see anyone mentioning the obvious: Intel can do this easily and keep 32-bit compatibility with the big.LITTLE design they already have. Just keep 2 or 4 of the existing efficiency cores, which are currently 32-bit capable, pair them with 64-bit-only cores, and they will be able to market to all the customers they are trying to reach.

            These could be mostly powered off, so there would be no power draw during normal usage, while still getting the benefits when needed.

            I stand corrected, schmidtbag mentioned this right after I posted.
            Last edited by dragorth; 20 May 2023, 08:47 AM. Reason: Called out schmidtbag for thinking the same.

            Comment


            • #26
              So how long until Intel sabotages their X86-S specification with an Intel Atom X86-S Minus?

              dragorth Instead of big.LITTLE, my first thought was a USB3 or PCIe x86_32 accelerator card to fill in the gaps where emulation isn't good enough, or for when software expects actual x86_32 hardware quirks (aviation/aerospace/etc). IMHO, it makes more sense to just remove a feature that (pulls number from ass) 95% of people or businesses won't use or need and to put it into an accessory device. That has the added benefit of greatly lowering manufacturing costs, since backwards compatibility and two ways of doing things won't be baked into the spec, and it allows for greater profitability since you get to sell two things instead of one. It's also better for the environment, since you won't have 4 extra cores wasting energy just being there and doing jack shit, and the resources only go to those who need them instead of wastefully to everyone.
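
              No such card exists, so purely to illustrate the shape of the idea: user space would talk to it like any other accelerator, i.e. open a device node, submit a job, wait for completion. Every name below (device path, ioctl code, job struct) is invented for this sketch:

              Code:
              /* Purely illustrative: dispatching a 32-bit image to a
               * hypothetical x86_32 accelerator card. The device node,
               * ioctl request code, and job struct are all invented. */
              #include <fcntl.h>
              #include <stdint.h>
              #include <stdio.h>
              #include <sys/ioctl.h>
              #include <unistd.h>

              struct x32_job {                /* hypothetical job descriptor */
                  uint64_t image_addr;        /* guest image in our address space */
                  uint64_t image_len;
                  uint32_t entry_point;       /* 32-bit guest entry address */
              };

              #define X32_RUN_JOB _IOW('x', 1, struct x32_job)  /* invented */

              int run_on_card(void *image, uint64_t len, uint32_t entry)
              {
                  int fd = open("/dev/x86_32_accel", O_RDWR);   /* invented node */
                  if (fd < 0) {
                      perror("open");
                      return -1;
                  }
                  struct x32_job job = {
                      .image_addr  = (uint64_t)(uintptr_t)image,
                      .image_len   = len,
                      .entry_point = entry,
                  };
                  int rc = ioctl(fd, X32_RUN_JOB, &job);        /* block until done */
                  if (rc < 0)
                      perror("ioctl");
                  close(fd);
                  return rc;
              }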

              Comment


              • #27
                Most of the people affected by a change like this would probably be those playing older games, and those would probably run just fine performance-wise through a Rosetta 2-like implementation. Most older non-game software either has newer alternatives for specific applications, or it probably needs to be updated anyway and probably isn't running on modern hardware anyway.

                Apple has shown it is possible to get reasonable performance by switching entirely away from x86, and from PowerPC before it. Modern software is also written to be much more portable, so the risk of switching ISAs is much lower than 10+ years ago, and people are not as attached to programs working forever anymore, thanks to phones (Android and iPhone).
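
                For a feel of why this is tractable: at its simplest, an emulator is just a decode-and-dispatch loop over guest instructions, and a Rosetta 2-style translator is that loop plus a cache of already-translated native code. A toy sketch over an invented two-opcode guest ISA:

                Code:
                /* Toy decode-and-dispatch interpreter over an invented
                 * two-opcode guest ISA, showing the basic shape of CPU
                 * emulation. Real x86_32 emulation is this loop plus
                 * decades of decoding detail and a translation cache. */
                #include <stdint.h>
                #include <stdio.h>

                enum { OP_ADDI = 0x01, OP_HALT = 0xFF };  /* invented opcodes */

                int main(void)
                {
                    /* Guest program: r0 += 5; r0 += 7; halt. (2 bytes/insn) */
                    const uint8_t code[] = { OP_ADDI, 5, OP_ADDI, 7, OP_HALT, 0 };
                    uint32_t r0 = 0;      /* one guest register */
                    unsigned pc = 0;      /* guest program counter */

                    for (;;) {
                        uint8_t op  = code[pc];
                        uint8_t imm = code[pc + 1];
                        pc += 2;
                        switch (op) {
                        case OP_ADDI:
                            r0 += imm;                    /* emulate the insn */
                            break;
                        case OP_HALT:
                            printf("r0 = %u\n", r0);      /* prints r0 = 12 */
                            return 0;
                        default:
                            fprintf(stderr, "bad opcode\n");
                            return 1;
                        }
                    }
                }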

                Comment


                • #28
                  Originally posted by skeevy420 View Post

                  dragorth Instead of big.LITTLE, my first thought was a USB3 or PCIe x86_32 accelerator card to fill in the gaps where emulation isn't good enough, or for when software expects actual x86_32 hardware quirks (aviation/aerospace/etc). IMHO, it makes more sense to just remove a feature that (pulls number from ass) 95% of people or businesses won't use or need and to put it into an accessory device. That has the added benefit of greatly lowering manufacturing costs, since backwards compatibility and two ways of doing things won't be baked into the spec, and it allows for greater profitability since you get to sell two things instead of one. It's also better for the environment, since you won't have 4 extra cores wasting energy just being there and doing jack shit, and the resources only go to those who need them instead of wastefully to everyone.
                  Wouldn't be the first time. For Motorola-based Macs in the '90s you could get x86 cards to run PC software. These days the best approach would probably be an FPGA card.

                  Comment


                  • #29
                    Only until x86_128 arrives.

                    Comment


                    • #30
                      Originally posted by skeevy420 View Post
                      dragorth Instead of big.LITTLE, my first thought was a USB3 or PCIe x86_32 accelerator card to fill in the gaps where emulation isn't good enough, or for when software expects actual x86_32 hardware quirks (aviation/aerospace/etc). IMHO, it makes more sense to just remove a feature that (pulls number from ass) 95% of people or businesses won't use or need and to put it into an accessory device. That has the added benefit of greatly lowering manufacturing costs, since backwards compatibility and two ways of doing things won't be baked into the spec, and it allows for greater profitability since you get to sell two things instead of one. It's also better for the environment, since you won't have 4 extra cores wasting energy just being there and doing jack shit, and the resources only go to those who need them instead of wastefully to everyone.
                      The problem with an accessory device is that it won't work for natively booting anything 32-bit, which really is the main appeal of retaining the compatibility. The secondary appeal is for anyone still running a Windows version older than 11; those versions are highly unlikely to get 32-bit x86 emulation, so your suggestion would be fine there. However, OSes older than W11 don't work so well with hybrid CPUs, so I'm not sure they would figure out how to use an accessory CPU.

                      Comment
