Intel Publishes "X86-S" Specification For 64-bit Only Architecture

  • Regarding this... I think it wouldn't be a bad idea to have some sort of on-the-fly emulation. macOS uses Rosetta to run x86 code on ARM; running x86-32 on X86-S with something similar should be painless.

    What I am skeptical of is how big the improvement can be. Let's assume CPU core logic can be shrunk by 20% (a very, very generous claim) by removing x86 baggage. The problem is that core logic is probably less than 30% of the entire die (look at 13900K die shots). The cache is huge, the GPU is huge, the media engine is huge, the memory controller is huge, the fabric connecting everything is huge, the I/O is huge, and none of those things care about changes to the CPU's architecture.

    Maybe there could be some energy-efficiency and performance gains from dropping some logic and simplifying things around it, but I don't expect, for example, that a 10-core CPU will become a 12-core CPU because of this change; that probably isn't going to happen.
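    The back-of-the-envelope arithmetic behind that skepticism can be sketched out. Both input fractions below are the post's assumptions, not measured numbers from any real die:

```python
# Back-of-the-envelope die-area estimate (both fractions are assumptions
# from the discussion above, not measurements of any real chip).
core_logic_fraction = 0.30   # assumed share of the total die taken by core logic
core_shrink = 0.20           # assumed shrink of core logic from dropping legacy x86

# The total die-area saving is the product of the two fractions.
total_die_saving = core_logic_fraction * core_shrink
print(f"Total die area saved: {total_die_saving:.1%}")  # → 6.0%
```

    Even with those generous assumptions, the whole-die saving is only about 6%, which is why removing legacy x86 support is unlikely to buy room for extra cores.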

    • Originally posted by piotrj3 View Post
      What I am skeptical of is how big the improvement can be. Let's assume CPU core logic can be shrunk by 20% (a very, very generous claim) by removing x86 baggage. The problem is that core logic is probably less than 30% of the entire die (look at 13900K die shots). The cache is huge, the GPU is huge, the media engine is huge, the memory controller is huge, the fabric connecting everything is huge, the I/O is huge, and none of those things care about changes to the CPU's architecture.
      I'd have to concur, to some extent. The thing is, the way modern CPUs work, there is one frontend per set of hyperthreaded cores (and one per non-hyperthreaded core), each feeding a backend that is, hopefully, out-of-order. The backend can literally be anything, so long as the frontend's load, fetch, and translate stages keep delivering micro-ops to it. So you are removing some portion of complexity from each of those frontends: scraping a little bit off a lot of places.

      As you mentioned, though, the caches, the northbridge/memory controllers, the GPU, and the PCIe lanes all take up a lot of space, especially compared to the cores.

      • Originally posted by muncrief View Post
        Intel lost the right to have any say in the future of microprocessor architecture when they tried to force Itanium on the world, and then spent decades charging 4 to 6 times a reasonable price for pitiful two- or four-core microprocessors out of spite.

        In fact, if not for AMD, we'd be paying $4,000+ for a crappy four-core Intel microprocessor at this very moment.

        Add their horrific corporate history of destroying any company or engineer who dared challenge their thievery, and Intel has earned only one thing -

        The right to pound sand.
        There are so many assumptions baked into that statement. You could just as easily say that, if not for AMD, we would have switched away from x86 a decade ago. You don't know that any of the underlying assumptions your argument relies on are true, and you are assuming Intel wouldn't have had to bow to customer and market pressure, which is clearly not the case.

        • Originally posted by piotrj3 View Post
          Regarding this... I think it wouldn't be a bad idea to have some sort of on-the-fly emulation. macOS uses Rosetta to run x86 code on ARM; running x86-32 on X86-S with something similar should be painless.

          What I am skeptical of is how big the improvement can be. Let's assume CPU core logic can be shrunk by 20% (a very, very generous claim) by removing x86 baggage. The problem is that core logic is probably less than 30% of the entire die (look at 13900K die shots). The cache is huge, the GPU is huge, the media engine is huge, the memory controller is huge, the fabric connecting everything is huge, the I/O is huge, and none of those things care about changes to the CPU's architecture.

          Maybe there could be some energy-efficiency and performance gains from dropping some logic and simplifying things around it, but I don't expect, for example, that a 10-core CPU will become a 12-core CPU because of this change; that probably isn't going to happen.
          There are other considerations. For all intents and purposes, each x86-64 CPU has two full decoder layers: the x86-64 one, which translates instructions into what the CPU internally uses, and the internal one, which is much simpler but still a whole separate instruction set.

          If Intel can eliminate a large part of the more complex of those layers, they could do things like make individual instructions faster, or shorten the distances between the layers, which reduces power consumption.

          Considering the power draw of Intel's latest CPUs, they are probably in favor of anything they can do there.

          • Originally posted by NotMine999 View Post

            Right... Let's watch Intel repeat the design of the Commodore 128, a computer that had both a MOS 8502 and a Zilog Z-80 CPU.

            If you owned a C-128, you know how that worked out over time.
            That isn't comparable at all. Those were two completely unrelated architectures; they couldn't run each other's code.

            And Intel already has different processors for specific applications on its current chips, but they can mostly run the same code, and most of the industry is doing something similar.

            The environment around it and the cost of doing it are different, and Intel isn't marketing a completely new paradigm at the same time, the way Commodore did with the Amiga while still selling the C128.

            Commodore didn't have a coherent plan; they were throwing whatever they could at the wall to see what stuck. Intel isn't doing that specific thing.

            That doesn't mean they aren't making different mistakes, and they might face the same result, but at least they wouldn't be making the same mistake.

            • Originally posted by ayumu View Post

              This "backwards compatibility" is gone in this Intel proposal.

              A hypothetical RISC-V processor from AMD/Intel could have a usermode-only x86 implementation to accelerate legacy applications.
              It would still be backward compatible with at least 99% of userland software. It retains 32-bit userspace support, but the kernel must be 64-bit.
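              That split (32-bit userland on a 64-bit-only kernel) is visible in the binaries themselves: the ELF header records which class an executable is. A minimal sketch, not a full ELF parser, with illustrative sample bytes:

```python
# Minimal sketch: read the ELF identification bytes to tell whether a
# binary is 32-bit or 64-bit. Not a full ELF parser.
def elf_class(header: bytes) -> str:
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    # Byte 4 (EI_CLASS) is 1 for 32-bit (ELFCLASS32), 2 for 64-bit (ELFCLASS64).
    return {1: "32-bit", 2: "64-bit"}[header[4]]

# Example: the leading identification bytes of a 64-bit ELF executable.
print(elf_class(b"\x7fELF\x02\x01\x01\x00"))  # → 64-bit
```

              On an X86-S-style system the kernel would always be the 64-bit class, while user executables of either class could still be loaded.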

              • Originally posted by Min1123 View Post

                Yes, and I want one. But software is not on our side for this. IBM has been one of the few producers of PPC chips all these years, while NXP (the producer of the chip here) has been pushing its ARM chips front and center, and the result is that PPC64LE is now the dominant software ecosystem for PPC. To the best of my knowledge, this chip (the T2080) doesn't support PPC64LE, and that means very few Linux/BSD distributions make modern releases for it.

                This means something like buildroot can put together a PPC64 build for it, but that isn't a distro. Buildroot long ago decided that compiled languages, necessary to bootstrap a system, were not something to support for its targets, and removed them. The Debian that the page points to as PPC64 hasn't made a PPC64 build in so long that it's all 404s.

                That means maybe Void PPC Unofficial or some such: https://voidlinux-ppc.org/
                But that's deprecated, and Chimera Linux also only does PPC64LE.

                This laptop has to use hand-me-down GPUs from laptops that had GPU cards on MXM modules, and it has to use a chip that can't run most of the modern PPC64LE distros, whereas I believe Libre-SoC intends to be PPC64/PPC64LE and bring its own GPU. It'll be less powerful, but it should be cheaper and not as reliant on tech the larger tech community has given up on in its quest to make thin and light, non-modular, lock-in machines.
                Install Gentoo.

                • Originally posted by ryao View Post

                  That would be because Apple added hardware extensions to make performant binary translation easy:

                  Rosetta 2 is remarkably fast when compared to other x86-on-ARM emulators. I’ve spent a little time looking at how it works, out of idle curiosity, and found it to be quite unusual, so I figur…


                  If other ARM hardware adopted these extensions, everyone could have performant x86 binary translation on ARM hardware.

                  That said, Apple was still selling Intel hardware until a year or two ago, and it is still building new macOS versions for it. That should continue for at least another 4 years. It would make sense for Apple to support Rosetta 2 for at least that long.
                  Still is. The Mac Pro is still being sold in '23; reportedly it's finally set to be replaced later this year or next.

                  That said, the M series, even with Rosetta 2 translation in both software and hardware layers, is not 100% Intel compatible. There are corner cases where older Intel Mac software will crash when it depends on something esoteric that isn't translated. There also isn't 100% compatibility between Intel and AMD, even though for most purposes they're interchangeable at the user-space level. Quirks, bugs, timing differences, driver weirdness, and proprietary extensions make for interesting times if you are less than foresighted and tie yourself too tightly to one vendor's implementation - as some big-name corporations do, who then claim the problem is "the other guy", when the real problem is that their programmers chased the new shiny and never stopped to ask "What happens if we need to change vendors?", "What happens if Intel drops this shiny new extension in a couple of iterations?", or, more amusing still, "What if AMD or ARM (Ampere has some impressive CPUs if you need very high core counts instead of raw performance) comes up with something better?"

                  • Originally posted by ryao View Post

                    Install Gentoo.
                    Confession: I gave up on Gentoo back when Portage broke repeatedly during the x86 to x86_64 transition.

                    Acknowledgement: It may be time for me to give it another chance.

                    • Originally posted by stormcrow View Post

                      Still is. The Mac Pro is still being sold in '23; reportedly it's finally set to be replaced later this year or next.
                      I had expected them to discontinue that after releasing the Mac Studio. It is nice to hear that it is still for sale. That should push back the end of amd64 support at Apple.

                      Originally posted by stormcrow View Post
                      That said, the M series, even with Rosetta 2 translation in both software and hardware layers, is not 100% Intel compatible. There are corner cases where older Intel Mac software will crash when it depends on something esoteric that isn't translated. There also isn't 100% compatibility between Intel and AMD, even though for most purposes they're interchangeable at the user-space level. Quirks, bugs, timing differences, driver weirdness, and proprietary extensions make for interesting times if you are less than foresighted and tie yourself too tightly to one vendor's implementation - as some big-name corporations do, who then claim the problem is "the other guy", when the real problem is that their programmers chased the new shiny and never stopped to ask "What happens if we need to change vendors?", "What happens if Intel drops this shiny new extension in a couple of iterations?", or, more amusing still, "What if AMD or ARM (Ampere has some impressive CPUs if you need very high core counts instead of raw performance) comes up with something better?"
                      Apple implemented an impressive number of x86 extensions:



                      If software can run on Westmere, it likely can run in Rosetta 2. They also implemented F16C, which is a subset of what was to be SSE5 and was introduced in Ivy Bridge.

                      Software that requires AVX, AVX2, AVX-512, RDRAND, SHA, or AMX will crash, although anything depending on AVX-512 or AMX will crash on a number of recent Intel processors too. What Apple implemented is really enough to support most x86 software.

                      Edit: Interestingly, there is a good chance that macOS software that crashes in Rosetta 2 because it requires something unimplemented would also crash on the 2010 Mac Pro, which used Westmere and can run macOS 10.13. Apple dropped support for Westmere in macOS 10.14, one release before Rosetta 2 debuted. Userland software still would have supported 10.13 when 11.0 was released, and doing that meant supporting Westmere. Rosetta 2 supported just barely the theoretical minimum necessary to run all x86 software written for macOS, excluding software written in a way that would already have been broken on Westmere. Had Apple waited any longer, they would have needed to support AVX/AVX2, which would have been more difficult to implement using NEON.
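                      The crash-on-missing-extension behavior described above comes down to a CPU feature flag that isn't advertised; software can guard against it by checking the advertised features before taking a code path that needs them. A minimal sketch, with an illustrative flag set rather than output from any real machine:

```python
# Minimal sketch: check whether required CPU features are advertised before
# using code paths that need them. The flag set below is illustrative only;
# on Linux you would parse the "flags" line of /proc/cpuinfo instead.
ROSETTA2_LIKE_FLAGS = {"sse", "sse2", "sse3", "ssse3", "sse4_1", "sse4_2", "aes", "pclmulqdq"}

def missing_features(required: set[str], available: set[str]) -> set[str]:
    """Return the required features the CPU does not advertise."""
    return required - available

# Software needing only Westmere-era features finds nothing missing...
print(missing_features({"sse4_2", "aes"}, ROSETTA2_LIKE_FLAGS))  # → set()
# ...but an AVX2 dependency is reported missing (and would crash if used unchecked).
print(missing_features({"avx2"}, ROSETTA2_LIKE_FLAGS))           # → {'avx2'}
```

                      Software that skips this check and executes an unsupported instruction anyway gets an illegal-instruction fault, which is the crash described above.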
                      Last edited by ryao; 20 May 2023, 11:05 PM.
