Intel Publishes "X86-S" Specification For 64-bit Only Architecture


  • #11
    Originally posted by billyswong View Post

    Most, if not all, CPU-intensive software on the x86/x64 architecture has migrated to 64-bit. A significant number of commercial software vendors have stopped shipping 32-bit binaries altogether. So asking OSes to handle legacy 32-bit binaries through emulation is not that difficult; 64-bit Windows lost the ability to run 16-bit DOS/Windows binaries natively years ago.

    Or if Intel manages to keep the capacity of "X86-S" to run 32-bit user applications after boot, killing only the 32-bit boot and kernel side, then there may be zero obstacles.
    The latter would definitely be preferable.
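    The "handle legacy 32-bit binaries" point is mechanically simple: an OS loader (or a compatibility shim) only needs the EI_CLASS byte of the ELF header to tell a 32-bit image from a 64-bit one before choosing an execution path. A minimal sketch, with a hypothetical helper name (this is not any real loader's API):

```python
# Classify an ELF image as 32- or 64-bit from its header bytes.
# EI_CLASS (offset 4) is 1 for ELFCLASS32 and 2 for ELFCLASS64,
# per the ELF specification.
def elf_bitness(header: bytes) -> int:
    """Return 32 or 64 for an ELF image, based on the EI_CLASS byte."""
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF image")
    ei_class = header[4]  # 1 = ELFCLASS32, 2 = ELFCLASS64
    if ei_class == 1:
        return 32
    if ei_class == 2:
        return 64
    raise ValueError("unknown ELF class")
```

    This is the same information `file /bin/ls` reports as "ELF 64-bit"; on an X86-S-style CPU the result would decide between running natively in compatibility mode and handing the binary to an emulator.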

    Comment


    • #12
      Originally posted by discordian View Post
      The only thing keeping Intel/x86 relevant for decades has been backward compatibility.
      And that's why I don't understand the hype around ARM / RISC-V. Maybe those who're hyped about ARM / RISC-V don't give a damn about backwards compatibility, but it's something I deeply care about.

      Comment


      • #13
        Originally posted by EvilHowl View Post
        From Intel's article:

        Cool. According to Intel, they invented AMD64. How dare they...
        Intel actually was the first to propose a 64-bit architecture, and its first 64-bit processor came two years earlier than the Athlon 64. The main difference was that Intel's Itanium wasn't backwards compatible, so in a certain way Intel invented it and AMD did the "good" implementation.

        Comment


        • #14
          Originally posted by piotrj3 View Post

          Intel actually was the first to propose a 64-bit architecture, and its first 64-bit processor came two years earlier than the Athlon 64. The main difference was that Intel's Itanium wasn't backwards compatible, so in a certain way Intel invented it and AMD did the "good" implementation.
          No, Intel wasn't the first to propose a 64-bit architecture. MIPS, Alpha and SPARC had 64-bit versions years before Intel introduced Itanium. Itanium itself wasn't even solely an Intel invention; it was based on HP's PA-RISC architecture. Intel wasn't first in 64-bit x86 either: Itanium was not based on x86 at all, it was a completely different architecture with only x86 emulation on top. The first 64-bit x86 CPU came from AMD.
          Last edited by dragon321; 20 May 2023, 07:36 AM.

          Comment


          • #15
            Ah, finally. There have been rumors about a simplification of the x86 ISA for some time now, getting rid of some of the cruft that has piled up over the past decades. I am interested in what the feedback process looks like, as other x86 vendors might want to have a say or contribute ideas to such an undertaking. How open and willing is Intel to actually integrate the feedback it gets? Since Intel seems to control all the strings here, things could get controversial if other companies push for extensions or further refinements of the ISA. This news piece also deserves a deeper technical analysis.

            Comment


            • #16
              Well, it's still a shitty ISA. It just has one piece of shit fewer than its full counterpart. If I started listing all of x86's bad design choices, it would be a long list...

              Comment


              • #17
                Originally posted by user1 View Post

                And that's why I don't understand the hype around ARM / RISC-V. Maybe those who're hyped about ARM / RISC-V don't give a damn about backwards compatibility, but it's something I deeply care about.
                I think that's the reason: if you change architecture, it means you don't really care about running software from your previous architecture, or you accept worse performance by running it under emulation. For example, many popular macOS applications are already ARM-native. x86 is different, however: the big advantage of that architecture is the ability to run old software at full speed. If that is removed and you need emulation anyway, then why not move to ARM or RISC-V instead?

                Comment


                • #18
                  So it's finally happening…
                  Originally posted by billyswong View Post
                  Or if Intel manages to keep the capacity of "X86-S" to run 32-bit user applications after boot, killing only the 32-bit boot and kernel side, then there may be zero obstacles.
                  I haven't read the paper yet, but one of the headings seems to be about removing "32-bit Ring 0" (not Ring 3), which would mean exactly that…

                  Comment


                  • #19
                    Originally posted by discordian View Post
                    The only thing keeping Intel/x86 relevant for decades has been backward compatibility.
                    To remain successful, it is enough to be backward compatible with the software of the last ~10 years. Nobody needs hardware compatibility with DOS anymore, for example.

                    If you need an older processor, hardware progress allows old software designed for old hardware to run smoothly in a virtualized environment.
                    I have never tried it, but I expect the original DOOM works fine even on a Raspberry Pi. Yes, it has to emulate a full processor, but today's ARM chips are so much faster than the original i486 that the game runs fine.

                    And this is more true for Windows than for Linux. On Linux, even changing architecture is not a drama, because most of the software is already compiled for several architectures.

                    Of course this holds for 99.9% of cases, and there are (and will be) cases where the development process is very slow and a fully 486 bug-for-bug compatible architecture is needed (medical, aerospace, or military, for example).

                    Itanium and the iAPX 432 are a different question: one thing is to *remain* successful, another is to *become* successful. To remain successful you need "moderate" backward compatibility; to become successful, you need a performance/cost ratio that compensates for the lack of backward compatibility. Those processors failed on that front.

                    Moreover, in the past direct hardware access was the norm rather than the exception; now it is rare, if not forbidden outright. So hardware compatibility is a lighter requirement. That explains why Apple was, and is, so successful at switching between architectures (68000 -> PowerPC -> x86 -> ARM).

                    Comment


                    • #20
                      Originally posted by dragon321 View Post

                      I think that's the reason: if you change architecture, it means you don't really care about running software from your previous architecture, or you accept worse performance by running it under emulation. For example, many popular macOS applications are already ARM-native. x86 is different, however: the big advantage of that architecture is the ability to run old software at full speed. If that is removed and you need emulation anyway, then why not move to ARM or RISC-V instead?
                      If emulation can achieve close to native performance, then I'll probably accept that. Otherwise, I want to be able to run my 10+ year old games (no matter how old they actually are) the way they have always run. From what I've seen, Apple's Rosetta 2 is the most performant x86 emulation around, but given Apple's track record (it removed Rosetta 1 just a few years after the switch from PowerPC to x86), there's no reason to think Rosetta 2 will stay around forever.

                      Comment
