
Intel Publishes "X86-S" Specification For 64-bit Only Architecture


  • #71
    Originally posted by ssokolow View Post

    There's also a heading for "Removal of 16-bit and 32-bit protected mode" which, I think, means "Kill off the ability to run 32-bit x86 applications without some kind of emulation".
    I've started reading and noticed this point: "Using the simplified segmentation model of 64-bit for segmentation support for 32-bit applications, matching what modern operating systems already use."
    Still sounds like 32-bit applications are going to be part of this, just without the ability to create segments with arbitrary bases/limits…

    EDIT: So in that section you mentioned, it clearly states: "The 32-bit submode of Intel64 (compatibility mode) still exists."
    Last edited by PluMGMK; 20 May 2023, 04:11 PM.

    Comment


    • #72
      Originally posted by sophisticles View Post

      "High-end, demanding games" and "32-bit support" is a contradiction in terms. 32-bit only allows up to 4gb addressing, either am, or file size, I defy you to show me the "high-end, demanding game" that can run on a 32-bit OS with 4gb of ram.
      30 years of working with computers have taught me to never underestimate the ability of developers to be wasteful of the CPU, regardless of how little RAM they use. If I buy an x86-family CPU, I want the only compatibility question to be "Do I run it in Wine on Linux or on actual Windows in my airgapped single-player gaming LAN?" ...not "I hope that the 'emulator' being implemented in hardware by the original vendor means that it has good enough quirks compatibility."

      Comment


      • #73
        Originally posted by PluMGMK View Post

        I've started reading and noticed this point: "Using the simplified segmentation model of 64-bit for segmentation support for 32-bit applications, matching what modern operating systems already use."
        Still sounds like 32-bit applications are going to be part of this, just without the ability to create segments with arbitrary bases/limits…

        EDIT: So in that section you mentioned, it clearly states: "The 32-bit submode of Intel64 (compatibility mode) still exists."
        That "just without the ability to create segments with arbitrary bases/limits" still worries me. Never underestimate Hyrum's Law... especially in game dev.

        Comment


        • #74
          Fair enough, but is it still relevant? I know Linux allows you to do it even in 64-bit mode using modify_ldt(2), but does Win32 provide for creating these kinds of segments on a 64-bit OS? (I genuinely don't know, but I was under the impression that Microsoft had already been locking this down for the last few years…)
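
          On the Linux side, here's a minimal sketch of what modify_ldt(2) allows, i.e. installing an LDT descriptor with a non-zero base from a 64-bit process. The slot number and base below are arbitrary illustration values, and glibc has no wrapper, so it goes through syscall(2):

          Code:
          #define _GNU_SOURCE
          #include <asm/ldt.h>      /* struct user_desc */
          #include <stdio.h>
          #include <string.h>
          #include <sys/syscall.h>  /* SYS_modify_ldt */
          #include <unistd.h>

          int main(void) {
              struct user_desc desc;
              memset(&desc, 0, sizeof(desc));
              desc.entry_number   = 0;        /* first LDT slot (arbitrary) */
              desc.base_addr      = 0x1000;   /* the non-flat base in question */
              desc.limit          = 0xfffff;  /* maximum limit, counted in pages */
              desc.seg_32bit      = 1;
              desc.limit_in_pages = 1;
              desc.useable        = 1;
              /* func = 1: write the LDT entry described by desc */
              if (syscall(SYS_modify_ldt, 1, &desc, sizeof(desc)) != 0) {
                  perror("modify_ldt");
                  return 1;
              }
              printf("installed LDT entry %u with base 0x%x\n",
                     desc.entry_number, desc.base_addr);
              return 0;
          }

          Whether 64-bit Windows still exposes anything comparable is exactly the part I don't know.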

          Comment


          • #75
            Originally posted by ssokolow View Post

            30 years of working with computers have taught me to never underestimate the ability of developers to be wasteful of the CPU, regardless of how little RAM they use. If I buy an x86-family CPU, I want the only compatibility question to be "Do I run it in Wine on Linux or on actual Windows in my airgapped single-player gaming LAN?" ...not "I hope that the 'emulator' being implemented in hardware by the original vendor means that it has good enough quirks compatibility."
            This. I'm very skeptical of emulation efforts. As a retro gamer who still picks up '90s-era titles for DOS and Windows 98 SE, emulation is pretty damn awful. Performance is utter garbage, especially on Linux: 86Box and PCem aren't up to the task performance-wise, even on Alder Lake hardware. It's just bizarre how a game like Wing Commander: Privateer can bring a modern PC to its knees in a program like DOSBox. When there are 4-5 enemies on screen at once, the emulation will make an i7 crawl, whereas it runs fine natively on a 486.

            Timing issues are a massive problem in emulation as well. A lot of things run too fast or too slow and often can't be adjusted correctly. DOSBox had a really stupid one where a game would run fine in 3D but badly during cutscenes, or vice-versa; you basically had to change the emulation speed manually during the game so it wasn't too fast or too slow.

            Doom and Quake are bad retro games to use as examples of anything, since they actually run very well on modern systems and can both be launched in Windows without any real issues. It's the other DOS/Windows 98 SE games where you get really weird quirks, especially games from the DirectX 4/5/6 era, when 3dfx Voodoo cards were the main cards out, and older DOS games that are just weird about memory/timing handling.

            With Windows 98 SE, I think a lot of it could be solved by writing drivers for a virtual machine hypervisor to hook into the OS and accelerate 3D properly, but no one is going to do that. We've had virtualization for ~20 years and no one has done it.
            Last edited by DMJC; 20 May 2023, 04:31 PM.

            Comment


            • #76
              "Under this proposal, those wanting to run legacy 32-bit operating systems would have to rely on virtualization"
              Not emulation?

              Comment


              • #77
                Originally posted by ssokolow View Post

                That "just without the ability to create segments with arbitrary bases/limits" still worries me. Never underestimate Hyrum's Law... especially in game dev.
                Come to think of it, I'm more worried about the killing off of the 67h address-size override prefix. I can imagine that creating a lot more headaches than the limitation of 32-bit segmentation…
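
                To make that concrete, here's a sketch of what the prefix does today. It's Linux-specific and assumes GCC inline asm and glibc's MAP_32BIT; it only illustrates current 64-bit-mode behaviour, not anything the spec says. Addressing through a 32-bit register makes the assembler emit 67h, which truncates the effective address to 32 bits:

                Code:
                #define _GNU_SOURCE
                #include <stdint.h>
                #include <stdio.h>
                #include <sys/mman.h>

                int main(void) {
                    /* MAP_32BIT keeps the mapping below 4 GiB, so a 32-bit
                       effective address can actually reach it. */
                    uint32_t *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_32BIT, -1, 0);
                    if (p == MAP_FAILED) { perror("mmap"); return 1; }
                    *p = 42;

                    uint32_t out;
                    /* "movl (%ebx), %eax" in 64-bit mode encodes as 67 8B 03:
                       the 67h prefix overrides the address size to 32 bits. */
                    __asm__ volatile ("movl (%%ebx), %%eax"
                                      : "=a"(out)
                                      : "b"((uint32_t)(uintptr_t)p)
                                      : "memory");
                    printf("read %u through a 67h-prefixed load\n", out);
                    return 0;
                }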

                Comment


                • #78
                  Originally posted by dragorth View Post
                  I honestly don't see anyone mentioning the obvious: Intel can do this easily and keep 32-bit compatibility with the big.LITTLE design they already have. Just keep 2 or 4 of the existing efficiency cores that are currently 32-bit-capable and pair them with 64-bit-only cores, and they will be able to market to all the customers they are trying to reach.

                  These could be mostly turned off, so no power draw during normal usage, and still get the benefits when needed.

                  I stand corrected, schmidtbag mentioned this right after I posted.
                  Right... Let's watch Intel repeat the design of the Commodore 128, which had both a MOS 8502 and a Zilog Z80 CPU.

                  If you ever owned a C-128, you know how that worked out over time.

                  Comment


                  • #79
                    Originally posted by ssokolow View Post

                    The guy who runs one of the Macintosh retrocomputing and abandonware communities I hang out in (I own a Power Mac G4 and have plans to get something capable of running System 7 and, maybe after that, a Macintosh SE) is a respectable IT guy (i.e. not a neckbeard) and he's currently in the process of deciding whether to upgrade from Windows 7 to Linux or to Apple Silicon. He hates Windows 10 and 11. (Both personally, and for the strife they've caused his customers with things like "Microsoft blue-screened all the machines by pushing a broken 'let's try to get those stubborn Windows 7 hold-outs onto Windows 10' upgrade in the middle of the night." or "All our label printers are broken. Zebra says Microsoft pushed a broken Windows 10 driver update and it's not their fault.")

                    I personally want to retain compatibility with high-end, demanding games where the developers assumed 32-bit support in Windows would be around for as "forever" as 16-bit was and so only made 32-bit builds.
                    As far as the Macintosh/Windows 7 holdout guy is concerned: if he hates Windows 10 and 11, which I totally get, then his only real choices are Free Unix or Non-Free Unix, i.e. Linux/BSD or Apple. If he has customers that he recommends software and hardware to, he'd probably be better off choosing Apple over anything else. That's what I'd pick. It isn't that I want to pick them, but they're Unix-based and professional enough that macOS has proper color reproduction for working with imaging software and printing. We gotta work with what's available, and that's pretty much it. KDE, HDR, Wayland, and proper color reproduction have me very excited.

                    Modern processing power combined with things like Wine, DOSBox, or QEMU will cover 32-bit support easily enough (assuming X86-S and not some random arch).

                    direc85 You forgot the bastard child:

                    x32; abi_x86_x32. 64/32-bit

                    That one never really took off (and would probably be a better desktop architecture).
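
                    For context, x32 is the ILP32 ABI on top of x86-64: the full 64-bit register set and instruction set, but 4-byte pointers and longs. A trivial probe, assuming a toolchain with the x32 runtime libraries installed:

                    Code:
                    #include <stdio.h>

                    int main(void) {
                        /* Built as plain x86-64 this prints 8/8; built with
                           "gcc -mx32" it prints 4/4, even though both binaries
                           run in 64-bit mode with 64-bit registers. */
                        printf("sizeof(void *) = %zu, sizeof(long) = %zu\n",
                               sizeof(void *), sizeof(long));
                        return 0;
                    }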

                    Comment


                    • #80
                      Originally posted by sophisticles View Post
                      I am no AMD fanboy, but I suspect that if Intel does decide to go through with this, AMD may announce a new 128-bit-only x86 processor.

                      This probably wouldn't even be that difficult to do: SSE is already a 128-bit instruction set, AVX/AVX2 is 256-bit, AVX-512 is 512-bit, and Intel was supposedly working on AVX-1024 to combat GPGPU. I say let's just skip right to a 1024-bit CPU.
                      SSE is a mix of scalar instructions (on 32-bit and 64-bit floats) and 128-bit vector instructions. It is not a 128-bit instruction set, and you cannot implement a CPU using only those instructions.
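
                      To see the scalar/packed split concretely, here's a small sketch with the SSE intrinsics: ADDPS operates on all four 32-bit lanes of the 128-bit register, while ADDSS computes only the lowest lane and passes the upper three through from its first operand.

                      Code:
                      #include <immintrin.h>
                      #include <stdio.h>

                      int main(void) {
                          __m128 a = _mm_set_ps(4.f, 3.f, 2.f, 1.f);    /* lanes 0..3 = 1,2,3,4 */
                          __m128 b = _mm_set_ps(40.f, 30.f, 20.f, 10.f);

                          __m128 v = _mm_add_ps(a, b);  /* ADDPS: 11, 22, 33, 44 */
                          __m128 s = _mm_add_ss(a, b);  /* ADDSS: 11,  2,  3,  4 */

                          float vr[4], sr[4];
                          _mm_storeu_ps(vr, v);
                          _mm_storeu_ps(sr, s);
                          printf("packed: %g %g %g %g\n", vr[0], vr[1], vr[2], vr[3]);
                          printf("scalar: %g %g %g %g\n", sr[0], sr[1], sr[2], sr[3]);
                          return 0;
                      }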

                      Announcing a processor with zero backward compatibility would be a disaster for AMD, since it would only gain adoption in niche applications, as happened with the Itanium. It is very unlikely that AMD will try to repeat Intel's mistake. It is also very unlikely that anyone will make a 128-bit processor anytime soon, since 64-bit is good enough for general-purpose computing. The main reason we went to 64-bit was for larger address spaces, and it will be decades before we run into the limitations of a 64-bit address space.

                      General-purpose computing will likely never implement 1024-bit scalar support. There are only about 2^166 atoms on Earth, and the upper estimate for protons/atoms in the universe is 2^272. No matter how much technology advances, the impetus behind 64-bit computing will never manifest for 1024-bit computing. Furthermore, a future shift to 128-bit computing is likely to be the last one we make. We might support bigger vector widths or move to some other computing model, but we will never implement higher scalar widths outside of niche applications.

                      Getting to the point where 128-bit address space is not enough would imply that we have constructed computer memory that is more than 5e11 kilograms in mass, and that assumes every bit is made out of a hydrogen atom and does not consider how we would read or write to them, or keep them in place. To illustrate just how absurd that is, 2^128 hydrogen atoms would exceed the mass of every navy in the world combined, by more than a factor of 10 million. The space needed would make a US aircraft carrier look like a grain of sand. It is just not realistic to think we will ever have a need for more than 128-bit hardware address spaces.
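
                      The ~5e11 kg figure is easy to check; a quick back-of-the-envelope computation, assuming one hydrogen atom of ~1.67e-27 kg per bit:

                      Code:
                      #include <math.h>
                      #include <stdio.h>

                      int main(void) {
                          const double atoms = ldexp(1.0, 128);  /* 2^128 ~= 3.4e38 */
                          const double m_h   = 1.67e-27;         /* hydrogen mass, kg */
                          /* One atom per bit: ~5.7e11 kg of storage medium alone */
                          printf("mass of 2^128 hydrogen atoms: %.2e kg\n", atoms * m_h);
                          return 0;
                      }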
                      Last edited by ryao; 20 May 2023, 05:13 PM.

                      Comment
