Intel Publishes "X86-S" Specification For 64-bit Only Architecture
Originally posted by schmidtbag
I overall think this is fine if it helps reduce transistors. Emulation shouldn't be a big deal since you're stepping down. Besides, there aren't really many 32-bit binaries demanding enough that emulation would be a bottleneck. ARM can emulate x86 with pretty decent performance despite being a very different architecture that lacks many of x86's instructions.
Kinda gets me thinking though: since hybrid CPUs exist, what if E-cores were designed with 32-bit compatibility in mind? Then you'd get a hybrid architecture.
Originally posted by schmidtbag
Eh, not quite. There are two reasons 32-bit x86 survived as long as it did:
1. Because there are shockingly still a lot of 32-bit W7 users out there (MS really should've enforced W7 as 64-bit only)
2. Because Intel themselves continued to make 32-bit CPUs into the 2010s.
In a parallel universe without those 32-bit Atom "netbooks", Windows 10 could have been 64-bit only.
Originally posted by PluMGMK
Come to think of it, I'm more worried about the killing off of the 67h address-size override prefix. I can imagine that creating a lot more headaches than the limitation of 32-bit segmentation…
Originally posted by skeevy420
Well, I very seriously doubt any non-free commercial OS older than Win11 will be updated to work with X86-S. Linux is the same way: unless Intel partners with RHEL, Ubuntu, or SUSE ahead of time to backport a bunch of stuff to one of their LTS kernels, you'll have to be on Fedora or Arch running a mainline kernel to even use this when it's released (and for the foreseeable future).
Originally posted by ryao
Getting to the point where a 128-bit address space is not enough would imply that we have constructed computer memory more than 5e11 kilograms in mass - and that assumes every bit is made out of a hydrogen atom, and does not consider how we would read or write to them, or keep them in place.
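The arithmetic behind that figure checks out, assuming roughly 1.67e-27 kg per hydrogen atom and one atom per bit of a fully populated 128-bit space:

```python
# Rough check of the claim above: one hydrogen atom per bit,
# across every distinct value of a 128-bit address.
H_MASS_KG = 1.67e-27     # approximate mass of a hydrogen atom
bits = 2 ** 128          # distinct addresses, one bit each
mass_kg = bits * H_MASS_KG
print(f"{mass_kg:.2e} kg")  # on the order of 5e11 kg, as stated
```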
- there are data structures that could be built on the assumption that VM space is enormous yet largely unpopulated. E.g. SuperMalloc allocates a 512 MiB vector for its internal use, and then usually only touches a handful of entries in that vector. Because of that, only a handful of physical pages get allocated - it's essentially a trie implemented by the hardware!
- preallocate a few TB of VM space for _every_ allocation that could possibly grow, and never have to move your data around just because you've run out of address space.
- drop address space switching and go for a unified virtual memory view (e.g. allocate the high bits of any address to a process identifier), using HW memory protections for security - I could see new IPC schemes emerging here.
- heck, let's add IPv6 on top of that and have every machine in the world share the same virtual address space! No more downloading stuff; just mmap your URL and get a pointer into another server's area which you can simply read.
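The SuperMalloc trick described in that list can be demonstrated in a few lines. This is a minimal sketch, assuming a Unix-like OS where anonymous mappings are demand-paged (physical pages are only committed on first write):

```python
import mmap

# Reserve 512 MiB of virtual address space with an anonymous mapping.
# The kernel demand-pages it, so the reservation itself is nearly free.
RESERVATION = 512 * 1024 * 1024

region = mmap.mmap(-1, RESERVATION)  # -1: anonymous, not file-backed

# Touch only a handful of widely spaced slots; only those pages
# (typically 4 KiB each) get backed by physical memory.
sparse_offsets = (0, 1 << 20, 100 << 20, RESERVATION - 1)
for off in sparse_offsets:
    region[off] = 0xAB

ok = all(region[off] == 0xAB for off in sparse_offsets)
print(ok)
region.close()
```

The page table effectively acts as the trie: the four writes above commit only four physical pages out of a 512 MiB reservation.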
Originally posted by ryao
Did you forget about the Cell processor? That is a hybrid ISA architecture.
Originally posted by arteast
This implies a tightly packed virtual space. I can see some uses for a really vast but sparsely populated VM space. Look at IPv6 and how it "recklessly" throws heaps of addresses at anyone just because it can. E.g. (increasingly insane yet possibly achievable):
- there are data structures that could be built on the assumption that VM space is enormous yet largely unpopulated. E.g. SuperMalloc allocates a 512 MiB vector for its internal use, and then usually only touches a handful of entries in that vector. Because of that, only a handful of physical pages get allocated - it's essentially a trie implemented by the hardware!
- preallocate a few TB of VM space for _every_ allocation that could possibly grow, and never have to move your data around just because you've run out of address space.
- drop address space switching and go for a unified virtual memory view (e.g. allocate the high bits of any address to a process identifier), using HW memory protections for security - I could see new IPC schemes emerging here.
- heck, let's add IPv6 on top of that and have every machine in the world share the same virtual address space! No more downloading stuff; just mmap your URL and get a pointer into another server's area which you can simply read.
The industry adopted 64-bit support because growing physical memory capacities forced them to. They are not going to adopt support for a larger address space for the niche use cases you have. That would waste transistors that are better spent making better processors.
Originally posted by billyswong
From a modern point of view, Cell is more like an SoC with an integrated GPU/accelerator. If we call Cell a hybrid, we may as well call every new chip with an extra VPU/NPU etc. hybrid too.
Originally posted by ryao
I had expected them to discontinue that after releasing the Mac Studio. It is nice to hear that it is still for sale. That should push back the end of amd64 support at Apple.
Apple implemented an impressive number of x86 extensions:
If software can run on Westmere, it likely can run in Rosetta 2. They also implemented F16C, which is a subset of what was to be SSE5 and was introduced in Ivy Bridge.
Software that requires AVX, AVX2, AVX-512, RDRAND, SHA or AMX will crash, although anything depending on AVX-512 or AMX will crash on a number of recent Intel processors too. What Apple implemented is really enough to support most x86 software.
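Software can avoid those crashes by checking for extensions before using them. A minimal sketch of one way to do that - parsing the kernel's reported feature flags on Linux/x86 (the path and flag names are the usual `/proc/cpuinfo` spellings; other platforms expose this differently):

```python
def cpu_flags(path="/proc/cpuinfo"):
    # Return the set of CPU feature flags reported by the Linux kernel.
    # x86 kernels emit a "flags" line; other architectures name it
    # differently (e.g. "Features" on ARM), so fall back to an empty set.
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

flags = cpu_flags()
for feature in ("avx", "avx2", "avx512f", "sha_ni", "f16c", "rdrand"):
    print(f"{feature}: {'yes' if feature in flags else 'no'}")
```

Well-behaved binaries do exactly this kind of probe (usually via CPUID) and pick a fallback code path; the crashes described above come from software that assumes the extension unconditionally.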
Edit: Interestingly, there is a good chance that MacOS software that crashes in Rosetta 2 because it requires something unimplemented would also crash on the 2010 Mac Pro, which used Westmere and can run MacOS 10.13. Apple dropped support for Westmere in MacOS 10.14, one release before Rosetta 2 debuted. Userland software would still have supported 10.13 when 11.0 was released, and doing that meant supporting Westmere. Rosetta 2 just barely supported the theoretical minimum necessary to support all x86 software written for MacOS, excluding software written in a way that would be broken on Westmere. Had Apple waited any longer, they would have needed to support AVX/AVX2, which would have been more difficult to implement using NEON.
The same thing occurs between Intel and AMD systems. One implements something one way, the other does it a different way. One has super nifty shiny that halfway works; the other waits and implements it correctly later on. Expectations of output can differ. Hardware bugs happen. Overdependence on a single vendor's implementation can be problematic and leads to vendor lock-in, but it can also be a problem when that same vendor suddenly decides to drop a feature because they want people to pay more for it... like Intel and AVX-512.