Intel really wants me to configure custom kernels, doesn't it? Because I certainly don't need 5-level paging support. I've got a measly 8 GB of RAM over here, which is 4 times more than I ever use.
The Linux Kernel Is Preparing To Enable 5-Level Paging By Default
Originally posted by coder:
IMO, it's all about supporting Optane DIMMs. Nonvolatile storage is the only way I see them getting to petabytes.
We are at the end of the GHz race. 5 GHz is the cap, and it only gets worse from here with die shrinks. The new GHz is stacking everything as close as possible, as small as possible, with as little heat as possible, to the CPU core.
Originally posted by coder:
How? Using some kind of sub-atomic memory technology?
While addressable DRAM is nowhere close to the 256TiB limit, this becomes important for memory mapping large data sets. Paging is also used for memory-mapped files, for example. There doesn't have to be physical memory to back every page entry.
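The point about page entries not needing physical backing is easy to see from user space. Here's a minimal sketch (Python, assuming a Unix-like system with sparse-file support, e.g. ext4 or XFS) that maps a file far larger than any data it holds; the kernel only faults in physical pages as they're touched:

```python
import mmap
import os
import tempfile

def map_sparse(size=1 << 30):
    """Map a sparse file of `size` bytes. The kernel sets up the mapping
    lazily: no physical RAM backs a page until it is first touched."""
    fd, path = tempfile.mkstemp()
    os.ftruncate(fd, size)   # logical size only; no disk blocks allocated yet
    m = mmap.mmap(fd, 0)     # reserves virtual address space, not RAM
    return m, fd, path

m, fd, path = map_sparse()   # 1 GiB of address space, near-zero resident memory
m[0:5] = b"hello"            # first write demand-faults in a single page
assert m[0:5] == b"hello" and len(m) == 1 << 30
m.close(); os.close(fd); os.remove(path)
```

5-level paging changes nothing about this mechanism; it just gives such mappings a 57-bit (128 PiB) virtual address space to live in instead of the 48-bit (256 TiB) one.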
Originally posted by ThoreauHD:
I hope not, because if that's what it's about, then they're well and truly fucked. This is a race to the CPU core, not some failed SSD crap dangling off the side.
Originally posted by ThoreauHD:
We are at the end of the GHz race. 5 GHz is the cap, and it only gets worse from here with die shrinks. The new GHz is stacking everything as close as possible, as small as possible, with as little heat as possible, to the CPU core.
And HBM2 isn't a simple substitute for frequency scaling. For some workloads it will barely help; for others, you'll get a one-time boost from more bandwidth or lower latency.
But, again, it can't scale to server-level capacities. So, it's really not relevant for big-memory use cases.
Originally posted by caligula:
Not necessarily. Just more modules. Currently I think 2-3 generations of process shrinking are technically feasible. Each process shrink may double the capacity.

Originally posted by caligula:
On top of that you have 3D stacking.

Originally posted by caligula:
High-end laptops could use 10 instead of 2 memory channels in the future.

Originally posted by caligula:
It's also possible that they'll come up with something other than DRAM, some QLC NAND / DRAM hybrid perhaps.
AFAIK, the only tech with higher density than DRAM and performance that's anywhere close is the 3D XPoint that Intel is branding as Optane. So, I could see a world where we have some amount of HBM2 (or similar) stacked in the CPU package - probably anywhere from 4 to 32 GB - and then your external memory is 3D XPoint. You could use the HBM as an exclusive cache, page-faulting the same way we do with virtual memory. Performance-wise, perhaps it makes sense to have about 4-8 times as much of this as HBM. So, that gets you to 64-512 GB (though an order of magnitude slower than DRAM). Maybe a couple TB, at a stretch, but probably not for laptops.
So, in 20 years, maybe there's a path to perhaps a couple TB of what looks and feels something like RAM, in your laptop. Whether there will be use cases that would justify the cost is another matter. In workstations, this might be more like 8 TB.
Last edited by coder; 15 September 2019, 08:22 PM.
Originally posted by aaronw:
While addressable DRAM is nowhere close to the 256TiB limit, this becomes important for memory mapping large data sets. Paging is also used for memory-mapped files, for example.

Originally posted by aaronw:
There doesn't have to be physical memory to back every page entry.
I don't know which filesystem is used the most, but a number of filesystems can scale. I know XFS can scale into the petabyte range. ext4 is generally not used since it doesn't do well over 16 TB. Many high-performance computing systems use the Lustre filesystem (used in over 60 of the top 100 fastest supercomputers). There are also a number of other high-performance distributed filesystems available for Linux.
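The 16 TB figure isn't arbitrary: without the `64bit` feature, ext4 addresses blocks with 32-bit numbers, so at the usual 4 KiB block size the filesystem tops out at 2^32 blocks. A quick sanity check of that limit, alongside the 57-bit virtual address space that 5-level paging provides:

```python
BLOCK = 4096                          # typical ext4 block size (4 KiB)

# ext4 without the "64bit" feature: 32-bit block numbers
ext4_limit = (2 ** 32) * BLOCK
assert ext4_limit == 16 * 2 ** 40     # exactly 16 TiB

# For comparison: the virtual address space of 5-level paging (57 bits)
va_5level = 2 ** 57
assert va_5level == 128 * 2 ** 50     # 128 PiB
```

Petabyte-scale filesystems like XFS or Lustre are exactly the kind of thing you might want to memory-map, which is where the extra paging level pays off.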
Originally posted by aaronw:
Many high-performance computing systems use the Lustre filesystem (used in over 60 of the top 100 fastest supercomputers). There are also a number of other high-performance distributed filesystems available for Linux.