The Linux Kernel Is Preparing To Enable 5-Level Paging By Default


  • #31
    Intel really wants me to configure custom kernels, doesn't it? Because I certainly don't need 5-level paging support. I've got a measly 8 GB of RAM over here, which is 4 times more than I ever use.
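
    For what it's worth, a fully custom kernel isn't required to opt out. Even with CONFIG_X86_5LEVEL enabled, the kernel boots in 4-level mode on CPUs without LA57, kernel-parameters.txt documents a no5lvl switch to force 4-level paging even on hardware that has it, and on LA57 machines the kernel keeps handing out addresses below 47 bits anyway unless a process asks for more with an mmap() hint. A rough sketch of both knobs (the GRUB file path is just the common Debian/Ubuntu location):

        # Build time: leave it out of a custom kernel config entirely.
        # CONFIG_X86_5LEVEL is not set

        # Boot time: keep the distro kernel, but force 4-level paging.
        # In /etc/default/grub (then run update-grub):
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash no5lvl"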



    • #32
      Originally posted by caligula View Post
      Probably will take a similar amount of time (20 years) to reach 32 TB of RAM on every laptop. It's easy to imagine even small workstations and servers will have 10-100 times more capacity.
      How? Using some kind of sub-atomic memory technology?



      • #33
        Originally posted by coder View Post
        IMO, it's all about supporting Optane DIMMs. Nonvolatile storage is the only way I see them getting to petabytes.
        I hope not, because if that's what it's about, then they're well and truly fucked. This is a race to the CPU core, not some failed SSD crap dangling off the side.

        We are at the end of the GHz race. 5 GHz is the cap, and it only gets worse from here with die shrinks. The new GHz is stacking everything as close to the CPU core as possible, as small as possible, with as little heat as possible.



        • #34
          Originally posted by coder View Post
          How? Using some kind of sub-atomic memory technology?
          Not necessarily. Just more modules. Currently I think 2-3 generations of process shrinking are technically feasible. Each process shrink may double the capacity. On top of that you have 3D stacking, and high-end laptops could use 10 instead of 2 memory channels in the future. It's also possible that they'll come up with something other than DRAM - some QLC NAND / DRAM hybrid, perhaps.



          • #35
            While addressable DRAM is nowhere close to the 256 TiB limit, this becomes important for memory mapping large data sets. Paging is also used for memory-mapped files, for example. There doesn't have to be physical memory to back every page entry.
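
            A minimal sketch of that point, assuming a 64-bit Linux machine: reserve 1 TiB of address space and back only the pages you actually touch. The reservation consumes virtual address space, not RAM - and virtual address space is exactly what 5-level paging expands (the user half alone goes from 128 TiB to 64 PiB).

            /* Sketch: huge sparse mapping, tiny physical footprint. */
            #include <stdio.h>
            #include <string.h>
            #include <sys/mman.h>

            int main(void)
            {
                size_t len = 1ULL << 40;  /* 1 TiB of virtual space */

                /* MAP_NORESERVE: no swap reservation; nothing allocated yet. */
                char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
                if (p == MAP_FAILED) { perror("mmap"); return 1; }

                /* Touch two pages half a terabyte apart; only these two
                 * page-table paths and physical frames come into existence. */
                strcpy(p, "near the start");
                strcpy(p + (512ULL << 30), "512 GiB in");
                printf("%s / %s\n", p, p + (512ULL << 30));

                munmap(p, len);
                return 0;
            }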



            • #36
              Originally posted by ThoreauHD View Post
              I hope not, because if that's what it's about, then they're well and truly fucked. This is a race to the CPU core, not some failed SSD crap dangling off the side.
              I wouldn't call it crap. First gen wasn't as durable as they claimed and it's taking them longer to scale up densities, but I wouldn't count it out. It is much faster than NAND and still more economical & denser than DRAM. I think it definitely has a place in the storage hierarchy and both Intel & Micron are (independently) moving forward with the tech.

              Originally posted by ThoreauHD View Post
              We are at the end of the GHz race. 5 GHz is the cap, and it only gets worse from here with die shrinks. The new GHz is stacking everything as close to the CPU core as possible, as small as possible, with as little heat as possible.
              I'm all for HBM2 or whatever, but it's pretty insane to talk about terabytes of it stacked next to the CPU. That's not going to happen.

              And HBM2 isn't a simple substitute for frequency-scaling. It will barely help some workloads. For others, you'll get a one-time boost from more bandwidth or lower latency.

              But, again, it can't scale to server-level capacities. So, it's really not relevant for big memory use cases.



              • #37
                Originally posted by caligula View Post
                Not necessarily. Just more modules. Currently I think 2-3 generations of process shrinking are technically feasible. Each process shrink may double the capacity.
                Okay, so I'll agree that 4-8x might be plausible.

                Originally posted by caligula View Post
                On top of that you have 3D stacking
                Node shrinks make transistors cheaper and more power-efficient; stacking does not. You do get a one-time power-efficiency dividend from stacking, but DRAM dies still burn power. So even if stacking would somehow let you have more of them (just for the sake of argument), those capacity increases wouldn't apply to power-constrained use cases like laptops.

                Originally posted by caligula View Post
                high-end laptops could use 10 instead of 2 memory channels in the future.
                Only as a side-effect of HBM2, but you don't get any more capacity from doing that.

                Originally posted by caligula View Post
                It's also possible that they'll come up with something other than DRAM - some QLC NAND / DRAM hybrid, perhaps.
                QLC NAND is ridiculously slow compared to DRAM - something like 4 orders of magnitude or more. QLC writes can be almost as slow as hard disks.

                AFAIK, the only tech with higher density than DRAM and performance that's anywhere close is actually the 3D XPoint that Intel is branding as Optane. So, I could see a world where we have some amount of HBM2 (or similar) stacked in the CPU package - probably anywhere from 4 to 32 GB - and then your external memory is 3D XPoint. You could use the HBM as an exclusive cache, by page-faulting the same way that we do with virtual memory. Performance-wise, perhaps it makes sense to have about 4-8 times as much of this as HBM. So, that gets you to 64-512 GB (though, at an order of magnitude slower than DRAM). Maybe a couple TB, at a stretch, but probably not for laptops.

                So, in 20 years, maybe there's a path to a couple TB of what looks and feels something like RAM in your laptop. Whether there will be use cases that would justify the cost is another matter. In workstations, this might be more like 8 TB.
                Last edited by coder; 15 September 2019, 08:22 PM.
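
                To make the exclusive-cache idea a bit more concrete: Linux already lets a userspace process play this game with userfaultfd (4.3+), serving page faults by copying data in on first touch - roughly what a kernel would do migrating pages between an HBM tier and an XPoint tier. A toy sketch under that assumption, with an ordinary buffer standing in for the slow tier (build with -pthread; newer kernels may require root or UFFD_USER_MODE_ONLY):

                /* Toy fault server: first touch of a "fast tier" page is
                 * filled by copying from a "slow tier" buffer. */
                #define _GNU_SOURCE
                #include <fcntl.h>
                #include <linux/userfaultfd.h>
                #include <poll.h>
                #include <pthread.h>
                #include <stdio.h>
                #include <string.h>
                #include <sys/ioctl.h>
                #include <sys/mman.h>
                #include <sys/syscall.h>
                #include <unistd.h>

                static int uffd;
                static long pagesz;
                static char *slow_tier;   /* stands in for big, slow memory */

                static void *fault_server(void *arg)
                {
                    (void)arg;
                    for (;;) {
                        struct pollfd pfd = { .fd = uffd, .events = POLLIN };
                        struct uffd_msg msg;

                        poll(&pfd, 1, -1);
                        if (read(uffd, &msg, sizeof msg) != sizeof msg ||
                            msg.event != UFFD_EVENT_PAGEFAULT)
                            continue;

                        /* "Fetch" the faulting page from the slow tier. */
                        struct uffdio_copy cp = {
                            .dst = msg.arg.pagefault.address & ~(pagesz - 1),
                            .src = (unsigned long)slow_tier,
                            .len = pagesz,
                        };
                        ioctl(uffd, UFFDIO_COPY, &cp);
                    }
                    return NULL;
                }

                int main(void)
                {
                    pagesz = sysconf(_SC_PAGESIZE);
                    uffd = syscall(SYS_userfaultfd, O_CLOEXEC | O_NONBLOCK);

                    struct uffdio_api api = { .api = UFFD_API };
                    ioctl(uffd, UFFDIO_API, &api);

                    slow_tier = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                    strcpy(slow_tier, "hello from the slow tier");

                    /* The "fast tier": every first touch faults to us. */
                    char *fast = mmap(NULL, 4 * pagesz, PROT_READ | PROT_WRITE,
                                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                    struct uffdio_register reg = {
                        .range = { .start = (unsigned long)fast,
                                   .len = 4 * pagesz },
                        .mode = UFFDIO_REGISTER_MODE_MISSING,
                    };
                    ioctl(uffd, UFFDIO_REGISTER, &reg);

                    pthread_t t;
                    pthread_create(&t, NULL, fault_server, NULL);

                    printf("%s\n", fast);               /* fault -> served */
                    printf("%s\n", fast + 2 * pagesz);  /* another page */
                    return 0;
                }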



                • #38
                  Originally posted by aaronw View Post
                  While addressable DRAM is nowhere close to the 256 TiB limit, this becomes important for memory mapping large data sets. Paging is also used for memory-mapped files, for example.
                  Out of curiosity, what filesystems are typically used for this?

                  Originally posted by aaronw View Post
                  There doesn't have to be physical memory to back every page entry.
                  Yes, that's largely the distinction between physical and virtual addresses.



                  • #39
                    I don't know which filesystem is used the most, but there are a number of filesystems that can scale. I also know XFS can scale into the petabyte range. EXT4 is not used, since it doesn't do well over 16 TB. Many high-performance computing systems use the Lustre filesystem (used in over 60 of the top 100 fastest supercomputers). There are also a number of other high-performance distributed filesystems available for Linux.



                    • #40
                      Originally posted by aaronw View Post
                      Many high-performance computing systems use the Lustre filesystem (used in over 60 of the top 100 fastest supercomputers). There are also a number of other high-performance distributed filesystems available for Linux.
                      I have doubts about support for mmap() among distributed filesystems. That's one reason I asked.
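
                      That's at least easy to probe for the all-or-nothing case: mmap(2) documents ENODEV for files on filesystems that don't support memory mapping at all. A quick hypothetical check:

                      /* Probe whether the filesystem under FILE supports mmap(). */
                      #include <errno.h>
                      #include <fcntl.h>
                      #include <stdio.h>
                      #include <sys/mman.h>
                      #include <unistd.h>

                      int main(int argc, char **argv)
                      {
                          if (argc != 2) {
                              fprintf(stderr, "usage: %s FILE\n", argv[0]);
                              return 2;
                          }
                          int fd = open(argv[1], O_RDONLY);
                          if (fd < 0) { perror("open"); return 1; }

                          void *p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
                          if (p == MAP_FAILED) {
                              /* ENODEV: filesystem doesn't support mmap. */
                              perror("mmap");
                              close(fd);
                              return errno == ENODEV ? 1 : 2;
                          }
                          puts("mmap works here");
                          munmap(p, 4096);
                          close(fd);
                          return 0;
                      }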
