The Linux Kernel Is Preparing To Enable 5-Level Paging By Default


  • coder
    replied
    Originally posted by ThoreauHD View Post
    I hope not, because if that's what it's about, then they're well and truly fucked. This is a race to the CPU core, not some failed SSD crap dangling off the side.
    I wouldn't call it crap. First gen wasn't as durable as they claimed and it's taking them longer to scale up densities, but I wouldn't count it out. It is much faster than NAND and still more economical & denser than DRAM. I think it definitely has a place in the storage hierarchy and both Intel & Micron are (independently) moving forward with the tech.

    Originally posted by ThoreauHD View Post
    We are at the end of the GHz race. 5 GHz is the cap, and it only gets worse from here with die shrinks. The new GHz is stacking everything as close to the CPU core as possible, as small as possible, with as little heat as possible.
    I'm all for HBM2 or whatever, but it's pretty insane to talk about terabytes of it stacked next to the CPU. That's not going to happen.

    And HBM2 isn't a simple substitute for frequency scaling. Some workloads it will barely help; for others, you'll get a one-time boost from the extra bandwidth or lower latency.

    But, again, it can't scale to server-level capacities, so it's really not relevant for big-memory use cases.



  • aaronw
    replied
    While physical DRAM is nowhere close to the 256 TiB limit, this becomes important for memory-mapping large data sets. Paging is also used for memory-mapped files, for example; there doesn't have to be physical memory backing every page table entry.
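    To make that concrete, here's a minimal sketch (my own illustration; the size is arbitrary) of reserving a terabyte of virtual address space with mmap(2) on Linux. With MAP_NORESERVE, nothing is committed up front; physical pages are only allocated for the parts you actually touch:
    Code:
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Reserve 1 TiB of virtual address space -- far more than the
         * physical RAM of any desktop.  Nothing backs this range until
         * individual pages are written to. */
        size_t len = 1ULL << 40;
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        ((char *)p)[0] = 1;   /* touching one page commits just 4 KiB */
        printf("1 TiB of virtual space reserved at %p\n", p);
        munmap(p, len);
        return 0;
    }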



  • caligula
    replied
    Originally posted by coder View Post
    How? Using some kind of sub-atomic memory technology?
    Not necessarily, just more modules. Currently I think 2-3 generations of process shrinking are technically feasible, and each shrink may double the capacity. On top of that you have 3D stacking, and high-end laptops could use 10 memory channels instead of 2 in the future. It's also possible that they'll come up with something other than DRAM, some QLC NAND / DRAM hybrid perhaps.



  • ThoreauHD
    replied
    Originally posted by coder View Post
    IMO, it's all about supporting Optane DIMMs. Nonvolatile storage is the only way I see them getting to petabytes.
    I hope not, because if that's what it's about, then they're well and truly fucked. This is a race to the CPU core, not some failed SSD crap dangling off the side.

    We are at the end of the GHz race. 5 GHz is the cap, and it only gets worse from here with die shrinks. The new GHz is stacking everything as close to the CPU core as possible, as small as possible, with as little heat as possible.



  • coder
    replied
    Originally posted by caligula View Post
    It will probably take a similar amount of time (20 years) to reach 32 TB of RAM in every laptop. It's easy to imagine that even small workstations and servers will have 10-100 times more capacity.
    How? Using some kind of sub-atomic memory technology?



  • Paul Frederick
    replied
    Intel really wants me to configure custom kernels, doesn't it? Because I certainly don't need 5-level paging support. I've got a measly 8 GB of RAM over here, which is 4 times more than I ever use.
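    (For what it's worth, a custom kernel shouldn't be needed: as far as I can tell, 5-level tables are only used on CPUs that actually have LA57, and, assuming the mainline option and parameter names, the feature can also be switched off at boot.)
    Code:
    # Build-time: distro kernels are expected to ship with this enabled
    CONFIG_X86_5LEVEL=y
    # Boot-time: append this to the kernel command line to force
    # 4-level paging even on LA57-capable hardware
    no5lvl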



  • Weasel
    replied
    Originally posted by caligula View Post
    It's only a matter of time. My first laptop had 16 MB of RAM (in 1997); my latest has had 32 GB for almost a year now. It will probably take a similar amount of time (20 years) to reach 32 TB of RAM in every laptop. It's easy to imagine that even small workstations and servers will have 10-100 times more capacity.
    There's a thing called physical limits.



  • Weasel
    replied
    Originally posted by wizard69 View Post
    On the surface this sounds like absolute insanity. We are talking about 64-bit processors here; why would you need more than two levels of paging? Maybe I’m missing something (totally possible), but with the two address ranges, virtual and hardware, it would seem like the goal should have been to reduce the number of paging levels.
    Your technical incompetence is astounding given how much bullshit you always spew about 64-bit and "removing cruft" in general.



  • NatTuck
    replied
    Originally posted by yoshi314 View Post
    Wait, how does that work? Why such an uneven number of bytes?
    The initial AMD64 spec uses 4-level paging with 48-bit effective virtual addresses.
    • Pages are 4096 bytes = 2^12, so 12 bits of the address select a byte within a single page.
    • That leaves 36 bits of address. Each level of paging indexes into a 512-entry table. 512 = 2^9, so 9 bits per level.
    • 12 + 9 * 4 = 48.
    • Five-level paging adds another set of tables, so we go to 12 + 9 * 5 = 57.
    I doubt we would have gotten exactly this system from a completely clean design, but it's a very reasonable progression from the way 32-bit x86 two-level paging worked (12 + 2 * 10 = 32; pages were still 4 KiB; each table had 1024 entries).

    Six-level paging will be interesting. 12 + 9 * 6 = 66, but virtual addresses are capped at 64 bits, so only 7 of the top level's 9 index bits are usable and its tables will only ever be 1/4 full. That will waste a few kilobytes of memory for every process running on the system.
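    If it helps, here's a toy sketch of that arithmetic (my own illustration, not how the kernel actually walks the tables): it splits a virtual address into the 9-bit index for each level plus the 12-bit page offset.
    Code:
    #include <stdio.h>
    #include <stdint.h>

    /* Split an x86-64 virtual address into its per-level page-table
     * indexes.  Each level indexes a 512-entry table (9 bits); the low
     * 12 bits select a byte within a 4 KiB page. */
    static void decompose(uint64_t vaddr, int levels)
    {
        printf("0x%016llx with %d-level paging:\n",
               (unsigned long long)vaddr, levels);
        for (int lvl = levels; lvl >= 1; lvl--) {
            int shift = 12 + 9 * (lvl - 1);
            printf("  level %d index: %3llu\n", lvl,
                   (unsigned long long)((vaddr >> shift) & 0x1FF));
        }
        printf("  page offset:   %llu\n",
               (unsigned long long)(vaddr & 0xFFF));
    }

    int main(void)
    {
        decompose(0x00007f1234567abcULL, 4);   /* 48-bit address */
        decompose(0x00ff123456789abcULL, 5);   /* 57-bit address */
        return 0;
    }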
    Last edited by NatTuck; 09-15-2019, 10:07 AM.



  • bug77
    replied
    I know this is about servers, but I'm going to leave this here anyway:
    All the memory I have owned over the past 20 years or so, combined, does not come close to the current limits.

