The Linux Kernel Is Preparing To Enable 5-Level Paging By Default

  • #11
    Originally posted by torsionbar28:
    How are we in jeopardy of hitting this 256 TiB limit today or in the near future?
    From the documentation on 5-level paging:
    Original x86-64 was limited by 4-level paging to 256 TiB of virtual address space and 64 TiB of physical address space. We are already bumping into this limit: some vendors offer servers with 64 TiB of memory today.
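The arithmetic behind those limits is easy to check. A quick Python sketch, assuming x86-64's standard 4 KiB pages (12 offset bits) and 512-entry page tables (9 index bits per level):

```python
# Each x86-64 page-table level indexes 9 bits (512 entries per table);
# the 4 KiB page size contributes a 12-bit offset.
def virtual_bits(levels: int, index_bits: int = 9, offset_bits: int = 12) -> int:
    return levels * index_bits + offset_bits

TiB = 2 ** 40
PiB = 2 ** 50

# 4-level paging: 48-bit virtual addresses -> 256 TiB
print(2 ** virtual_bits(4) // TiB, "TiB")  # 256 TiB
# 5-level paging: 57-bit virtual addresses -> 128 PiB
print(2 ** virtual_bits(5) // PiB, "PiB")  # 128 PiB
```

So adding one level multiplies the virtual address space by 512, which is why 5-level paging jumps straight from 256 TiB to 128 PiB.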

    • #12
      Originally posted by wizard69:
      On the surface this sounds like absolute insanity. We are talking about 64-bit processors here; why would you need more than two levels of paging? Maybe I'm missing something (totally possible), but with the two address ranges, virtual and hardware, it would seem like the goal should have been to reduce the number of paging levels.

      This makes me wonder how this impacts ARM and Power. It also makes me wonder if there is a good web site that compares physical and virtual addressing for these processors.
      Originally posted by johannesburgel:
      You REALLY should google what paging is for.
      Originally posted by pkunk:
      The issue is that the 64-bit space is so huge that if you try to map it directly with only a few levels, the page table itself will eat up all your memory and much more.
      Actually, the comment on Power was on the mark. PowerPC uses hash-table paging so does not need to introduce a new translation layer just because that much memory could exist in a system.

      • #13
        Originally posted by torsionbar28:
        I probably don't understand how OS paging works here, but why do we need to increase from 256 TiB limit, when today, Xeon can only do 768 GiB and EPYC can do 2 TiB per socket? How are we in jeopardy of hitting this 256 TiB limit today or in the near future?
        Servers can use non-RAM storage as RAM cache. That alone is a reason to increase it to as large as is possible at any given time.

        • #14
          Are there any CPUs that support this besides QEMU?

          • #15
            Originally posted by torsionbar28:
            I probably don't understand how OS paging works here, but why do we need to increase from 256 TiB limit, when today, Xeon can only do 768 GiB and EPYC can do 2 TiB per socket? How are we in jeopardy of hitting this 256 TiB limit today or in the near future?
            Intel Xeon Platinum supports 4.5 TiB of RAM; see https://ark.intel.com/content/www/us...-2-70-ghz.html

            • #16
              Originally posted by ThoreauHD:
              This also smells like Zen 3 hbm/3D die stacking prep to me.
              IMO, it's all about supporting Optane DIMMs. Nonvolatile storage is the only way I see them getting to petabytes.

              • #17
                Originally posted by torsionbar28:
                today, Xeon can only do 768 GiB and EPYC can do 2 TiB per socket? How are we in jeopardy of hitting this 256 TiB limit today or in the near future?
                The 8280L Xeon can allegedly support up to 4.5 TB of memory, and that's per-CPU. I believe it scales up to 8-socket configurations.

                https://ark.intel.com/content/www/us...-2-70-ghz.html

                Edit: Oops, I see someone beat me to the punch. Well, Setif's post doesn't mention multi-socket, so I'll leave this here.
                Last edited by coder; 14 September 2019, 05:27 PM.

                • #18
                  Originally posted by abott:
                  Servers can use non-RAM storage as RAM cache. That alone is a reason to increase it to as large as is possible at any given time.
                  That would only explain the virtual address space increase, but they also increased the physical address space to 4 PiB.
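For context, those two limits correspond to address widths: the old 64 TiB physical cap is a 46-bit physical address, and 5-level paging's 4 PiB cap is 52 bits (figures from the kernel's 5-level paging documentation). A quick sanity check:

```python
TiB = 2 ** 40
PiB = 2 ** 50

# Physical address width: 46 bits with 4-level paging, 52 bits with 5-level.
old_phys = 2 ** 46  # 64 TiB
new_phys = 2 ** 52  # 4 PiB

print(old_phys // TiB, "TiB")  # 64 TiB
print(new_phys // PiB, "PiB")  # 4 PiB
```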

                  • #19
                    Originally posted by Space Heater:
                    From the documentation on 5-level paging:
                    Original x86-64 was limited by 4-level paging to 256 TiB of virtual address space and 64 TiB of physical address space. We are already bumping into this limit: some vendors offer servers with 64 TiB of memory today.
                    Ok, so with "some vendors" hitting 64 TiB of physical memory, that must mean either 15 sockets of the latest Xeon Platinum (4.5 TiB each), or 32 sockets of EPYC (2 TiB each). That's a huge machine, something in the class of an HP Superdome. IME, those types of machines typically support hardware partitioning, so they rarely run a single OS instance. In my 10 years of working on Superdome (and AlphaServer GS before that) I don't think I ran into even a single customer who was running a single OS instance across all the sockets. I guess I can see the need to increase this memory limit, but we're talking about a very niche use case right now.
                    Last edited by torsionbar28; 14 September 2019, 09:03 PM.

                    • #20
                      Originally posted by chithanh:
                      Actually, the comment on Power was on the mark. PowerPC uses hash-table paging so does not need to introduce a new translation layer just because that much memory could exist in a system.
                      And Power ISA 3.0 (POWER9) introduced radix tree page tables because, guess what, hashed page tables suck for cache locality.
