Intel Gets Back To Working On Linear Address Masking Support For The Linux Kernel



    Phoronix: Intel Gets Back To Working On Linear Address Masking Support For The Linux Kernel

    Back in December 2020, Intel's programming reference manual was updated to cover Linear Address Masking (LAM) as a future CPU feature, and there was some GNU toolchain activity around LAM, but not much to report on the effort since then -- until today. A revised "request for comments" series has been posted for the Intel Linear Address Masking enabling in the Linux kernel, which allows untranslated bits of 64-bit linear addresses to be used for storing arbitrary software metadata...
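
    As a rough sketch of what LAM is for (not from the article, and the 57-bit address/tag split below is only an illustrative assumption, not the exact LAM layout): software pointer tagging today has to strip the metadata bits from every pointer before dereferencing it, because the CPU faults on non-canonical addresses. With LAM the hardware ignores the untranslated upper bits on data accesses, so that explicit untagging step can largely go away.

    ```c
    /* Hypothetical illustration of software pointer tagging (not kernel code).
     * TAG_SHIFT = 57 is an assumption for the example, not the LAM spec. */
    #include <stdint.h>
    #include <stdio.h>

    #define TAG_SHIFT 57
    #define ADDR_MASK ((1ULL << TAG_SHIFT) - 1)

    /* Stash a small metadata value in the untranslated upper bits. */
    static void *tag_ptr(void *p, uint64_t tag)
    {
        return (void *)(((uintptr_t)p & ADDR_MASK) | (tag << TAG_SHIFT));
    }

    /* Without LAM, every dereference must strip the tag first;
     * with LAM enabled, the CPU ignores these bits on data accesses. */
    static void *untag_ptr(void *p)
    {
        return (void *)((uintptr_t)p & ADDR_MASK);
    }

    int main(void)
    {
        int value = 42;
        void *tagged = tag_ptr(&value, 0x2a); /* pointer now carries metadata */
        int *plain = untag_ptr(tagged);       /* mandatory today, redundant under LAM */
        printf("%d\n", *plain);
        return 0;
    }
    ```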


  • #2
    One of the great disasters was that the IBM System/360 architecture said that the top byte of its 32-bit addresses was ignored and thus could be used for anything the programmer wanted. Eventually, those bits were needed for addresses, but almost all of the code base had used those bits for flags. The ABI used those bits. Even the subroutine call instruction used those bits.

    Luckily for me, I stopped programming for that architecture before the days of reckoning. I don't even know the solution. Probably a mode to run old code. Probably per-job.

    It seems amazing now, but the biggest machine in Canada, in 1968, had 1 megabyte of memory. It took a while before the 16 MB (24-bit) limit pinched too hard.

    Lesson: think long and hard before stealing bits from addresses.

    I admit that it is really appealing to use those bits to simulate tag bits. And 64-bit addresses seem really long.
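
    A minimal sketch of the trap being described (hypothetical code, not the actual S/360 ABI): once an address no longer fits in the bits left over after the flags, the packing silently corrupts it, and every consumer has to remember to mask.

    ```c
    /* Hypothetical 24-bit address / top-byte-flag packing, echoing the S/360 habit. */
    #include <stdint.h>
    #include <assert.h>

    #define ADDR_BITS 24
    #define ADDR_MASK ((1u << ADDR_BITS) - 1)   /* low 24 bits hold the address  */
    #define FLAG_LAST (1u << 31)                /* a flag hidden in the top byte */

    static uint32_t pack(uint32_t addr, int is_last)
    {
        /* Silently truncates addr once it needs more than 24 bits. */
        return (addr & ADDR_MASK) | (is_last ? FLAG_LAST : 0u);
    }

    static uint32_t unpack_addr(uint32_t word)
    {
        return word & ADDR_MASK;   /* every consumer must remember to mask */
    }

    int main(void)
    {
        assert(unpack_addr(pack(0x00ABCDEFu, 1)) == 0x00ABCDEFu); /* fits: fine           */
        assert(unpack_addr(pack(0x01ABCDEFu, 1)) != 0x01ABCDEFu); /* 25-bit address: lost */
        return 0;
    }
    ```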



    • #3
      Originally posted by Hugh View Post
      One of the great disasters was that the IBM System/360 architecture said that the top byte of its 32-bit addresses was ignored and thus could be used for anything the programmer wanted.
      The Amiga hit the same thing: software written for the 68000 wasn't compatible with the 68020 because developers began using the top byte of addresses, which the 68000's 24-bit addressing simply ignored...

      Software Failure. Press left mouse button to continue.



      • #4
        Originally posted by Hugh View Post
        Lesson: think long and hard before stealing bits from addresses. I admit that it is really appealing to use those bits to simulate tag bits. And 64-bit addresses seem really long.
        That's a problem for the year 2100, if we or x86/ARM make it that far.



        • #5
          Originally posted by Hugh View Post
          One of the great disasters was that the IBM System/360 architecture said that the top byte of its 32-bit addresses was ignored and thus could be used for anything the programmer wanted. Eventually, those bits were needed for addresses, but almost all of the code base had used those bits for flags. The ABI used those bits. Even the subroutine call instruction used those bits.

          Luckily for me, I stopped programming for that architecture before the days of reckoning. I don't even know the solution. Probably a mode to run old code. Probably per-job.
          My low-quality recollection is that most of the programs never got fixed; however, the next-gen 370 hardware was designed to assume 24-bit addresses unless a PSW bit was set, so per-job as you surmised. I was in an IBM shop between 1979 and 1981, and when I left most of the applications were still 24-bit. What changed was that we could run more applications in parallel, along with significantly more terminal users.

          I imagine some of those 24-bit apps are still running today on the latest IBM Z mainframes since they still support the IBM 360 family's 24-bit addressing and instruction set.

          I do remember being hugely impressed with how many online users we could support on what was essentially a 1 MIPS system with a few megabytes of memory. The TP monitor ("Com-plete", I think) kept all the user workspaces on hard disk and rolled them into main memory whenever a user hit the Enter key. We ran CICS as well, but with Com-plete we could support close to 10x the users on the same HW.

          What I'm struggling to remember is what our 370 system was. I remember it being a 370-155 when I joined and am pretty sure it was upgraded to a 158 while I was there, but I also remember the upgrade was basically moving a jumper (after paying a lot of money), and that doesn't fit with all the historical information. What I read today is that the 155 did not have an MMU ("DAT box"), and that (a) upgrading a 155 required the addition of a DAT box and (b) the upgraded system was called a 155-II, not a 158.
          Last edited by bridgman; 14 May 2022, 04:48 PM.

