Linux Kernel Orphans Itanium Support, Linus Torvalds Acknowledges Its Death


    Phoronix: Linux Kernel Orphans Itanium Support, Linus Torvalds Acknowledges Its Death

    Just last week I wrote about Itanium IA-64 support in the Linux kernel being broken for a month during the Linux 5.11 kernel cycle. That was fixed, but since then another regression came to light that had been affecting all IA-64 hardware since a patch was merged back in October. A fix for that latest regression has landed, while in the process the Itanium architecture is now marked as orphaned...


  • #2
    Omae wa mou shindeiru ("you are already dead"), Itanium-san.



    • #3
      Hey Intel, time to switch to RISC-V; you won't waste half your transistors dealing with a totally bloated, obsolete ISA like x86_64, and its support won't go away anytime soon like your unloved IA-64's.



      • #4
        Originally posted by rmfx View Post
        Hey Intel, time to switch to RISC-V; you won't waste half your transistors dealing with a totally bloated, obsolete ISA like x86_64, and its support won't go away anytime soon like your unloved IA-64's.
        Intel could also switch to the OpenPOWER ISA; that would be sane too. But yes, the Intel ISA is not sane, it is insane.



        • #5
          Originally posted by rmfx View Post
          Hey Intel, time to switch to risc-v, you wont waste half your transistors dealing with a totally bloated obsolete ISA like x86_64, and support is not leaving anytime soon like your unloved IA64.
          ISA matters, but not nearly as much as the superscalar technology that implements it (how wide the pipeline is, how high an IPC it sustains).

          If the superscalar implementation were the same, the performance gain from something like RISC-V would be a single-digit percentage. Meh ...

          See for instance this analysis: https://scholarworks.wmich.edu/cgi/v...masters_theses

          Figures 4.1 & 4.2.

          Do you need additional resources to implement the ISA? Yes. Do those matter? Well, it depends. On a mobile chip they might, from a power consumption perspective (and we're really not in the 1900s anymore with transistor budgets ... ). On a desktop/server chip? Absolutely not.
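The "same superscalar implementation" argument can be sketched with the iron law of processor performance (time = instructions × CPI × cycle time). A minimal back-of-envelope in Python; the numbers are illustrative assumptions (a 4-wide, 3 GHz core and a hypothetical 5% dynamic-instruction-count difference), not measurements from the thesis linked above:

```python
# Iron law of performance: exec time = instructions / (IPC * frequency).
# If two cores share the same superscalar implementation (same IPC, same
# clock), the only lever the ISA has left is dynamic instruction count.

def exec_time(instructions: float, ipc: float, freq_hz: float) -> float:
    """Seconds to retire `instructions` at `ipc` instructions/cycle."""
    return instructions / (ipc * freq_hz)

# Hypothetical: identical 4-wide, 3 GHz cores on both sides.
x86_time   = exec_time(1.00e9, 4.0, 3e9)  # baseline dynamic instruction count
riscv_time = exec_time(0.95e9, 4.0, 3e9)  # assume 5% fewer dynamic instructions

speedup = x86_time / riscv_time  # = 1/0.95, roughly a 5% gain
```

With everything else held equal, a 5% instruction-count advantage yields only about a 5% speedup, which is the single-digit-percentage point the post is making.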
          Last edited by vladpetric; 28 January 2021, 04:31 PM.



          • #6
            Itanium had a two decade run
            That's being awfully generous, don't you think? The only vendor who sold Itanium systems in any quantity was HP. The rest moved very few units, and then began dropping Itanium from their product lines around 2005, not even five years after its introduction.

            The only reason HP hung on so long is that they were so heavily invested in Itanium's development and had put all their eggs into the Itanium basket for their lucrative high-end server products.

            Their final refresh, "Kittson", was not even a real refresh. Launched *five years* after the previous generation, "Poulson", it offered literally no changes whatsoever. The Poulson 9520 is identical to the Kittson 9720. The Poulson 9540 is identical to the Kittson 9740. Not even a clock-speed bump. Five years of zero development. Nothing but a name change!

            The higher-end 9750 and 9760 SKUs differed from the previous generation only by a 133 MHz clock-speed increase, a 5% bump after five years. Hardly worth mentioning, much less purchasing. The saddest part of it all is that the superior DEC Alpha and PA-RISC architectures both died to bring Itanium to market. Good riddance to this trash.
            Last edited by torsionbar28; 28 January 2021, 04:38 PM.



            • #7
              Originally posted by torsionbar28 View Post
              That's being awfully generous, don't you think? The only vendor who sold Itanium systems in any quantity was HP. The rest moved very few units, and then began dropping Itanium from their product lines around 2005, not even five years after its introduction.

              The only reason HP hung on so long is that they were so heavily invested in Itanium's development and had put all their eggs into the Itanium basket for their lucrative high-end server products.

              Their final refresh, "Kittson", was not even a real refresh. Launched *five years* after the previous generation, "Poulson", it offered literally no changes whatsoever. The Poulson 9520 is identical to the Kittson 9720. The Poulson 9540 is identical to the Kittson 9740. Not even a clock-speed bump. Five years of zero development. Nothing but a name change!

              The higher-end 9750 and 9760 SKUs differed from the previous generation only by a 133 MHz clock-speed increase, a 5% bump after five years. Hardly worth mentioning, much less purchasing. The saddest part of it all is that the superior DEC Alpha and PA-RISC architectures both died to bring Itanium to market. Good riddance to this trash.
              Hey! Larger version numbers are always better :-p



              • #8
                Originally posted by vladpetric View Post

                ISA matters, but not nearly as much as the superscalar technology that implements it (how wide the pipeline is, how high an IPC it sustains).

                If the superscalar implementation were the same, the performance gain from something like RISC-V would be a single-digit percentage. Meh ...

                See for instance this analysis: https://scholarworks.wmich.edu/cgi/v...masters_theses

                Figures 4.1 & 4.2.

                Do you need additional resources to implement the ISA? Yes. Do those matter? Well, it depends. On a mobile chip they might, from a power consumption perspective (and we're really not in the 1900s anymore with transistor budgets ... ). On a desktop/server chip? Absolutely not.
                Good luck building a wider-issue x86 decoder than current Intel and AMD designs; all the fancy superscalar speculation comes after that. There is a reason Apple could pull off the M1: https://www.youtube.com/watch?v=cAZ7EWUw3qo and a major reason is: not x86.



                • #9
                  Originally posted by rene View Post

                  Good luck building a wider-issue x86 decoder than current Intel and AMD designs; all the fancy superscalar speculation comes after that. There is a reason Apple could pull off the M1: https://www.youtube.com/watch?v=cAZ7EWUw3qo and a major reason is: not x86.
                  The video you're quoting seems to say that Apple M1 IPC is really high. No disagreements there. Then there are a lot of unsubstantiated opinions. What a waste of time.

                  Is it really annoying to decode instructions when they are of arbitrary size in bytes? Yes. Is it doable, though? Yes (and I've talked to x86 designers as well).

                  I think you're also forgetting about the instruction trace caches in modern designs, which take decoding off the critical path for a lot of code.

                  I think you're attempting a motte-and-bailey argument. I don't have a problem with you claiming that Apple M1 IPC is high. But you don't really show any evidence for your main, rather incendiary thesis.

                  BTW, it's a lot harder to branch-predict three-instruction basic blocks that are not contiguous (essentially, you have two taken branches and three blocks of instructions) than it is to decode x86. Yes, you do get a bit of an O(bytes**2) factor there, but you know what ... we have plenty of transistors.
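The decode-boundary point can be illustrated with a toy sketch (a hypothetical one-byte-header ISA, not real x86 encoding): with variable-length instructions, the start of instruction N+1 is only known after instruction N has been at least partially decoded, which is the sequential dependency that makes wide parallel decode hard; a fixed-width ISA knows every boundary up front.

```python
# Toy illustration: finding instruction boundaries in a byte stream.
# Variable-length case: each boundary depends on decoding the previous
# instruction, so the scan is inherently sequential.

def variable_length_boundaries(code: bytes) -> list[int]:
    """Hypothetical ISA: low two bits of the first byte encode length 1-4."""
    boundaries, pc = [], 0
    while pc < len(code):
        boundaries.append(pc)
        length = (code[pc] & 0b11) + 1  # must inspect the byte to learn length
        pc += length                    # next boundary depends on this decode
    return boundaries

def fixed_width_boundaries(code: bytes, width: int = 4) -> list[int]:
    """Fixed-width ISA: every boundary is known without decoding anything."""
    return list(range(0, len(code), width))
```

Real wide x86 decoders work around the dependency by speculatively decoding at every byte offset and discarding the wrong starts, which is roughly where the O(bytes**2) cost mentioned above comes from.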

                  PA-RISC BTW was started by Jim Keller. He basically designs super-fast processors for all the major ISAs - check the list of what he did (including the final Alpha and Ryzen): https://en.wikipedia.org/wiki/Jim_Keller_(engineer)
                  Last edited by vladpetric; 28 January 2021, 06:56 PM.



                  • #10
                    Originally posted by vladpetric View Post

                    The video you're quoting seems to say that Apple M1 IPC is really high. No disagreements there. Then there are a lot of unsubstantiated opinions. What a waste of time.

                    Is it really annoying to decode instructions when they are of arbitrary size in bytes? Yes. Is it doable, though? Yes (and I've talked to x86 designers as well).

                    I think you're also forgetting about the instruction trace caches in modern designs, which take decoding off the critical path for a lot of code.

                    I think you're attempting a motte-and-bailey argument. I don't have a problem with you claiming that Apple M1 IPC is high. But you don't really show any evidence for your main, rather incendiary thesis.

                    BTW, it's a lot harder to branch-predict three-instruction basic blocks that are not contiguous (essentially, you have two taken branches and three blocks of instructions) than it is to decode x86. Yes, you do get a bit of an O(bytes**2) factor there, but you know what ... we have plenty of transistors.

                    PA-RISC BTW was started by Jim Keller. He basically designs super-fast processors for all the major ISAs - check the list of what he did (including the final Alpha and Ryzen): https://en.wikipedia.org/wiki/Jim_Keller_(engineer)
                    Jim Keller was not the brains or the lead behind AMD Ryzen; that would be Mike Clark, who was the brains behind the Zen microarchitecture and AMD's comeback on the CPU side.
                    Last edited by Rallos Zek; 28 January 2021, 10:28 PM.

