Itanium IA-64 Was Busted In The Upstream, Default Linux Kernel Build The Past Month


  • #11
    Taken as a whole, I would say Itanium was a very impressive engineering undertaking, and in some ways it was quite successful. It was a market failure, yes, but so are many architectures. I would not fully discount the tech in the Itanium. I am sure that in the probably not too distant future, pieces of it will be resurrected. Tech developed in GPUs is solving a lot of the issues with Itanium. The one issue that cannot be fixed is backwards compatibility.

    Pieces of Itanium will probably end up, unnoticed, in Xeon Phi or some future Intel "accessory" processor or GPU.



    • #12
      Originally posted by zexelon View Post
      Taken as a whole, I would say Itanium was a very impressive engineering undertaking, and in some ways it was quite successful. It was a market failure, yes, but so are many architectures. I would not fully discount the tech in the Itanium. I am sure that in the probably not too distant future, pieces of it will be resurrected. Tech developed in GPUs is solving a lot of the issues with Itanium. The one issue that cannot be fixed is backwards compatibility.

      Pieces of Itanium will probably end up, unnoticed, in Xeon Phi or some future Intel "accessory" processor or GPU.
      Massive failure is more like it - see my other answer as to why.



      • #13
        vladpetric For a massive failure, it did much, much better than a large swath of other processors. In 2008, for example, it was one of the top five most deployed processors [1]!

        It was a market failure in the end, yes, fully agreed; "massive failure" I would disagree with. It was also a technical success: it did exactly what it was designed to do, and later iterations actually performed extremely well when running IA-64 code. It was fully supported up until 2019, with support winding down now. The Itanium compiler requirements have led to significant advances in compiler design and implementation. Computing as a whole is actually quite a bit further ahead because of the Itanium... even though it was, as a whole, a market failure. The most significant lesson is that a project as complex and risky as designing a completely new CPU architecture should never be undertaken by a single CPU company. Also, compatibility is the single most significant requirement of any new architecture if it is to see mass adoption.

        ARM, for example, is slowly making inroads into the mainstream, but this is based on its compatibility with a huge historical base of software developed outside the mainstream.

        Citations:
        1. https://web.archive.org/web/20160303...8-story03.html



        • #14
          Originally posted by zexelon View Post
          vladpetric For a massive failure, it did much, much better than a large swath of other processors. In 2008, for example, it was one of the top five most deployed processors [1]!

          It was a market failure in the end, yes, fully agreed; "massive failure" I would disagree with. It was also a technical success: it did exactly what it was designed to do, and later iterations actually performed extremely well when running IA-64 code. It was fully supported up until 2019, with support winding down now. The Itanium compiler requirements have led to significant advances in compiler design and implementation. Computing as a whole is actually quite a bit further ahead because of the Itanium... even though it was, as a whole, a market failure. The most significant lesson is that a project as complex and risky as designing a completely new CPU architecture should never be undertaken by a single CPU company. Also, compatibility is the single most significant requirement of any new architecture if it is to see mass adoption.

          ARM, for example, is slowly making inroads into the mainstream, but this is based on its compatibility with a huge historical base of software developed outside the mainstream.

          Citations:
          1. https://web.archive.org/web/20160303...8-story03.html
          You're completely ignoring my cache argument.

          Were there sales? Yes, but you need to keep in mind that:

          1. Itanium had a monopoly in some super narrow sense - if you wanted to run HP-UX, that was your only choice.

          2. Professional bullshitters ... I mean business people ... can convince other business people to buy their stuff, even when it doesn't work well.

          The sales were in fact always bad:

          https://www.extremetech.com/computin...itanium-family

          As for your claims that it performed well ... that's puffery.

          The difference between engineering and politics is that mother nature doesn't give a FF about bs.
          Last edited by vladpetric; 18 January 2021, 12:45 PM.



          • #15
            Originally posted by vladpetric View Post

            You're completely ignoring my cache argument.

            Were there sales? Yes, but you need to keep in mind that:

             1. Itanium had a monopoly in some super narrow sense - if you wanted to run HP-UX, that was your only choice.

            2. Professional bullshitters ... I mean business people ... can convince other business people to buy their stuff, even when it doesn't work well.

            The sales were in fact always bad:

            https://www.extremetech.com/computin...itanium-family

            As for your claims that it performed well ... that's puffery.

             The difference between engineering and politics is that mother nature doesn't give a FF about bs.
             I apologize if I ignored your specific cache argument. It was not my intent to ignore it, but rather to address the larger picture rather than a specific failure. Itanium is full of many specific failures... but then so was the Netburst architecture... Netburst had many, many specific failures and yet was financially a huge success for Intel. Netburst dominated until the first AMD64 cores, and even then AMD never really unseated Intel in the market, but they did get far enough to force Intel's hand when it came to 64-bit processing.

             The cool thing about the cache argument is that it could be addressed by better compilers, something most architectures now benefit from! The cache argument is also not unique to Itanium; it is something all CPU engineers must face. Cache design and its ramifications touch every aspect of CPU design, from production cost to performance, and there are tradeoffs everywhere.

             The cache system in the Itanium was largely recovered by better compiler design as the platform matured (granted, this happened far too late to save the design).
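
             To make the "fix it in the compiler" idea concrete, here is a minimal, hypothetical C sketch of that kind of cache handling done in software: it uses the GCC/Clang __builtin_prefetch builtin to request data a few iterations ahead. The function and the prefetch distance of 8 are illustrative assumptions, not anything taken from the Itanium toolchain.

             Code:
             #include <stddef.h>

             /* Hand-written software prefetching, roughly what an IA-64-style compiler
              * was expected to insert automatically. The distance of 8 iterations is a
              * guess and would normally be tuned to memory latency and loop cost. */
             float dot(const float *a, const float *b, size_t n)
             {
                 float sum = 0.0f;
                 for (size_t i = 0; i < n; i++) {
                     if (i + 8 < n) {
                         __builtin_prefetch(&a[i + 8], 0, 1);  /* read, low temporal locality */
                         __builtin_prefetch(&b[i + 8], 0, 1);
                     }
                     sum += a[i] * b[i];
                 }
                 return sum;
             }

             Whether hints like these actually pay off depends heavily on the memory system and the access pattern, which is exactly the tradeoff being argued about here.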

             Reality must always take precedence over public relations, for mother nature is not easily fooled. - Richard Feynman



            • #16
              Originally posted by zexelon View Post

               I apologize if I ignored your specific cache argument. It was not my intent to ignore it, but rather to address the larger picture rather than a specific failure. Itanium is full of many specific failures... but then so was the Netburst architecture... Netburst had many, many specific failures and yet was financially a huge success for Intel. Netburst dominated until the first AMD64 cores, and even then AMD never really unseated Intel in the market, but they did get far enough to force Intel's hand when it came to 64-bit processing.

               The cool thing about the cache argument is that it could be addressed by better compilers, something most architectures now benefit from! The cache argument is also not unique to Itanium; it is something all CPU engineers must face. Cache design and its ramifications touch every aspect of CPU design, from production cost to performance, and there are tradeoffs everywhere.

               The cache system in the Itanium was largely recovered by better compiler design as the platform matured (granted, this happened far too late to save the design).

               Reality must always take precedence over public relations, for mother nature is not easily fooled. - Richard Feynman
               I'm sorry, it wasn't. That was an ideological argument of the compiler writers. They really thought that by pouring more resources into the compilers, they could do that (improve caching). But in the end, nothing came of it. If you say it was able to do these wonderful things, can you point to a paper showing the performance improvements?

              (FWIW, I knew people who worked on the Itanium compiler team at Intel).



              • #17

                Originally posted by vladpetric View Post

                 I'm sorry, it wasn't. That was an ideological argument of the compiler writers. They really thought that by pouring more resources into the compilers, they could do that (improve caching). But in the end, nothing came of it. If you say it was able to do these wonderful things, can you point to a paper showing the performance improvements?

                (FWIW, I knew people who worked on the Itanium compiler team at Intel).
                 Well, we shall disagree on this point, but it's not a hill worth dying on; the reality is that the Itanium was a failure in the market... the failure modes were complex and covered the full project, from hardware to software. But Netburst also shared a number of those failure modes and yet was a huge financial success, so I would point to IA-64 itself as probably the single most significant failure point. In the end AMD brought another band-aid to the big ball of band-aids that is x86 and made x86_64.



                • #18
                  Originally posted by zexelon View Post


                   Well, we shall disagree on this point, but it's not a hill worth dying on; the reality is that the Itanium was a failure in the market... the failure modes were complex and covered the full project, from hardware to software. But Netburst also shared a number of those failure modes and yet was a huge financial success, so I would point to IA-64 itself as probably the single most significant failure point. In the end AMD brought another band-aid to the big ball of band-aids that is x86 and made x86_64.
                   Having a discussion without any real consequences is not dying on a hill, AFAICT. I apologize if I sounded too harsh.



                  • #19
                    Originally posted by zexelon View Post
                     But Netburst also shared a number of those failure modes
                     Well, no, not the same failure modes. In some sense Netburst was actually an innovative microarchitecture: the trace cache idea was new and looked promising, and Netburst was the first full-scale experiment with that idea. As we all know, it turned out not to be as good in practice as hoped.

                     But Itanium? What was innovative there? A bunch of ideas that were known to be not as good as OoOE at exposing ILP, glued together with magical thinking that an imminent breakthrough in compilers would be able to fix it?



                    • #20
                      Originally posted by zexelon View Post

                       I would suggest this assessment might be a bit unfair. The architecture is actually incredibly elegant, very well implemented (in hardware), and amazingly flexible for future development.

                      It had several key failures though:
                       1. Intel ensured that only Intel could produce it; they never licensed it to anyone else... they thought this would give them market dominance... but in the end it eliminated market penetration.
                       2. It turns out it's borderline impossible to write an effective compiler for it. The whole architecture turns many commonly accepted computer engineering paradigms on their head... it moved all the scheduling, parallelism, and hardware complexity into the compiler... a genius idea for the hardware engineers, since it theoretically made the chip cheaper to produce. However, it made the compiler severely more complicated to produce... and as several architectures in history have shown, the very best, most amazing CPU turns out to be useless if you can't compile software for it!
                       Personally I think point 1 may have been the key one. If they could have made the market excited about it and got more CPU designers and manufacturers on board, it would have spread the risk, and development of the compilers would perhaps have progressed further!

                      This is all bonus work for the concepts of the RISC-V group... maybe some day we will see an Unobtanium-V group
                       1. This wasn't as big an issue as the fact that they entirely misread the market. Intel wanted to consolidate all of the *NIX/RISC vendors (Alpha, SPARC, MIPS, POWER) at a time when everyone was starting to move to commodity hardware. It was really only later, once it was clear which way the market was going, that they started marketing it as an x86 successor.

                       2. The compiler wasn't really the problem. The problem was that the hardware was a poor match for the code you wanted to run. It did great on computation-heavy scientific code, but more traditional general-purpose, control-flow-centric code did not fare well. Many of the instruction bundles simply didn't do much, exacerbating the poor performance on a cache miss. Additionally, they hamstrung themselves somewhat by not giving the full timing information to the compiler, as they wanted to avoid forcing a recompile on newer chips.

                       3. It was also rushed out the door and never completed to the designers' vision. One of the main designers died during the process, and Intel was under great pressure to answer the competition at the high end. The extensive speculation support, the instruction predicates and NaT bits, and the rotating registers made loop pipelining a lot easier and more straightforward. However, a pipelined loop still required an expensive preamble and post-amble, so the usefulness was limited (a rough sketch of that pattern is below). If the engineers had had another 3-5 years pre-release to really tackle and rethink some of its drawbacks, you could have ended up with something pretty decent.
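
                       To show what that preamble/post-amble cost looks like, here is a rough, hypothetical C sketch of a loop software-pipelined by hand into two overlapping stages (load, then compute-and-store). On IA-64 the rotating registers take over the shuffling done here with the loaded/next variables, but the prologue and epilogue code around the steady-state loop remain. The function and names are made up for illustration.

                       Code:
                       #include <stddef.h>

                       /* out[i] = in[i] * k, software-pipelined by hand into two stages. */
                       void scale(const float *in, float *out, size_t n, float k)
                       {
                           if (n == 0)
                               return;
                           float loaded = in[0];         /* preamble: stage A (load) for i = 0             */
                           for (size_t i = 1; i < n; i++) {
                               float next = in[i];       /* stage A (load) for iteration i                 */
                               out[i - 1] = loaded * k;  /* stage B (multiply + store) for iteration i - 1 */
                               loaded = next;
                           }
                           out[n - 1] = loaded * k;      /* post-amble: drain stage B for the last element */
                       }

                       For a loop that only runs a handful of iterations, the extra prologue and epilogue code can easily cost more than the overlapped stages save.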

                       If anything, the proposed Mill architecture is the spiritual successor of Itanium and Transmeta, with a VLIW-like arrangement aimed at handling general-purpose code.

                       [wrong]RISC-V does follow with predicated instructions[/wrong], and even x86 got CMOV, which lets the compiler eliminate some branches in code (a small illustration is below).
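
                       As a small illustration of the CMOV point (my sketch, not anything from the thread): the two functions below are semantically identical, and with optimizations enabled most x86-64 compilers will lower the ternary form - and often the branchy form too - to a conditional move instead of a conditional jump. Whether CMOV is actually emitted depends on the compiler and flags.

                       Code:
                       /* Branchy form: may compile to a compare followed by a conditional jump. */
                       int max_branch(int a, int b)
                       {
                           if (a > b)
                               return a;
                           return b;
                       }

                       /* Branchless form: commonly lowered to CMOV at -O2, so there is no branch
                        * to predict (or mispredict) at all. */
                       int max_cmov(int a, int b)
                       {
                           return (a > b) ? a : b;
                       }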
                      Last edited by WorBlux; 18 January 2021, 09:50 PM. Reason: marked incorrect statement

