Itanium IA-64 Was Busted In The Upstream, Default Linux Kernel Build The Past Month


  • Itanium IA-64 Was Busted In The Upstream, Default Linux Kernel Build The Past Month

    Phoronix: Itanium IA-64 Was Busted In The Upstream, Default Linux Kernel Build The Past Month

    While Intel formally discontinued the Itanium processors just under two years ago, the Linux software support for IA-64 continues. However, as a possible sign of the times, the Linux 5.11 kernel build for it has been broken the past month...

    http://www.phoronix.com/scan.php?pag...ux-5.11-Broken

  • #2
    How long before this terrible architecture is removed from the kernel source? It's dead, Jim.



    • #3
      Originally posted by Imroy View Post
      How long before this terrible architecture is removed from the kernel source? It's dead, Jim.
      I think HP still had some systems with it. It was an interesting architecture when introduced; it's called EPIC and is similar to VLIW, but it demands a lot from the compiler. The architecture was criticized for being slower than x86 when emulating the x86 architecture, which was kind of unfair.
      I don't know if it was just a bad idea, or if EPIC/VLIW has some merit and might be a good idea. Either way it's pretty dead, even if it was interesting.



      • #4
        Originally posted by Imroy View Post
        How long before this terrible architecture is removed from the kernel source? It's dead, Jim.
        I would suggest this assessment might be a bit unfair. The architecture is actually incredibly elegant, very well implemented (in hardware) and amazingly flexible for future development.

        It had several key failures though:
        1. Intel ensured that only Intel could produce it; they never licensed it to anyone else... they thought this would give them market dominance, but in the end it eliminated market penetration.
        2. It turns out it's borderline impossible to write an effective compiler. The whole architecture turns many commonly accepted computer engineering paradigms on their head... it moved all the scheduling, parallelism, and hardware complexity into the compiler. A genius idea for the hardware engineers, since it theoretically made the chip cheaper to produce. However, it made the compiler severely more complicated to write... and as several architectures in history have shown, the very best, most amazing CPU turns out to be useless if you can't compile software for it!
        Personally I think point 1 may have been the key one. If they could have made the market excited about it and gotten more CPU designers and manufacturers on board, it would have spread the risk, and development of the compilers would perhaps have progressed further!

        This is all a bonus lesson for the RISC-V group and its concepts... maybe some day we will see an Unobtanium-V group



        • #5
          Originally posted by zexelon View Post

          I would suggest this assessment might be a bit unfair. The architecture is actually incredibly elegant, very well implemented (in hardware) and amazingly flexible for future development.

          It had several key failures though:
          1. Intel ensured that only Intel could produce it; they never licensed it to anyone else... they thought this would give them market dominance, but in the end it eliminated market penetration.
          2. It turns out it's borderline impossible to write an effective compiler. The whole architecture turns many commonly accepted computer engineering paradigms on their head... it moved all the scheduling, parallelism, and hardware complexity into the compiler. A genius idea for the hardware engineers, since it theoretically made the chip cheaper to produce. However, it made the compiler severely more complicated to write... and as several architectures in history have shown, the very best, most amazing CPU turns out to be useless if you can't compile software for it!
          Personally I think point 1 may have been the key one. If they could have made the market excited about it and gotten more CPU designers and manufacturers on board, it would have spread the risk, and development of the compilers would perhaps have progressed further!

          This is all a bonus lesson for the RISC-V group and its concepts... maybe some day we will see an Unobtanium-V group
          I could be wrong, but IIRC, part of Itanium's problem (besides the ones you mentioned here) is that the architecture never became really popular outside of the datacenter, and the existing 64-bit AMD architecture (as well as Intel's later one) was much more forgiving and handled x86 code much better. I think Itanium was Intel's first true 64-bit hardware, but do not quote me on that.
          GOD is REAL unless declared as an INTEGER.



          • #6
            f0rmat I believe you are correct; as far as I know, the Itanium was Intel's first mass-market 64-bit CPU. It was supposed to be a complete replacement for the x86 architecture, as Intel saw severe limitations in x86 and believed it would turn into a giant ball of band-aids if they continued (i.e. where we are today...).

            Itanium was a clean-sheet design from scratch. Itanium is 64-bit... but it is not x86_64. It was 64-bit to the core (that just sounds cool); the key takeaway is that it was not an extension of x86 but a completely new, full-width 64-bit architecture, and it could not run 32-bit code (at least not initially). This is a key point: any x86 instruction had to be emulated. As anyone who has tried to run a complex architecture in QEMU on x86 without acceleration (e.g. PowerPC) knows, that is very slow and not trivial. As a result, the initial Itanium implementation just sucked hard at running x86. Later versions of Itanium did add x86 acceleration support, but it was never as seamless or performant as desired.

            From a marketing perspective, AMD did far better: they released a fully backwards-compatible extension to x86 with their AMD64 (x86_64) extension (i.e. they chose to add band-aids to the ball). This was a far less expensive proposition than a brand-new architecture, and since Intel had no desire to open Itanium to other groups to share the cost and risk of the new architecture, AMD was able to pull the rug out from under it. AMD64 was a far more open and accessible extension to x86, and one that AMD still dominated.



            • #7
              Originally posted by uid313 View Post

              I think HP still had some systems with it. It was an interesting architecture when introduced; it's called EPIC and is similar to VLIW, but it demands a lot from the compiler. The architecture was criticized for being slower than x86 when emulating the x86 architecture, which was kind of unfair.
              I don't know if it was just a bad idea, or if EPIC/VLIW has some merit and might be a good idea. Either way it's pretty dead, even if it was interesting.
              Itanium failed because the compiler, at compile time, can't schedule the cache hierarchy. It can do a passable job at other things, but caches... nope. Way too dynamic, even if you have decent profiling data.

              Up until the early 2000s, it was OK to have an in-order processor, which simply stalled whenever there was a cache miss. EPIC/VLIW is also an in-order design.

              Today, it's not - dynamic scheduling to hide the cost of cache misses all the way out to memory (end-to-end DRAM latency is about 100 ns, while a CPU cycle on a processor running at 5 GHz is a fifth of a nanosecond) is a must.
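              To put numbers on that gap, here is a back-of-the-envelope sketch using the round figures from this post (illustrative values, not measurements of any particular chip):

```python
# Rough cost of a cache miss that goes all the way to DRAM,
# using the round numbers quoted above.
dram_latency_ns = 100        # end-to-end DRAM access latency
clock_ghz = 5.0              # 5 GHz core clock
cycle_ns = 1.0 / clock_ghz   # 0.2 ns per cycle ("a fifth of a nanosecond")

stall_cycles = dram_latency_ns / cycle_ns
print(f"One miss to DRAM costs ~{stall_cycles:.0f} cycles")
# prints: One miss to DRAM costs ~500 cycles
```

              An in-order core like Itanium simply stalls for all ~500 of those cycles; an out-of-order core can keep executing independent instructions in the shadow of the miss, which is the dynamic scheduling being described.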

              Die, Itanium, Die!



              • #8
                Originally posted by zexelon View Post
                f0rmat I believe you are correct; as far as I know, the Itanium was Intel's first mass-market 64-bit CPU. It was supposed to be a complete replacement for the x86 architecture, as Intel saw severe limitations in x86 and believed it would turn into a giant ball of band-aids if they continued (i.e. where we are today...).

                Itanium was a clean-sheet design from scratch. Itanium is 64-bit... but it is not x86_64. It was 64-bit to the core (that just sounds cool); the key takeaway is that it was not an extension of x86 but a completely new, full-width 64-bit architecture, and it could not run 32-bit code (at least not initially). This is a key point: any x86 instruction had to be emulated. As anyone who has tried to run a complex architecture in QEMU on x86 without acceleration (e.g. PowerPC) knows, that is very slow and not trivial. As a result, the initial Itanium implementation just sucked hard at running x86. Later versions of Itanium did add x86 acceleration support, but it was never as seamless or performant as desired.

                From a marketing perspective, AMD did far better: they released a fully backwards-compatible extension to x86 with their AMD64 (x86_64) extension (i.e. they chose to add band-aids to the ball). This was a far less expensive proposition than a brand-new architecture, and since Intel had no desire to open Itanium to other groups to share the cost and risk of the new architecture, AMD was able to pull the rug out from under it. AMD64 was a far more open and accessible extension to x86, and one that AMD still dominated.
                That is what I remember, too. What is fascinating to me is that at the time Intel introduced the Itanium, the vast majority of code out there was x86, with some 16-bit thrown in. Why Intel thought that everybody would drop all of their x86 code and move to Itanium was a mystery to me at the time, especially since AMD was providing some serious competition. Not only had they introduced the first true 64-bit x86 processor, they had also recently shipped the first processor to break the 1 GHz threshold.
                GOD is REAL unless declared as an INTEGER.



                • #9
                  Originally posted by vladpetric View Post

                  Itanium failed because the compiler, at compile time, can't schedule the cache hierarchy. It can do a passable job at other things, but caches... nope. Way too dynamic, even if you have decent profiling data.

                  Up until the early 2000s, it was OK to have an in-order processor, which simply stalled whenever there was a cache miss. EPIC/VLIW is also an in-order design.

                  Today, it's not - dynamic scheduling to hide the cost of cache misses all the way out to memory (end-to-end DRAM latency is about 100 ns, while a CPU cycle on a processor running at 5 GHz is a fifth of a nanosecond) is a must.
                  This. A VLIW-style architecture might work well for a DSP, where you can carefully tune the code for the exact workload, but for a general-purpose architecture it's a massive failure. Compilers were never able to efficiently pack instructions into bundles for general-purpose code, leading to lots of NOPs and thus wasted instruction bandwidth.

                  And once you go to out-of-order (OoO) hardware, that bundled instruction encoding is just a waste.
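                  The packing problem can be shown with a toy scheduler. This is a simplified sketch, not real IA-64 encoding (which has bundle templates and stop bits that this ignores): it greedily packs a dependent instruction stream into 3-slot bundles, padding with NOPs whenever a dependency blocks a slot.

```python
# Toy VLIW bundle packer: 3 issue slots per bundle.
# An instruction may only go in a bundle if everything it depends on
# was issued in an *earlier* bundle (no same-bundle forwarding).
def pack(instrs, width=3):
    done = set()           # instructions issued in completed bundles
    bundles = []
    pending = list(instrs)
    while pending:
        bundle = []
        for name, deps in list(pending):
            if len(bundle) == width:
                break
            if all(d in done for d in deps):
                bundle.append(name)
                pending.remove((name, deps))
        bundle += ["nop"] * (width - len(bundle))  # pad unused slots
        bundles.append(bundle)
        done.update(bundle)
    return bundles

# A short dependent chain, as is typical of general-purpose code:
prog = [
    ("load r1", []),
    ("add r2, r1", ["load r1"]),
    ("mul r3, r2", ["add r2, r1"]),
    ("store r3", ["mul r3, r2"]),
]
for b in pack(prog):
    print(b)
# ['load r1', 'nop', 'nop']
# ['add r2, r1', 'nop', 'nop']
# ['mul r3, r2', 'nop', 'nop']
# ['store r3', 'nop', 'nop']
```

                  Four instructions need four bundles here: 12 issue slots for 4 useful operations, i.e. two thirds of the instruction bandwidth spent on NOPs. An out-of-order machine keeps the encoding dense and discovers whatever parallelism exists at run time instead.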



                  • #10
                    Originally posted by jabl View Post

                    This. A VLIW-style architecture might work well for a DSP, where you can carefully tune the code for the exact workload, but for a general-purpose architecture it's a massive failure. Compilers were never able to efficiently pack instructions into bundles for general-purpose code, leading to lots of NOPs and thus wasted instruction bandwidth.

                    And once you go to out-of-order (OoO) hardware, that bundled instruction encoding is just a waste.
                    Yeah, for DSP the data flows are highly predictable - stream in, stream out, keep around some fixed amount of state.
