Dear lord, it's fascinating how the FUD about Itanium from decades ago is still being spouted ha ha.
The compilers were fine from the get-go, and so was the architecture once Itanium2 was released, which BTW was one of the top-performing CPUs of its generation.
IA64 wasn't particularly VLIW, as each bundle was just 3 instructions long. Besides, VLIW code analysis wasn't a "mystery"; it had been actively worked on since the 70s, and trace-scheduling compilers, even by the late 80s, were fairly good at figuring out static superscalar schedules. IA64 also provided huge rotating architectural register files for the compiler to work with.
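For anyone who hasn't seen what the compiler was actually being asked to do: here's a minimal sketch in C of the kind of branch an EPIC compiler if-converts using predication. The clamp_add function is my own made-up example, and the IA-64-style instructions in the comments are illustrative of the idea, not real compiler output.

    /* Sketch of if-conversion, the transformation behind IA-64
       predication. The C below has a data-dependent branch; the
       compiler assigns each outcome a predicate register and
       issues both branch-free, so there is nothing to mispredict.
       (Hypothetical schedule, not actual compiler output.) */
    #include <stdio.h>

    static int clamp_add(int x, int y, int limit) {
        int sum = x + y;    /*        add    sum = x, y          */
        if (sum > limit)    /*        cmp.gt p1, p2 = sum, limit */
            sum = limit;    /* (p1)   mov    sum = limit         */
        return sum;         /* no branch: both outcomes fit in
                               the same statically scheduled
                               instruction group                 */
    }

    int main(void) {
        printf("%d\n", clamp_add(3, 4, 10));  /* prints 7  */
        printf("%d\n", clamp_add(30, 4, 10)); /* prints 10 */
        return 0;
    }

The flip side, of course, is that the machine spends execution slots and power on both arms of the branch, which is relevant to the power point below.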
Besides, Itanium2 also had plenty of dynamic-execution support. I just laugh at some of the discussions involving Itanium where people try to "solve" problems they don't understand, thinking that some of the top architecture/compiler teams in the industry (Intel/HP) somehow missed the basics of computer organization and the intro compiler class.
What sank Itanium wasn't performance or compilers. What did the architecture in was poor downward scalability, due to the high power consumption that came with predication; poor backwards compatibility with what was then the largest software library in the world (x86); and high cost, since it couldn't tap the same economies of scale as x86.
Once AMD figured out how to extend x86 to 64-bit with an architecture that kept 32-bit performance intact, IA64 had little value proposition in comparison.
But from a purely architectural performance standpoint, Itanium wasn't a bad design, since it managed to match or outperform its contemporary out-of-order high-performance RISC rivals in its original use cases. It's just that when IA64 started, in the early 90s, neither Intel nor HP expected x86 to be so scalable as to have itself evolved into an out-of-order, high-performance, 64-bit architecture by the beginning of the century. Props to AMD's arch team for that.