Intel Itanium IA-64 Support To Be Deprecated By GCC 10, Planned Removal In GCC 11


  • Bsdisbetter
    replied
    Originally posted by cjcox View Post
    I smell HP landfill!

    It's sad, but there's still a lot of IA64 out there, and now it's destined for the poop pile. One of the things I love about Linux is how it can keep really old equipment (especially equipment from a bad vendor) out of our landfills (yes, I suppose a good scrapper might be able to get something out of them).
    You'd be surprised. More corporate PCs go to landfill than high-end servers; that's just a matter of volume. With high-cost systems like these, the propensity is for corporations to keep them running for decades (and decades). If you pay HP enough money they'll provide support for dinosaur poo, if you need it.



  • Bsdisbetter
    replied
    Originally posted by edwaleni View Post
    Most customers are moving legacy apps on HP-UX IA-64 to containers. Then they can let the clock run out on the hardware as HP allows. This gives them time to develop the replacements on the platform of their choice. Eazy-peasy.
    That might be true for HP-UX but not OpenVMS, which (traditionally) has a long support roadmap, so corporations have time. Similarly, VMS Software is migrating the code to x86. Some corporations are still running VAX and/or Alpha clusters, 20+ years after their prime.



  • Bsdisbetter
    replied
    Originally posted by bridgman View Post

    I have a different take on this. Itanium was a good idea when it was conceived (before anyone realized just how far superscalar x86 designs could go) but had two major things going against it - one predictable, the other not so much:

    - poor performance when running x86 code (this one was IMO predictable and hurt short-term adoption, particularly after AMD64)
    - superscalar x86 implementations became incredibly wide and capable (I suspect this surprised even the teams working on them)

    The x86 emulator was a joke. True.



  • Bsdisbetter
    replied
    Originally posted by jacob View Post
    The Itanic will be remembered as one of the worst ideas in CPU design history, with a botched implementation to match.
    That would be the same non-vulnerable Intel chip series, would it?



  • bridgman
    replied
    Originally posted by jacob View Post
    A few good ideas? I can't even think of one. Except maybe some stuff like SMT, which other CPUs have too and which is basically contrary to the very concept of the Itanium. They implemented it as a band-aid trying to make up for a fundamentally terrible design.
    I have a different take on this. Itanium was a good idea when it was conceived (before anyone realized just how far superscalar x86 designs could go) but had two major things going against it - one predictable, the other not so much:

    - poor performance when running x86 code (this one was IMO predictable and hurt short-term adoption, particularly after AMD64)
    - superscalar x86 implementations became incredibly wide and capable (I suspect this surprised even the teams working on them)

    There were also questions at the time, IIRC, about whether the compilers were really doing a sufficiently good job of extracting parallelism from the code. Not sure how that worked out, but my impression was that there was simply not enough VLIW hardware on the market to build a critical mass of compiler technology to support it.

    Modern x86 designs are mind-blowingly complex and clever. Not only are they able to pick 5-10 operations out of a single instruction stream to execute in parallel (at peak) but they pick through a 100+ instruction window in real time to accomplish that.
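
    To make that concrete, here's a minimal sketch (my own plain-C example, nothing Itanium-specific). The four multiplies below are independent, so a wide out-of-order core can issue them in parallel out of a single instruction stream; the adds form a short dependency chain that serializes no matter how wide the machine is. A VLIW compiler would have to discover that same independence statically and encode it into instruction bundles.

    Code:
    #include <stdio.h>

    /* Four independent multiplies: no product feeds another, so an
       out-of-order scheduler can dispatch all of them at once. */
    static double dot4(const double a[4], const double b[4])
    {
        double p0 = a[0] * b[0];
        double p1 = a[1] * b[1];
        double p2 = a[2] * b[2];
        double p3 = a[3] * b[3];

        /* The adds depend on the products; the tree shape keeps the
           serial chain at two additions instead of three. */
        return (p0 + p1) + (p2 + p3);
    }

    int main(void)
    {
        const double a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8};
        printf("%f\n", dot4(a, b));   /* prints 70.000000 */
        return 0;
    }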

    A lot of superscalar CPU technology builds on Robert Tomasulo's work at IBM in the 1960s on the 360/91... I never had a chance to meet him (big regret) but I imagine even he would have been surprised at how far his ideas could be extended.

    One of the more interesting things about x86 history is how out-of-order execution showed up more or less simultaneously in designs from Cyrix, Intel and AMD during late 1995 and early 1996... 30-ish years after it appeared in mainframes. Wikipedia says that Cyrix was first to market, which is pretty impressive.
    Last edited by bridgman; 15 June 2019, 05:37 AM.



  • Guest
    Guest replied
    Originally posted by milkylainen View Post
    Dead as a dodo. Intel's best effort to sideline its own x86 show, now defunct.
    But I'm sure x86 will die any day now...
    I don't know, x86 is good enough for air-gapped gaming devices. Make it into something that people will only use in their PlayStation or something.



  • HyperDrive
    replied
    Originally posted by carewolf View Post

    VLIW seemed like a good idea in the late 1980s if you never imagined out-of-order superscalar processors, but by the time Itanium was designed, we already had superscalar processors [...]
    I'm not entirely convinced VLIW architectures are dead. I have yet to find any show-stopping flaws in the Mill, for example (though I still have some doubts about the spiller; I'd love to see more details about it).



  • torsionbar28
    replied
    Originally posted by jacob View Post
    Meanwhile AMD feared that if the Itanic caught up, it would indeed lose its market. They created AMD64 as a desperate attempt to stay relevant by offering the first affordable 64-bit CPU for the masses. So Intel had its Itanic and AMD had a design that offered excellent performance at a fraction of Intel's cost and, at the same time, had basically no downsides, as in the absolute worst case it would just run 32-bit software like any ordinary x86 processor with no penalty. The rest, as they say, is history.
    You must not have been around back then. AMD64 was anything but a 'desperate attempt'; it was AMD's brilliant response to Intel's failed market strategy. AMD played the chess pieces like a grand master while Intel (and HP) threw good money after bad on a failed architecture. Itanium was so bad that it cost thousands of dollars (for even the cheapest part), yet it underperformed a desktop Pentium on launch day and never was able to catch up. With each iteration, Intel claimed the next one would be better. And year after year, it never was. HP went whole hog into Itanium as a PA-RISC replacement, so their sizable HP-UX market share depended on Itanium's success... which never came. Believe me, I know: I worked at HP specifically on this stuff for over a decade, during the transition from PA-RISC to IA-64.

    If anything, Itanium was Intel's desperate attempt to displace AMD, as AMD was producing faster and cheaper 32-bit x86 chips than Intel at the time and was winning the GHz race with their excellent Athlon chips. Remember AMD's "Super Socket 7" chips? While Intel topped out at 233 MHz with the Pentium MMX, AMD was up to 550 MHz with the K6-III. Of course, the Athlon 64 and Opteron with the AMD64 instruction set only furthered AMD's lead. It wasn't until ~2006 that Intel finally dumped their garbage NetBurst P4 architecture and replaced it with the Core Duo, and then the Core 2 Duo, which cloned the AMD64 instructions. If you recall, NetBurst P4s were so terrible that they regularly overheated, couldn't compete with the Athlon on performance, and only scaled to 3.8 GHz max. Intel had claimed that NetBurst would scale to 10 GHz. Yes, Intel actually said that: TEN gigahertz! So not only was Intel failing hard on IA-64, they were also failing hard in their IA-32 department, from the late 1990s through ~2006.
    Last edited by torsionbar28; 14 June 2019, 11:31 AM.



  • torsionbar28
    replied
    "Considering the GCC compiler is used to compile the Linux kernel and IA-64 doesn't enjoy coverage from other compilers like Clang able to build the Linux kernel, it will effectively mean the end of the road for new Linux support moving forward."

    Pretty sure when RHEL and SLES stopped supporting IA-64 a while back, that marked the end of the road for Linux on IA-64. This is enterprise hardware, nobody is running Gentoo on these things lol.



  • carewolf
    replied
    Originally posted by jacob View Post

    I don't remember all the details but it was actually pretty simple. The VLIW/EPIC concept originated at HP. Someone must have smoked something particularly heavy to think it would ever be a good idea
    VLIW seemed like a good idea in the late 1980s, if you never imagined out-of-order superscalar processors, but by the time Itanium was designed we already had superscalar processors(*). Comparing VLIW's statically scheduled "super-scaling" to dynamic super-scaling, static scheduling only wins on power consumption and fails terribly on performance, and Itanium wasn't meant to be a low-power / low-performance processor. Transmeta made a better gamble there, but also failed, again mostly due to performance.

    (*) Not only did we have it; with it, x86 had started killing off every otherwise superior architecture that didn't.
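
    A minimal sketch of why that static scheduling is so hard (my own C example, not from the post): unless the compiler can prove two pointers never alias, it cannot reorder memory operations that an out-of-order core happily reorders at run time after comparing the actual addresses. IA-64's answer was to push that work into the compiler with speculative "advanced loads" (ld.a/chk.a), which is part of why the compilers had such a hard job.

    Code:
    /* Unless the compiler can prove dst and src never overlap, the
       store to dst[0] might feed the load of src[1], so a static
       (VLIW) scheduler cannot hoist that load up into the same
       bundle as the first multiply. */
    void scale2(float *dst, const float *src, float k)
    {
        dst[0] = src[0] * k;
        dst[1] = src[1] * k;   /* load of src[1] stuck behind the store */
    }

    /* 'restrict' hands the compiler a no-alias guarantee, so both
       loads and both multiplies can be scheduled in parallel: */
    void scale2_r(float *restrict dst, const float *restrict src, float k)
    {
        dst[0] = src[0] * k;
        dst[1] = src[1] * k;
    }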
    Last edited by carewolf; 14 June 2019, 10:25 AM.

