QEMU 2.11-RC1 Released: Drops IA64, Adds OpenRISC SMP & More

  • s_j_newbury
    replied
    Originally posted by jacob View Post

    There is no way compiler tech could ever be where Itanium needed it, because scheduling instructions for such an ISA in the general case is equivalent to the Turing machine halting problem. It was a horrible idea from the start and frankly I'm amazed that someone managed to get HP and Intel to take it seriously at all.

    Now don't get me wrong: an explicitly parallel core is GOOD(tm), but the parallel dispatch must occur dynamically, not at compile time. All modern CPUs do that and it's called hyperthreading. An instruction queue is continuously filled from two (or more) *INDEPENDENT* program threads and, after translation, these instructions are fed to a two-way (or more) internal EPIC/VLIW core. Because unlike the compiler, the internal dispatcher has a runtime view of the pipelines and current latencies, it can send individual instructions on the fly as needed. That's the right way to do it.

    As for the Alpha, it is important to remember that there was nothing magical about it. It was a stock RISC ISA, just like MIPS, SPARC etc. What made it so fast had nothing to do with its design, it came from the fact that the core was largely drawn by hand and hand-optimised to the last degree. That and a "dumb" (and thus quick) but large cache allowed it to reach 500MHz at a time when 100MHz was the norm.
    All good points. The Alpha did introduce new features and technologies that made it into later architectures though, even just the EV7 interconnect which AMD licensed as their HyperTransport* link in the K8 (amd64). The team working on it did know a few things about CPU design, I guess that was my point, rather than anything magical.

    I'm not entirely convinced though. The trend in mainstream CPU design has certainly been to increase complexity of the on die run-time management of program execution while simplifying the execution units themselves and having more of them. This does form a bottleneck in the execution as dispatch can only occur as fast as the dispatcher can handle and offload fetching, decoding, scheduling/reordering etc despite this being parallelised as much as possible. This limits the number of actual execution units that can be integrated together to improve performance. Simplifying CPU design by performing operations like instruction scheduling at compile time is part of reducing the overhead of program flow management so that ideally each execution unit can eventually act as a node in a self-organising network, like the neural network of synapses and neurons in a biological brain. This network on a chip concept is actually what's used with the Sunway TaihuLight supercomputer.

    * This seems to have been revised from history, certainly there's no mention of it on the relevant Wikipedia entry. But I'm certain it was the case, and there are still Google hits to back it up.



  • timofonic
    replied
    I hope Intel has sorted out iGVT-g support in QEMU, using dma_buf or whatever.

    And about Nvidia... please don't be as obstructive as usual...

    And about AMD: please follow the Intel route, but do it better and faster for the first time...



  • ruthan
    replied
    HP-UX is dead and AIX is next.



  • jacob
    replied
    Originally posted by s_j_newbury View Post
    With IA64 dropped from QEMU I think that's a pretty conclusive nail in its coffin. It's quite sad how IA64 died so young after being used as the justification for murdering competing projects like DEC's Alpha. Compiler tech is probably now about where the Itanium needed it to be to work effectively, and indeed the last iterations of IA64 did work well, although possibly not as well as a contemporary Alpha would have..? There is of course the rumour that the CPUs used in the world's current fastest supercomputer are derived from the Chinese reverse-engineering the Alpha!
    There is no way compiler tech could ever be where Itanium needed it, because scheduling instructions for such an ISA in the general case is equivalent to the Turing machine halting problem. It was a horrible idea from the start and frankly I'm amazed that someone managed to get HP and Intel to take it seriously at all.

    Now don't get me wrong: an explicitly parallel core is GOOD(tm), but the parallel dispatch must occur dynamically, not at compile time. All modern CPUs do that and it's called hyperthreading. An instruction queue is continuously filled from two (or more) *INDEPENDENT* program threads and, after translation, these instructions are fed to a two-way (or more) internal EPIC/VLIW core. Because unlike the compiler, the internal dispatcher has a runtime view of the pipelines and current latencies, it can send individual instructions on the fly as needed. That's the right way to do it.

    As for the Alpha, it is important to remember that there was nothing magical about it. It was a stock RISC ISA, just like MIPS, SPARC etc. What made it so fast had nothing to do with its design, it came from the fact that the core was largely drawn by hand and hand-optimised to the last degree. That and a "dumb" (and thus quick) but large cache allowed it to reach 500MHz at a time when 100MHz was the norm.



  • thunderbird32
    replied
    Originally posted by squash View Post
    Pretty funny when someone drops your actively sold and developed CPU architecture that has been shipping for 16 years and replaces it with SMP support for a core that doesn't exist in silicon.
    It is actively sold, but I wouldn't say IA64 is still actively developed. Intel has said (unless they've backtracked) the current Itanium is the final iteration.



  • torsionbar28
    replied
    Originally posted by s_j_newbury View Post
    With IA64 dropped from QEMU I think that's a pretty conclusive nail in its coffin. It's quite sad how IA64 died so young after being used as the justification for murdering competing projects like DEC's Alpha. Compiler tech is probably now about where the Itanium needed it to be to work effectively, and indeed the last iterations of IA64 did work well, although possibly not as well as a contemporary Alpha would have..? There is of course the rumour that the CPUs used in the world's current fastest supercomputer are derived from the Chinese reverse-engineering the Alpha!
    Pretty sure QEMU is not even on the radar of IA64 vendors or customers, lol. The main problem with IA64 was that performance was abysmal during its first few years, and in that time AMD invented the x86-64 Opteron. The market had a choice: a complicated new architecture that doesn't perform well, or 64-bit goodness and backwards compatibility for familiar old x86. That's a no-brainer. No wonder in its 16-year history IA64 never made it higher than 4th place in market share (behind x86-64, POWER, and SPARC). It was the answer to a question nobody asked. With 64-bit CPUs, Intel fumbled, and AMD picked up the ball and ran with it.

    HP has been the only vendor shipping IA64 servers since 2015. And the only OS they still support on it, HP-UX, is just as dead. The current version, 11.31, has been around for over a decade now with only minimal updates, and there are no plans for any new major version. Microsoft, Red Hat, and everyone else dropped support for IA64 many years ago, because nobody aside from HP is making IA64-based servers any longer.

    IA64 and HP-UX, a marriage made in computing hell. Good riddance to both. Even Intel has admitted IA64 is dead - look at the latest 2017 (final) version of it, code-named Kittson. It offers literally no change whatsoever over the previous model. The highest-clocked model gained 100 MHz. Whoop-de-doo. No process shrink, still 32 nm. No new memory controller, still DDR3. No new instructions. No changes at all. It's a marketing rebranding exercise. The IA64 engineering staff have all been laid off or reassigned.

    PS. I have an active HP-UX Certified Systems Administrator certification, and worked for HP for many years.
    Last edited by torsionbar28; 15 November 2017, 02:41 PM.



  • squash
    replied
    Pretty funny when someone drops your actively sold and developed CPU architecture that has been shipping for 16 years and replaces it with SMP support for a core that doesn't exist in silicon.



  • AndyChow
    replied
    Originally posted by thunderbird32 View Post
    Unsurprising they dropped IA64 as even Intel has basically admitted it's a dead-end platform. Weird to see them drop AIX though.
    They don't have the hardware to test it. You can read about it here. I know a few people who use AIX daily, and it's pretty much still rock solid.



  • thunderbird32
    replied
    Unsurprising they dropped IA64 as even Intel has basically admitted it's a dead-end platform. Weird to see them drop AIX though.



  • s_j_newbury
    replied
    With IA64 dropped from QEMU I think that's a pretty conclusive nail in its coffin. It's quite sad how IA64 died so young after being used as the justification for murdering competing projects like DEC's Alpha. Compiler tech is probably now about where the Itanium needed it to be to work effectively, and indeed the last iterations of IA64 did work well, although possibly not as well as a contemporary Alpha would have..? There is of course the rumour that the CPUs used in the world's current fastest supercomputer are derived from the Chinese reverse-engineering the Alpha!

