
Itanium IA-64 Was Busted In The Upstream, Default Linux Kernel Build The Past Month


  • #41
Originally posted by microcode
In reality, Intel had money out the wazoo at that time; if compiler engineers could have figured out how to run general-purpose code quickly on an in-order VLIW without mountains of NOP slots and I$ abuse, Intel would have hired them.

If there is any place in general-purpose software for VLIW, it is as a supplement or assist to OoO, rather than a replacement for it.
If you haven't looked at the Mill, I'd suggest you do. The answer is pretty weird: elided no-ops, implicit destination registers, model-specific entropy-optimized binary encoding, a split instruction stream, and dual I$. The bigger question is whether they'll ever get enough funding, and whether their load solutions are enough to overcome cache nondeterminism.
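For context on the NOP-slot problem mentioned above, here's a toy sketch (not any real ISA; all names and the 3-wide bundle width are hypothetical) of why a naive in-order VLIW pads issue slots with NOPs: any op that depends on an earlier result has to wait for a later bundle, and the empty slots still take up instruction-cache space.

```python
# Toy greedy VLIW bundler: ops with unmet dependencies cannot
# issue in the current bundle, so leftover slots become NOPs.
BUNDLE_WIDTH = 3  # hypothetical 3-wide machine

def schedule(ops):
    """ops: list of (name, set_of_dependency_names).
    Packs ops greedily into bundles; an op issues only after
    all of its dependencies issued in an earlier bundle."""
    bundles, done, pending = [], set(), list(ops)
    while pending:
        bundle, issued = [], []
        for name, deps in pending:
            if deps <= done and len(bundle) < BUNDLE_WIDTH:
                bundle.append(name)
                issued.append((name, deps))
        for op in issued:
            pending.remove(op)
        done.update(name for name, _ in issued)
        # Pad the unused slots -- this is the I$ bloat.
        bundle += ["nop"] * (BUNDLE_WIDTH - len(bundle))
        bundles.append(bundle)
    return bundles

# A short dependent chain: b needs a, c needs b, d is independent.
prog = [("a", set()), ("b", {"a"}), ("c", {"b"}), ("d", set())]
for bundle in schedule(prog):
    print(bundle)
```

Four useful ops end up occupying nine slots; the five NOPs are pure code-size overhead, which is roughly the density problem Itanium compilers fought.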



    • #42
Originally posted by WorBlux

If you haven't looked at the Mill, I'd suggest you do. The answer is pretty weird: elided no-ops, implicit destination registers, model-specific entropy-optimized binary encoding, a split instruction stream, and dual I$. The bigger question is whether they'll ever get enough funding, and whether their load solutions are enough to overcome cache nondeterminism.
Yeah, the dual I$/symmetric basic block thing is super cool; I have very low expectations of the Mill in practice, though. It is absolutely perfect vaporware, no offense to the wise man.

      The thing that tells me it's not going to happen is that, as far as I can tell, they haven't shown a functioning demo of any form. Not a simulator, not an implementation on FPGA, and zero tapeouts to date.



      • #43
Originally posted by microcode

Yeah, the dual I$/symmetric basic block thing is super cool; I have very low expectations of the Mill in practice, though. It is absolutely perfect vaporware, no offense to the wise man.

        The thing that tells me it's not going to happen is that, as far as I can tell, they haven't shown a functioning demo of any form. Not a simulator, not an implementation on FPGA, and zero tapeouts to date.
We'll see; I've been following it fairly closely. It's been slow, but I'm expecting another 4-5 patents* to drop this year, along with some sort of announcement. It's definitely still active, if not particularly visible. *At least one dealing w/ coherence, and one dealing with scalable vectors/streams.

        I understand the skepticism, but I wouldn't count them out quite yet.
        Last edited by WorBlux; 19 January 2021, 06:17 PM.
