Intel Has A Single-Chip Cloud Computer


  • #21
    Originally posted by movieman View Post
    Personally I've never thought that Larrabee made much sense -- why put x86 instructions into a massively parallel architecture if you could use a new instruction set and eliminate all the complex instruction decoding? -- but I wouldn't write it off yet.
    My guess is that Intel views Larrabee as a jumping point into the same HPC computing space that NVidia seems to be betting their company on, rather than just a video card. If you view the hardware as being for more general purposes and not just 3D acceleration then keeping the x86 ISA could become a selling point.

    Comment


    • #22
      Originally posted by smitty3268 View Post
      My guess is that Intel views Larrabee as a jumping point into the same HPC computing space that NVidia seems to be betting their company on, rather than just a video card. If you view the hardware as being for more general purposes and not just 3D acceleration then keeping the x86 ISA could become a selling point.
      I'm thinking that Larrabee is more of an R&D product whose advancements will show up in other venues like the CPU, much like the i740 graphics card, which had technologies that later carried on in the Intel GMA line of IGPs.

      Comment


      • #23
        Originally posted by movieman View Post
        Of course they may decide never to build a second version with the lessons they've learned from this one, but they haven't killed Itanium yet so Larrabee may still be with us a decade or two from now.
        Also, given that Itanic is effectively being kept 'alive' (and I use that term generously) by HP, I would say it has been for all intents and purposes dead for years. The last chip was built on 90nm and its successor has yet to be seen. Compaq kept Alpha 'alive' for years too, and we all know what happened to it.

        Comment


        • #24
          Originally posted by deanjo View Post
          Also, given that Itanic is effectively being kept 'alive' (and I use that term generously) by HP, I would say it has been for all intents and purposes dead for years. The last chip was built on 90nm and its successor has yet to be seen. Compaq kept Alpha 'alive' for years too, and we all know what happened to it.
          Yeah, Intel bought Alpha and killed it...

          There are very good reasons for maintaining several major architectures. Itanic may have its problems, but if you killed it completely, it would be a loss for anyone interested in diversity in IT.

          Comment


          • #25
            Originally posted by RobbieAB View Post
            Yeah, Intel bought Alpha and killed it...

            There are very good reasons for maintaining several major architectures. Itanic may have its problems, but if you killed it completely, it would be a loss for anyone interested in diversity in IT.
            Oh, I have nothing against diversity, but the market that Itanic was supposed to address got clubbed with a 400-pound pole when AMD brought out their 64-bit solution. Bang for the buck, it pretty much snuffed IA-64 out of real existence. UltraSPARC and PPC thrived better than Itanic ever did. IDC predicted IA-64 system sales would reach $38bn/yr by 2001, but to my knowledge Itanic's peak was around $1 billion in sales in 2004.
            Last edited by deanjo; 05 December 2009, 06:56 PM.

            Comment


            • #26
              Well... IA64 was also being beaten by Alpha until Intel bought and killed it. IA64 had major issues, but if we consider the number of "big chip" designs now against 10 years ago, it's a worrying trend. Alpha and MIPS effectively dead; PPC, Sparc, and IA64 essentially gone from the workstation market. AMD64, good as it is, is still carrying handicaps deriving from its x86 origins. Admittedly, Itanic is a tad irrelevant in the context of that trend as it's an Intel chip.

              On a different level, one has to wonder how much of what Intel learned from the Itanic project has since been fed back into their x86(_64) chip range.

              Comment


              • #27
                Originally posted by smitty3268 View Post
                If you view the hardware as being for more general purposes and not just 3D acceleration then keeping the x86 ISA could become a selling point.
                But if you're going to have to recompile anyway, then you don't care what the underlying instruction set is, just how fast it can execute your code, which will almost certainly be faster if you can eliminate all those transistors and pipeline stages required to decode the complex x86 instruction set.

                Comment


                • #28
                  Originally posted by RobbieAB View Post
                  On a different level, one has to wonder how much of what Intel learned from the Itanic project has since been fed back into their x86(_64) chip range.
                  Which is also why I said I see Larrabee more as an R&D project whose tech will eventually be carried on into other Intel products.

                  Comment


                  • #29
                    Originally posted by movieman View Post
                    But if you're going to have to recompile anyway, then you don't care what the underlying instruction set is, just how fast it can execute your code, which will almost certainly be faster if you can eliminate all those transistors and pipeline stages required to decode the complex x86 instruction set.
                    Makes me wonder... if x86 is such a bad thing, why has nobody yet produced an x86 chip where you can switch off the translation layer?

                    Comment


                    • #30
                      Originally posted by Ant P. View Post
                      Makes me wonder... if x86 is such a bad thing, why has nobody yet produced an x86 chip where you can switch off the translation layer?
                      Because the translation layer is what makes it run at a reasonable speed.

                      Essentially it dynamically decodes the complex x86 instructions into a sequence of simple RISC-type instructions as they're executed. I don't know about AMD, but recent Intel chips cache the translated instructions so they don't need to decode the same x86 instructions over and over.
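                      A toy sketch (in Python, purely illustrative and nothing like real hardware) of what that kind of translation does: a complex instruction with a memory operand is split into simpler load/modify/store micro-ops, and a small dictionary keeps already-decoded instructions around, loosely mirroring the cached-translation idea above. All instruction and micro-op names below are made up for the example.

                      # Toy model of CISC-to-micro-op translation with a decode cache.
                      # Everything here is invented for illustration; real decoders work in hardware.
                      UOP_CACHE = {}

                      def decode(insn):
                          # Serve already-translated instructions from the cache (no decode work).
                          if insn in UOP_CACHE:
                              return UOP_CACHE[insn]
                          op, dst, src = insn.replace(",", "").split()
                          if dst.startswith("["):
                              # Memory destination: split into load / modify / store micro-ops.
                              addr = dst.strip("[]")
                              uops = [f"load  t0, [{addr}]",
                                      f"{op}   t0, t0, {src}",
                                      f"store [{addr}], t0"]
                          else:
                              # Register-to-register form is already close to a single RISC-type op.
                              uops = [f"{op} {dst}, {dst}, {src}"]
                          UOP_CACHE[insn] = uops
                          return uops

                      print(decode("add [rbx], rax"))   # expands to load, add, store micro-ops
                      print(decode("add rcx, rdx"))     # stays a single micro-op
                      print(decode("add [rbx], rax"))   # second time: served from the cache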

                      Comment
