Intel Has A Single-Chip Cloud Computer


  • Intel Has A Single-Chip Cloud Computer

    Phoronix: Intel Has A Single-Chip Cloud Computer

    Intel demonstrated today their "single-chip cloud computer" processor that offers an impressive 48 cores. While there are 48 cores on a single chip, this Intel processor only consumes as much power as two standard household light bulbs (this "futuristic" processor is operating between 25 and 125 Watts)...

    http://www.phoronix.com/vr.php?view=Nzc2NQ

  • #2
    Does anyone else completely fail to see what this has to do with cloud computing? Isn't cloud computing all about internet-based computing?



    • #3
      It does seem like a bit of marketing spin, but if this processor can reduce the number of boxes needed at a datacenter to support "cloud computing" then it might make some sense as a label.



      • #4
        "Cloud computing" as a term isn't worth much any longer... everybody uses it to get in on the hype. By my definition, "cloud computing" is little more than a way of saying something is new :P



        • #5
          Larrabee

          Interesting number, 48 cores. That's exactly what their 'Larrabee' product is supposed to have. My guess is that it's just the same thing.



          • #6
            They compare it to a "cloud" because the 48 cores do not share caches and use message-passing to communicate, making it more like "48 separate CPUs on a chip" than "48 cores".

            Besides: 48 cores on a research CPU is nothing.

            Azul Systems has been selling a 54-core CPU, the Vega 3, since 2008. They use it in dedicated Java accelerator boxes with up to 864 cores per system.
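To make the "no shared caches, message passing only" point concrete, here is a rough sketch in Python (threads and queues standing in for cores and the on-chip network; the function name is made up and this has nothing to do with Intel's actual interface):

```python
# Rough analogy for the SCC's message-passing model: workers
# ("cores") never touch shared data structures; they communicate
# only by sending and receiving messages over queues.
import threading
import queue

def worker(inbox, outbox):
    while True:
        msg = inbox.get()
        if msg is None:            # shutdown message
            return
        outbox.put(msg * msg)      # reply by message, not via shared state

def square_via_messages(values):
    inbox, outbox = queue.Queue(), queue.Queue()
    t = threading.Thread(target=worker, args=(inbox, outbox))
    t.start()
    results = []
    for v in values:
        inbox.put(v)               # send work to the "neighbour core"
        results.append(outbox.get())
    inbox.put(None)
    t.join()
    return results
```

For example, `square_via_messages([2, 3, 4])` returns `[4, 9, 16]`, with every value having crossed the "network" twice instead of being read from shared memory.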



            • #7
              I have a feeling that this is somehow a crippled CPU.

              The link says it only has 4 memory banks, so compared with today's CPUs, each core has much lower memory bandwidth.

              So I guess you will be able to find cases where an 8-core CPU can kick this CPU's ass.

              Maybe for a very specialized type of calculation, where everything is pure computation with very little data, this CPU will be good.

              But I really can't think of any such cases.



              • #8
                Originally posted by wateenellende View Post
                Interesting number, 48 cores. That's exactly what their 'Larrabee' product is supposed to have. My guess is that it's just the same thing.
                Not quite. Larrabee is closer to the PS3 model, where you have one general-purpose, fast CPU and then many smaller specialized cores which are not full-featured x86.

                Here all the cores are full x86, but the catch is very, very bad memory bandwidth: 48 cores have to share 4 memory banks.
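For scale (with a made-up aggregate bandwidth figure, since the article doesn't give one), the division is brutal:

```python
# Hypothetical numbers, just to show the scaling problem: whatever
# bandwidth the 4 banks deliver in total gets split across all 48
# cores when they hit memory at once.
total_bw_gb_s = 21.0            # assumed aggregate bandwidth, not a real spec
cores = 48
per_core = total_bw_gb_s / cores
print(per_core)                 # 0.4375 GB/s per core on average
```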



                • #9
                  Originally posted by Louise View Post
                  Not quite. Larrabee is closer to the PS3 model, where you have one general-purpose, fast CPU and then many smaller specialized cores which are not full-featured x86.
                  Except Larrabee is intended to be a GPU, and that silly Cell processor was intended to be a main CPU (even though it shares many characteristics with GPUs).



                  • #10
                    Originally posted by thefirstm View Post
                    Except Larrabee is intended to be a GPU, and that silly Cell processor was intended to be a main CPU (even though it shares many characteristics with GPUs).
                    Yes, and Sony has finally realised that. Try searching for "IBM, Cell, cancel".

                    So my guess is that the PS4 will have a Larrabee solution, as Sony likes to try out new technologies, and the Xbox 720 will have an AMD solution, since nVidia is very likely not even interested in working with MS again after the original Xbox, where MS trashed millions of southbridges and handed nVidia a loss for one quarter.



                    • #11
                      Hmm... which is stronger: 48x 8086 or 1x 80386?

                      This really doesn't mean much.
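Amdahl's law is one way to make that question precise: a pile of weak cores only wins if the workload is almost entirely parallel. A quick sketch:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Speedup over a single core when only `parallel_fraction`
    of the work can be spread across `cores` (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even with 48 cores, a 90%-parallel workload gains less than 9x:
print(amdahl_speedup(0.90, 48))   # ~8.42
```

So if the serial part of the work is large enough, the single fast core really does beat the 48 slow ones.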



                      • #12
                        Originally posted by Louise View Post
                        Yes, and Sony has finally realised that. Try searching for "IBM, Cell, cancel".

                        So my guess is that the PS4 will have a Larrabee solution, as Sony likes to try out new technologies, and the Xbox 720 will have an AMD solution, since nVidia is very likely not even interested in working with MS again after the original Xbox, where MS trashed millions of southbridges and handed nVidia a loss for one quarter.
                        Microsoft and AMD together. Yet another reason not to buy products from either of them.



                        • #13
                          AMD Core Counts and Bulldozer: Preparing for an APU World
                          http://www.anandtech.com/cpuchipsets...oc.aspx?i=3683

                          Who are these guys? And where do they get those wonderful toys from?

                          AMD did add that eventually, in a matter of 3 - 5 years, most floating point workloads would be moved off of the CPU and onto the GPU. At that point you could even argue against including any sort of FP logic on the "CPU" at all. It's clear that AMD's design direction with Bulldozer is to prepare for that future.
                          I wonder what that would mean for programmers.
                          Last edited by Louise; 12-03-2009, 05:46 PM.



                          • #14
                            I'm not sure moving *all* of the floating point logic off the CPU would ever make sense, since all kinds of programs use floating point variables and you still want those programs to run without falling back to SW floating point emulation.

                            I believe the discussion is more about the SIMD instruction extensions, which work on explicitly vectorized code.
                            Last edited by bridgman; 12-03-2009, 06:16 PM.
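A loose Python analogy (hypothetical function names, illustration only) for the scalar-vs-vectorized split: scalar FP code touches one value at a time and belongs on the CPU, while explicitly vectorized code states one operation over a whole array, which is the shape SIMD units (or a GPU) can chew through:

```python
# Hypothetical illustration only: scalar code processes one float
# per step; "vectorized" code expresses the operation over the
# whole array, the pattern a compiler can map onto SIMD instructions.

def scale_scalar(xs, k):
    out = []
    for x in xs:                  # one multiply at a time (scalar FP)
        out.append(x * k)
    return out

def scale_vectorized(xs, k):
    return [x * k for x in xs]    # whole-array operation in one expression
```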



                            • #15
                              Originally posted by bridgman View Post
                              I'm not sure moving *all* of the floating point logic off the CPU would ever make sense, since all kinds of programs use floating point variables and you still want those programs to run without falling back to SW floating point emulation.
                              Hopefully for Linux programmers it will just be a case of
                              Code:
                              sed -i 's/%f/%f_gpu/g' *


                              Are there other things that GPUs are unquestionably better at than CPUs?

                              I suppose that performing FP calculations is very parallelizable?

                              Originally posted by bridgman View Post
                              I believe the discussion is more about the SIMD instruction extensions, which work on explicitly vectorized code.
                              So something like the Cell architecture?
