
LLVM Patches Confirm Google Has Its Own In-House Processor


• #21
I strongly suspect this is their version of Intel's Phi, using even tinier ARM cores with lots of registers (and hopefully lots of floating-point performance... if that happens, then I'm nearly certain).

• #22
Why do they even release this? Do they have plans to get them out soon or something?

• #23
I guess it has something to do with their custom-made network structure. They use some sort of matrix routing in their bigger compute clusters, so every node has low-latency access to its row/column, and they therefore built custom network switches. That's probably why they have their own processor, which does the packet routing in a highly parallel fashion.

• #24
Originally posted by M@yeulC View Post
Why do they even release this? Do they have plans to get them out soon or something?

Easier to maintain: if it's mainlined, you don't need to merge it all the time when you pull from origin/master.

• #25
Originally posted by cb88 View Post
Whatever... as long as distros like gentoo, funtoo, void and alpine support 32bit. Everybody else is on the feature-creep bandwagon anyway, so they are pretty much garbage in my eyes.

Most applications don't even remotely need 64bit support... or need it because they are written badly. In fact, 32bit applications have the potential to run faster due to less wasted instruction cache and DRAM bandwidth.

64bit processors don't just have newer 64bit instructions or the ability to access > 4GiB of RAM. They have a lot more CPU registers that an application can access as well. This is why a 64bit program will usually outperform a 32bit program, as it can keep more data in the CPU.

• #26
Originally posted by M1kkko View Post
I don't think a backend like that should be merged into mainstream, as it has absolutely no use to anyone besides Google. I'd say let them do the maintaining.

It could be that they plan to expose it through Google Cloud Services. They also don't say that they won't EVER make it publicly available, just that it isn't right now.

• #27
Originally posted by cb88 View Post
It is worth mentioning that most 32bit processors have very large physical address spaces... just the address space per application is 4GB. SPARC since at least version 8 has had 32GB of address space. x86 processors have usually had 48bit physical address spaces since roughly the same era...

It's just never been practical to attach that much memory to any of those processors... nor worthwhile.

It seems to me that not attaching that much memory to 32bit CPUs is a direct consequence of how memory manufacturing technology (memory capacity) evolved over time. In the 8bit computer era with 16bit addresses (the 1970s and 1980s), it was quite common to have more than 64K of memory in the machine (ZX Spectrum 128, Atari 130XE, etc.). The same holds true for machines with 16bit Intel CPUs, which contained from 1MB to 16MB of memory while having only 16bit registers. After that, the jump from 16bit CPUs to 32bit CPUs was too fast for memory manufacturing technology to catch up (64K -> 4GB). The jump from 32bit CPUs to 64bit CPUs is so big that memory manufacturing technology may well never catch up with it within our lifetimes.

Just an interaction of exponential growth with linear growth.

• #28
Originally posted by boxie View Post
64bit processors don't just have newer 64bit instructions or the ability to access > 4GiB of RAM. They have a lot more CPU registers that an application can access as well. This is why a 64bit program will usually outperform a 32bit program, as it can keep more data in the CPU.

You are mixing concepts. CPU "bitness" has nothing to do with the number of registers. Register count is an architecture- or implementation-specific detail.

• #29
Originally posted by milkylainen View Post
You are mixing concepts. CPU "bitness" has nothing to do with the number of registers. Register count is an architecture- or implementation-specific detail.

In the x86 world, he's right. The amd64 architecture (used by nearly all PCs right now) brought 64 bits, extra registers, and new instructions in a single package. You can run your 32bit programs on it, but you lose access to the fancy new registers and instructions.

The speed gain from those registers can be offset by the larger pointers, however; it depends on your workload. This is where the X32 ABI comes in: it gives you full access to the amd64 features while keeping pointers at 32 bits. Great for speed, not so great for compatibility. Sometimes you see this implemented at the application level, like Erlang's "half-word" VM.

• #30
Originally posted by boxie View Post
64bit processors don't just have newer 64bit instructions or the ability to access > 4GiB of RAM. They have a lot more CPU registers that an application can access as well. This is why a 64bit program will usually outperform a 32bit program, as it can keep more data in the CPU.

You are thinking about x86, whose 32bit processors are ugly dogs; there the switch to 64bit came with a row of independent improvements and cleanups. To a lesser extent this is also true for ARM: like x86, its 64bit processors have a whole different instruction set, unlike MIPS, where 64bit is just added instructions (meaning they don't need multiple instruction decoders for 64bit).

32bit CPUs are still relevant and will be for a long time. I guess Google wants to get into the IoT hype; a CPU with some integrated network hardware is ideal for small sensors or actuators you place around and connect via wire or WLAN to a controlling hub. Just look at this project to get an idea: https://www.kickstarter.com/projects...-a-box-dev-kit
Last edited by discordian; 10 February 2016, 06:22 AM.
