NVIDIA Announces Grace, An ARM-Based CPU For AI/HPC


  • #81
    Originally posted by Jumbotron View Post
    Apple... Qualcomm... Samsung... Google... Broadcom...

    So that leaves Nvidia with little choice but to go with Mediatek.
    Isn't it interesting how you left out every Chinese SoC and GPU maker?

    Comment


    • #82
      Originally posted by coder View Post
      Isn't it interesting how you left out every Chinese SoC and GPU maker?
      Why ?

      Comment


      • #83
        Originally posted by coder View Post
        That's great and all, but really not so different than what they did with their Tegra SoCs that made it into some phones, tablets, a Google Pixel notebook, and finally the Nintendo Switch. At the end of the day, they're still going to be APUs, with APU-level graphics horsepower.

        What I think most of us really want to see is ARM PCs with like PCIe slots and stuff. As far as I'm aware, the closest to that is the Huawei Kunpeng-powered machine launched almost a year ago.
        Well... I would hope you know that the old Nvidia Shield is still the most performant Android TV streamer after all these years, even with an old ARM Tegra SoC and a Maxwell-based GPU.

        And I love how you equate performance with something that has antiquated "PCIe slots and stuff". How 20th Century. And old school. And irrelevant.

        The best version of Cyberpunk 2077 (confirmed by game reviews) is Google's Stadia version, streamed to a Chromebook. And not a PCIe slot or "stuff" in sight.

        Comment


        • #84
          Originally posted by oiaohm View Post
          Remember, Mediatek is China-based
          Don't you mean Taiwanese? It makes a difference! I assume they have offices & manufacturing in the mainland, but their HQ is in Hsinchu, Taiwan.

          Originally posted by oiaohm View Post
          I would not say Nvidia's deal with Mediatek is perfectly stable long-term if the USA trade dispute keeps getting worse.
          Until China invades Taiwan, they should be fine.

          Comment


          • #85
            Originally posted by Jumbotron View Post
            Why ?
            If you're trying to make projections about the competitive landscape, they cannot be ignored.

            Comment


            • #86
              Originally posted by Jumbotron View Post
              And I love how you equate performance with something that has antiquated "PCIe slots and stuff". How 20th Century. And old school. And irrelevant.
              Trying to defend with rhetoric what you can't refute with facts is not actually winning.

              The performance advantage of dGPUs is irrefutable. PCIe slots are good for other things, besides. True, they're not as essential as they once were, and eGPUs can do alright, but ARM will never overtake the PC market until there are comparably-priced, standard form-factor ARM boards with PCIe 4 slots that can run dGPUs and NVMe SSDs. Maybe you don't care about that stuff, but a lot of people do.
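
              For a sense of scale on why those PCIe 4 slots matter, here's a minimal back-of-envelope sketch. The 16 GT/s signaling rate and 128b/130b encoding are the standard PCIe 4.0 figures; the rest is arithmetic.

              ```python
              # Approximate usable one-way bandwidth of PCIe 4.0 links.
              PCIE4_GT_PER_LANE = 16e9  # PCIe 4.0 signaling rate: 16 GT/s per lane
              ENCODING = 128 / 130      # 128b/130b line-encoding overhead

              def pcie4_gbytes_per_s(lanes):
                  """Approximate payload bandwidth in GB/s for a PCIe 4.0 link."""
                  return PCIE4_GT_PER_LANE * ENCODING * lanes / 8 / 1e9

              print(f"x16 (dGPU slot): ~{pcie4_gbytes_per_s(16):.1f} GB/s per direction")
              print(f"x4  (NVMe SSD):  ~{pcie4_gbytes_per_s(4):.1f} GB/s per direction")
              ```

              That roughly 31 GB/s to a dGPU is far beyond what any eGPU enclosure or network link offered in 2021, which is the gap any streaming-only argument has to close.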

              Originally posted by Jumbotron View Post
              The best version of Cyberpunk 2077 (confirmed by game reviews) is Google's Stadia version, streamed to a Chromebook. And not a PCIe slot or "stuff" in sight.
              This is ironic, because essentially what you're saying is that GPU power is irrelevant. So then, what's even the point of spamming about Nvidia GPUs in Mediatek chips, if you're just going to render it in the cloud? And the CPU cores in today's Chromebooks are either ARM cores a couple of generations old or Intel Gemini Lake. So you're also saying that CPU performance is irrelevant.

              Well, I'm no expert on game streaming, but I know enough to say that it works only in certain geographic areas. And lots of games aren't available on game streaming platforms. So, that's two reasons people might care. And to the extent that they can just use game streaming, then basically all the rest of your points are irrelevant.

              Comment


              • #87
                Originally posted by coder View Post
                Don't you mean Taiwanese? It makes a difference! I assume they have offices & manufacturing in the mainland, but their HQ is in Hsinchu, Taiwan.
                Until China invades Taiwan, they should be fine.
                coder, if the USA government starts looking closely at Mediatek's ownership, they will find it is over 51% Chinese. As long as the USA keeps believing that an HQ in Taiwan makes them Taiwanese, everything will be good.

                Comment


                • #88
                  Originally posted by coder View Post
                  If you're trying to make projections about the competitive landscape, they cannot be ignored.
                  I did not leave ANY other SoC manufacturer out for any nefarious or purposeful reason. But now that you've brought the topic up with your insinuation, I will say that outside Apple, Google, Nvidia, Mediatek, Samsung and Qualcomm, there are no other SoC designers and/or manufacturers that matter. Not even combined. There may be some obscure ones in China, but outside China, when you combine the American, European and Indian markets, no other ARM or RISC-V SoC designers need be mentioned beyond the six above.

                  Comment


                  • #89
                    Originally posted by coder View Post
                    Trying to defend with rhetoric what you can't refute with facts is not actually winning.

                    The performance advantage of dGPUs is irrefutable. PCIe slots are good for other things, besides. True, they're not as essential as they once were, and eGPUs can do alright, but ARM will never overtake the PC market until there are comparably-priced, standard form-factor ARM boards with PCIe 4 slots that can run dGPUs and NVMe SSDs. Maybe you don't care about that stuff, but a lot of people do.


                    This is ironic, because essentially what you're saying is that GPU power is irrelevant. So then, what's even the point of spamming about Nvidia GPUs in Mediatek chips, if you're just going to render it in the cloud? And the CPU cores in today's Chromebooks are either ARM cores a couple of generations old or Intel Gemini Lake. So you're also saying that CPU performance is irrelevant.

                    Well, I'm no expert on game streaming, but I know enough to say that it works only in certain geographic areas. And lots of games aren't available on game streaming platforms. So, that's two reasons people might care. And to the extent that they can just use game streaming, then basically all the rest of your points are irrelevant.
                    No. And your comment shows a complete and profound lack of discernment about past compute platforms and their evolution up to today and into the near future (by 2030).

                    GPUs will ALWAYS matter. But what matters most is WHERE those GPUs will be located. Will they be embedded in a SoC? Will they be on an antiquated PCIe slot? Will they be on a chiplet, or a series of chiplets chained together by CXL, Infinity Fabric, CCIX or Gen-Z links?

                    OR... offsite and offboard altogether, à la STADIA?

                    You see... with the advent of 5G, we are fast approaching the time when cellular latencies will approach the latencies found on PCIe slots and across the bus of a motherboard. This will be borne out with the arrival of 5.5G in 5 years, and it will be obvious with the rollout of 6G by 2030, one of whose PRIMARY objectives is NOT JUST the inexorable march of downstream and upstream speed in Mbps, but an ORDER OF MAGNITUDE DECREASE in latency.

                    What that will entail is that even your Linux desktop of the future, let alone your smartphone and Chromebook, will have the power of an ENTIRE data center at its disposal, with latencies that rival a board connected to a PCIe slot.
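
                    To put that claim in numbers, here's a rough latency-budget sketch. Every figure below is an illustrative order-of-magnitude assumption (a local PCIe round trip sits around a microsecond; 5G's best-case air-interface target is around a millisecond), not a measurement.

                    ```python
                    # Rough latency budget: cloud GPU vs. local PCIe (illustrative numbers).
                    LATENCIES_S = {
                        "PCIe round trip (local dGPU)": 1e-6,   # ~1 microsecond, assumed
                        "5G air interface, best case":  1e-3,   # ~1 ms URLLC target, assumed
                        "5G + backhaul to data center": 10e-3,  # ~10 ms end to end, assumed
                        "hoped-for 6G end to end":      1e-3,   # the order-of-magnitude drop claimed above
                    }
                    FRAME_BUDGET_S = 1 / 60  # 16.7 ms per frame at 60 fps

                    for name, seconds in LATENCIES_S.items():
                        share = seconds / FRAME_BUDGET_S * 100
                        print(f"{name:30s} {seconds * 1e3:7.3f} ms ({share:5.2f}% of a 60 fps frame)")
                    ```

                    Even if 6G hits a millisecond end to end, that is still about a thousand times slower than a local PCIe hop; the real question is whether it fits inside a frame budget, and at 60 fps it does.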

                    Personal desktop computing, or what we used to call the "Micro-computer", was born from the economies of scale that let relatively cheap (at the time) tech fit into a small form factor, compared to the prevailing compute paradigm of that day, the "Mini-computer". IF you're old enough, like me, the "Mini-computer" was a revelation, because before it the prevailing compute paradigm was the Mainframe. And ROOM-sized Mainframes at that. The "Mini" could fit on a desk. A really LARGE desk, but a desk. So really it was the first "desktop" computer. But the first PERSONAL computer was the even smaller "Micro-computer".

                    This evolution in "smallness" has continued apace, to the point of smartphones with the power of Cray supercomputers from the late '90s and early 2000s. Raspberry Pis, IoT, etc.

                    But... Moore's Law is dead. And making everything smaller is dead too. Now we see in the x86 world that the monolithic die is a diminishing thing, particularly designs like Carrizo and Bristol Ridge, built as complete, monolithic, heterogeneous compute dies. Intel saw the future and never really put the GPU on the die; they put their shitty GPUs in the package. Instead of a SoC, it was a SiP: System in a Package.

                    Apple has even done this with the M1 to get WAY more performance out of the same base as the A14 SoC. They took the RAM off the "slot", put it into the package, and tightly linked it to the SoC. Apple's M1 is a SoC in a SiP.
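
                    For what moving the RAM in-package buys, a rough sketch using commonly cited figures (M1: LPDDR4X-4266 on a 128-bit bus; a typical desktop: dual-channel DDR4-3200). Treat these as ballpark numbers, not a spec sheet.

                    ```python
                    # Ballpark peak memory bandwidth: RAM-in-package vs. DIMM slots.
                    def mem_gbytes_per_s(mt_per_s, bus_bits):
                        """Peak bandwidth in GB/s from transfer rate (MT/s) and bus width."""
                        return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

                    print(f"LPDDR4X-4266, 128-bit (M1-style, in package): ~{mem_gbytes_per_s(4266, 128):.1f} GB/s")
                    print(f"DDR4-3200, dual channel (DIMM slots):        ~{mem_gbytes_per_s(3200, 128):.1f} GB/s")
                    ```

                    The raw peak isn't even the main win; the shorter in-package traces buy power and latency improvements that a bandwidth formula can't capture.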

                    AMD has gone whole hog on "chiplets". It's getting to the point where the entire CPU package is the size of an entire Raspberry Pi.

                    Pretty soon, you won't have room for slots, because the entire motherboard WILL BE the SiP. It will be like Cerebras's wafer-scale compute platform, where the freaking wafer is 1/3 of the entire rack!!

                    Nope... we have reached a two-tier compute design paradigm when it comes to "personal" computing. It will be:

                    1: Increasingly consolidated but MORE heterogeneous, with highly specialized chips for specific tasks placed not on the board, or even in the SoC, but at the sensor that needs such hyper-local compute. Think smartphones with computational photography elements where the compute DSP or NPU is at the lens itself, NOT a DSP or NPU in the SoC. You can still have a SoC with both a DSP and an NPU, but you will need those for other AI and sensor needs APART from photography, seeing as the camera has its own compute unit where it needs it and when it needs it.

                    2: Increasingly OFF board and OFF device. Streaming game services such as Stadia and GeForce NOW point the way. But ANY data-center-scale compute need can be streamed to whatever device you have, from smartphone to "desktop", and this will be obvious and prevalent by 2030 and the rollout of 6G cellular.

                    Between these two paradigms, the need for someone to plug a compute node into a "PCIe" slot will diminish precipitously.

                    Think of it like this. When's the last time you plugged in a 56k dial-up modem card? Particularly the ones with Lucent chips and NOT the Rockwell chip? In fact... when's the last time you plugged in a modem card at all? It's all on the motherboard now.

                    Sound card anyone? 98% of the world doesn't even know they STILL make sound cards.

                    What about FireWire? When's the last time you needed a discrete controller chip on a card to ingest video, now that USB 3.x exists? Yes, the pros still need cards from AJA, Blackmagic, Bluefish, etc., but for the 98% of people out there, USB 3.x is all they will ever need. Hell, even APPLE doesn't ship FireWire anymore.
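
                    As a sanity check on that point, a small sketch: the link rates are the standard ones (FireWire 800 at 0.8 Gb/s; USB 3.0 at 5 Gb/s raw with 8b/10b encoding), and the video format is an illustrative assumption.

                    ```python
                    # Can USB 3.0 carry uncompressed consumer video where FireWire 800 couldn't?
                    def raw_video_gbps(width, height, fps, bits_per_pixel):
                        """Uncompressed video bandwidth in gigabits per second."""
                        return width * height * fps * bits_per_pixel / 1e9

                    stream = raw_video_gbps(1920, 1080, 60, 24)  # 8-bit RGB 1080p60, assumed format
                    usb3 = 5.0 * 8 / 10                          # 5 Gb/s raw minus 8b/10b encoding
                    fw800 = 0.8                                  # FireWire 800 link rate

                    print(f"Uncompressed 1080p60: {stream:.2f} Gb/s")
                    print(f"USB 3.0 effective:    {usb3:.2f} Gb/s -> {'fits' if stream < usb3 else 'too big'}")
                    print(f"FireWire 800:         {fw800:.2f} Gb/s -> {'fits' if stream < fw800 else 'too big'}")
                    ```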

                    Consolidation, integration, miniaturization. That is how we arrived at the prevailing compute paradigm that has driven "desktop" sales into the gutter. The "desktop" now is your LAPTOP. But your REAL personal compute platform is your phone, followed by your tablet, and after that your Chromebook.

                    And all those platforms have cellular capability, either as standard equipment or as an option when ordering. And not a single damn one of them needs a "PCIe" slot.

                    And for REAL DESKTOPS, that need will only shrink going into 2030.
                    Last edited by Jumbotron; 15 April 2021, 09:24 PM.

                    Comment


                    • #90
                      Originally posted by coder View Post
                      That's great and all, but really not so different than what they did with their Tegra SoCs that made it into some phones, tablets, a Google Pixel notebook, and finally the Nintendo Switch. At the end of the day, they're still going to be APUs, with APU-level graphics horsepower.

                      What I think most of us really want to see is ARM PCs with like PCIe slots and stuff. As far as I'm aware, the closest to that is the Huawei Kunpeng-powered machine launched almost a year ago.
                      I wouldn't be surprised, now that Nvidia and Mediatek are dipping a toe into the Chromebook market with this partnership, if Nvidia at some point makes an Nvidia-branded SHIELD Chromebook, particularly after Google debuts its own custom SoC, codenamed "Whitechapel", in the Pixel 6 this fall.

                      The SHIELD brand name still has some cachet in the Android world. Also, every single Nintendo Switch, the number one video game platform apart from Android gaming, has an Nvidia ARM SoC in it.

                      I could see Nvidia making a bold statement in the gaming world through Chromebooks with Nvidia SoCs: a synergistic compute paradigm in which GeForce NOW streams into an Nvidia SoC and GPU, and the internal GPU accelerates certain rendering routines and AI fed to it from the GeForce NOW stream.

                      Comment
