NVIDIA Reportedly Near Deal To Buy Arm For $40+ Billion Dollars


  • #61
    Originally posted by vladpetric View Post

    The main reason NVidia dominates the discrete GPU market https://wccftech.com/nvidia-geforce-...on-in-q2-2020/ is their drivers. They don't generally have better hardware than AMD (at times AMD GPUs were in fact better from a hardware standpoint), but their software has kicked ATI/AMD's butt for more than 20 years now.

    You can't realistically expect them to open-source their graphics drivers; it would be completely suicidal (and would knock around $100 billion off their market cap).
    I think you underestimate how hard it would be for AMD to simply take a future open-source NVidia driver and make it magically work on their hardware. Drivers are an example of software that really doesn't hold any IP value.

    Or do you think they are worried about reverse engineering? Hah, even if I had the NVidia driver source and their internal hardware docs, I still wouldn't be able to rent a multi-million-pound fabrication facility to create a GPU.

    Sure, Apple might be able to steal it, but unlike their poor defenceless users, NVidia could sue the pants off them and possibly make even more money from it! It would be cheaper for Apple to buy them (and ARM, as a big bundle deal).
    Last edited by kpedersen; 13 September 2020, 12:43 PM.



    • #62
      Originally posted by PerformanceExpert View Post
      See eg. the Nuvia graphs for Geekbench. There are also SPEC2006 results. Arm cores achieve better performance while using far less power by aiming for high IPC rather than 5GHz.
      I'm not very confident in the NUVIA claims. First, we haven't seen them in real life; second, no revolution ever really happens (Ryzen was a very bold move, but AMD did not crush Intel by that much - expect a 10 to 15% improvement over a competitor; 40% sounds like a marketing claim aimed at the gullible).

      NUVIA is pushing an SoC into servers - that is indeed interesting, but I am not sure how many datacenters/professional IT shops will try it.

      Comparing against an SoC that doesn't have to care about multiple PCI Express buses, a very fast RAM bus, and enormous caches is interesting, but you then have to prove that we don't need all those extra features that Intel/AMD provide.

      Don't misunderstand: I have always supported ARM, I like clean designs, and I think it works great in embedded and specific applications. But I think there are some fantasies about ARM being that much superior to x64, and the software and hardware ecosystem is still lagging in availability and support.



      • #63
        Originally posted by kpedersen View Post

        I think you underestimate how hard it would be for AMD to simply take a future open-source NVidia driver and make it magically work on their hardware. Drivers are an example of software that really doesn't hold any IP value.

        Or do you think they are worried about reverse engineering? Hah, even if I had the NVidia driver source and their internal hardware docs, I still wouldn't be able to rent a multi-million-pound fabrication facility to create a GPU.

        Sure, Apple might be able to steal it, but unlike their poor defenceless users, NVidia could sue the pants off them and possibly make even more money from it! It would be cheaper for Apple to buy them (and ARM, as a big bundle deal).
        The term driver is a bit of a misnomer here, because it implies something that just controls the hardware (like the driver for an Ethernet card or an SSD).

        But GPU drivers are complex, smart rendering pipelines, and what NVidia does well is hardware/software co-design.

        If their rendering stack were open source, reverse engineering the hardware part wouldn't be that difficult. The thing is, CUDA cores are really simple (unlike CPU cores), and they derive their power from sheer numbers. And AMD has some pretty good architecture too ...
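
        To make the "simple cores in high numbers" point concrete, here is a minimal C sketch of the SIMT-style execution model. It is illustrative only (the grid-stride pattern and all names are mine, not NVidia's code); a sequential loop over lane IDs stands in for the hardware parallelism:

        #include <stdio.h>

        #define N     1024   /* problem size */
        #define LANES 256    /* stand-in for the number of simple cores */

        /* The per-lane "kernel": trivially simple, no branches, no smarts.
         * On a GPU every lane would run this body simultaneously. */
        static void vec_add_lane(const float *a, const float *b, float *c,
                                 int lane, int nlanes, int n)
        {
            for (int i = lane; i < n; i += nlanes)  /* grid-stride loop */
                c[i] = a[i] + b[i];
        }

        int main(void)
        {
            static float a[N], b[N], c[N];
            for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }

            for (int lane = 0; lane < LANES; lane++)  /* sequential here; parallel in silicon */
                vec_add_lane(a, b, c, lane, LANES, N);

            printf("c[100] = %f\n", c[100]);          /* expect 300.0 */
            return 0;
        }

        On real hardware the outer lane loop disappears: thousands of CUDA cores each execute the per-lane body at once, which is exactly why keeping that body trivially simple pays off.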



        • #64
          Nvidia is going forward with buying ARM?

          If I had a vote on it ... it would be ...



          • #65
            Originally posted by nuhamind2 View Post
            if NVidia did buy it, would it fall under the sanctions ban for Huawei too?
            Arm is already banned from working with Huawei because there is some US IP involved in ARM. This is the US blowing its brains out again for cheap domestic politics, where hate rules over strategy. Any company that has licensed any US tech can be banned from selling to Huawei or any other entity the US decides to throw a hissy fit against. That means if you work with Americans, your company can evaporate at any minute. That is a very strong incentive for companies not to co-operate with or license from US companies, and it will hurt investment in the US.

            The other side of this is that we are headed for a world where the NATO states use one technology stack and the rest of the world uses another, as China and its partners pour ungodly sums of money into building up their chip design and fab capabilities. With economies of scale greater by orders of magnitude, they will eventually eclipse the technology coming out of the US. The US is trading short-term gain for long-term failure. You can expect open source software and collaborative research to get dragged into this fight as well; it will hurt everyone.



            • #66
              Originally posted by PerformanceExpert View Post

              The graph does not show Nuvia's estimate for their future core, so why even mention it? And the graph shows an Arm core from last year beating high-end x86 cores on raw performance and power. You have to be totally blind to not be able to read that graph - one axis is single core performance, the other is power (higher and more to the left is better).

              Arm cores are already faster than x86 and are using far less power at the same time - so are the laws of physics violated?
              The only ARM cores that do that in independent, third-party tests are Apple's. Apple put a lot of time and resources into building a decent microarchitecture, which is why they can afford to switch to ARM. Anyway, buying ARM won't help with that (ARM's own designs have improved in recent years, but are nowhere near desktop grade ...).

              Benchmarks published by the same company that builds the chip aren't scientific. E.g., in machine learning, to keep things honest and avoid overfitting, you keep distinct data sets for training and testing.

              Nothing like that is done in CPU benchmarking. The methodology problems and conflicts of interest inherent to CPU benchmarking are really mind-boggling.
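
              For what a more independent measurement would minimally look like, here is a hedged C sketch of a timing harness: many runs, median reported, workload picked by the tester rather than the vendor (the workload below is a meaningless stand-in):

              #include <stdio.h>
              #include <stdlib.h>
              #include <time.h>

              #define RUNS 21

              static int cmp_double(const void *x, const void *y)
              {
                  double a = *(const double *)x, b = *(const double *)y;
                  return (a > b) - (a < b);
              }

              /* Stand-in workload; a real harness would use held-out inputs
               * chosen by a third party, not by the CPU vendor. */
              static volatile unsigned long sink;
              static void workload(void)
              {
                  unsigned long acc = 0;
                  for (unsigned long i = 1; i < 5000000UL; i++)
                      acc += i % 7;
                  sink = acc;   /* volatile write keeps the loop from being elided */
              }

              int main(void)
              {
                  double s[RUNS];
                  for (int r = 0; r < RUNS; r++) {
                      struct timespec t0, t1;
                      clock_gettime(CLOCK_MONOTONIC, &t0);
                      workload();
                      clock_gettime(CLOCK_MONOTONIC, &t1);
                      s[r] = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
                  }
                  qsort(s, RUNS, sizeof s[0], cmp_double);
                  printf("median: %.6f s\n", s[RUNS / 2]);  /* median is robust to warm-up noise */
                  return 0;
              }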



              • #67
                Originally posted by PerformanceExpert View Post

                The graph does not show Nuvia's estimate for their future core, so why even mention it? And the graph shows an Arm core from last year beating high-end x86 cores on raw performance and power. You have to be totally blind to not be able to read that graph - one axis is single core performance, the other is power (higher and more to the left is better).

                Arm cores are already faster than x86 and are using far less power at the same time - so are the laws of physics violated?
                I mentioned the NUVIA Orion just for context. If the company is quick to advertise a non-existent part with imaginary scores, the other scores from the same company for already-existing parts should at least be questioned. But OK, let's assume the graphs are correct. Since you are not "blind" like me, you can clearly see that nowhere in those graphs does ARM have an absolute performance lead. What they show is that ARM can roughly match x86 performance at lower power. That was my point.

                While I agree that this implies ARM could have much better performance at matching power (say ~20W per core), no graph actually shows that. Should I take for granted that ARM will scale near-perfectly with higher power and frequency, eventually leading it to completely destroy x86 in performance? I do not think so. I need to see that tested in the real world - code compilation, rendering, compression/decompression, crypto (without out-of-core HW engines). Personally I think there is no ARM core on the planet at this moment that can do shit in a single thread against, say, a Zen 2 core clocked at 4.7GHz in those scenarios, but I might be wrong.

                I agree that ARM may be faster when it comes to whitepaper/theory-level semi-bullshit benchmarks and models, such as some arbitrary generic stream of ISA integer instructions and how the core uArch can handle it faster or more efficiently, yes. But when it comes to real-world loads, first, we have no decent ARM desktop to validate that on, and second, performance will most likely be garbage. Maybe you are right about the ARM uArch stuff, like it being more advanced and so on, but every time I see ARM benchmarks on this site, performance is shit every single time. Sorry, but that is true.
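
                As a sketch of the kind of real-world single-thread test being asked for, here is a minimal compression benchmark in C using zlib (build with cc bench.c -lz; the buffer size and input pattern are arbitrary choices of mine, and the same compiler and flags should be used on both the Arm and the x86 box):

                #include <stdio.h>
                #include <stdlib.h>
                #include <time.h>
                #include <zlib.h>

                #define SRC_LEN (64UL * 1024 * 1024)   /* 64 MiB of compressible data */

                int main(void)
                {
                    unsigned char *src = malloc(SRC_LEN);
                    uLongf dst_len = compressBound(SRC_LEN);
                    unsigned char *dst = malloc(dst_len);
                    if (!src || !dst) return 1;

                    for (unsigned long i = 0; i < SRC_LEN; i++)
                        src[i] = (unsigned char)(i % 251);   /* deterministic input */

                    struct timespec t0, t1;
                    clock_gettime(CLOCK_MONOTONIC, &t0);
                    if (compress2(dst, &dst_len, src, SRC_LEN, 6) != Z_OK) return 1;
                    clock_gettime(CLOCK_MONOTONIC, &t1);

                    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
                    printf("compressed %lu -> %lu bytes in %.3f s (%.1f MB/s)\n",
                           SRC_LEN, (unsigned long)dst_len, secs, SRC_LEN / secs / 1e6);
                    free(src);
                    free(dst);
                    return 0;
                }

                Single-threaded, no hardware compression/crypto engines involved, and directly comparable across machines when built identically.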
                Last edited by drakonas777; 13 September 2020, 01:49 PM.



                • #68
                  Originally posted by drakonas777 View Post

                  I mentioned the NUVIA Orion just for context. If the company is quick to advertise a non-existent part with imaginary scores, the other scores from the same company for already-existing parts should at least be questioned. But OK, let's assume the graphs are correct. Since you are not "blind" like me, you can clearly see that nowhere in those graphs does ARM have an absolute performance lead. What they show is that ARM can roughly match x86 performance at lower power. That was my point. While I agree that this implies ARM could have much better performance at matching power (say ~20W per core), no graph actually shows that. Should I take for granted that ARM will scale near-perfectly with higher power and frequency, eventually leading it to completely destroy x86 in performance? I do not think so. I need to see that tested in the real world - code compilation, rendering, compression/decompression, crypto (without out-of-core HW engines). Personally I think there is no ARM core on the planet at this moment that can do shit in a single thread against, say, a Zen 2 core clocked at 4.7GHz in those scenarios, but I might be wrong.

                  I agree that ARM may be faster when it comes to whitepaper/theory-level semi-bullshit benchmarks and models, such as some arbitrary generic stream of ISA integer instructions and how the core uArch can handle it faster or more efficiently, yes. But when it comes to real-world loads, first, we have no decent ARM desktop to validate that on, and second, performance will most likely be garbage. Maybe you are right about the ARM uArch stuff, like it being more advanced and so on, but every time I see ARM benchmarks on this site, performance is shit every single time. Sorry, but that is true.
                  You are correct but somewhat confused. Arm cores can also use the same lithography and pipeline stages to reach 4.7GHz, with better real-life IPC. There is no architectural wall there, as you seem to believe, and you are also missing the dynamics, because that could be done tomorrow - whereas how much can x86 cut its energy consumption, even 5-10 years from now? Also, if you had gone to a proper school you would understand that CISC does not convert anything inside to RISC, and cannot beat a clean, proper RISC at anything except programming and compiler friendliness. That is all x86 has had going for it for many years now, not some crazy superpower.



                  • #69
                    Originally posted by artivision View Post

                    You are correct but somewhat confused. Arm cores can also use the same lithography and pipeline stages to reach 4.7GHz, with better real-life IPC. There is no architectural wall there, as you seem to believe, and you are also missing the dynamics, because that could be done tomorrow - whereas how much can x86 cut its energy consumption, even 5-10 years from now? Also, if you had gone to a proper school you would understand that CISC does not convert anything inside to RISC, and cannot beat a clean, proper RISC at anything except programming and compiler friendliness. That is all x86 has had going for it for many years now, not some crazy superpower.
                    Please go back to school (a better one), because you're wrong. ARM cores using the same litho as x86 cores perform slower than (for example) Ryzen cores. And when you crank up the clocks on ARM cores to try to make them competitive with x86 cores, guess what? Power efficiency drops to the same level or worse. The reason ARM is always perceived as having better power efficiency is that its cores live further down the speed/power curve most of the time.
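
                    To put rough numbers on that curve: the first-order CMOS dynamic-power model (a textbook approximation, not vendor measurements) is

                    P_{\text{dyn}} \approx \alpha\, C\, V^{2} f, \qquad V \propto f \;\Rightarrow\; P_{\text{dyn}} \propto f^{3}

                    so clocking the same core 1.5x higher costs roughly $1.5^{3} \approx 3.4$x the power. Any core, Arm or x86, looks efficient low on that curve and inefficient when pushed up it.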

                    If you really think that litho is all that determines the speed of a processor, then you missed the chapter on pipelining. You could be forgiven for not understanding that if this were the 1960s, but six decades have passed, and anyone who 'went to a good school' would understand it.
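
                    A one-line version of that chapter, from the standard textbook model: an $s$-stage pipeline finishes $n$ instructions in roughly $s + (n - 1)$ cycles instead of $s \cdot n$, so for large $n$ throughput improves by a factor approaching $s$. That, not lithography alone, is what determines how far clocks can scale.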



                    • #70
                      I do not consider myself a CPU architecture guru, TBH. From my limited understanding, x86 as such is CISC, but it actually uses a RISC-like engine internally, splitting large x86 instructions into uops, while ARM, itself being RISC, exposes smaller instructions at the ISA level. So first, this means ARM's IPC is basically not comparable to x86 IPC at the ISA level, since ARM instructions do less computation on average; to match the same performance, ARM's IPC and/or frequency must actually be higher than x86's. Second, this means ARM most likely also generates heavier RAM traffic, which also costs power. Besides all that, the transistor budget for caches and additional logic would also have to grow. Perhaps ARM can indeed build that magical extended-power-envelope HPC CPU tomorrow, one that would destroy any x86 at any given load, but I somehow doubt it's that simple. Correct me if I'm wrong.
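
                      To illustrate the uop-splitting point, here is one C statement with typical compiler output for both ISAs in the comments (representative of gcc at -O2; exact codegen varies by compiler and flags, so treat it as a sketch):

                      /* the same source line, two ISAs */
                      long counter;              /* lives in memory */

                      void bump(long delta)
                      {
                          counter += delta;
                          /* x86-64 (CISC): one architectural read-modify-write
                           * instruction, cracked into load+add+store uops inside
                           * the core:
                           *     addq  %rdi, counter(%rip)
                           *
                           * AArch64 (RISC): the load, add, and store are separate
                           * architectural instructions (plus address formation):
                           *     adrp  x1, counter
                           *     ldr   x2, [x1, :lo12:counter]
                           *     add   x2, x2, x0
                           *     str   x2, [x1, :lo12:counter]
                           */
                      }

                      Because the x86 core cracks that single instruction into micro-ops anyway, architectural instruction counts differ even when the executed micro-op counts are similar - one reason comparing "IPC" across the two ISAs is slippery.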
                      Last edited by drakonas777; 13 September 2020, 03:47 PM.

