Jon Masters Leaving NUVIA, Returning To Red Hat


  • #21
    Originally posted by Jumbotron View Post
    Jon probably realized that Apple's A14 SoC that's coming out in October for the iPhone and iPad and the higher performance version of the A14 for the first ARM powered Macbook would already be 90% of the performance delta of Nuvia's chip at a fraction of the cost.

    Plus the future is ARM on Mac and Windows. Jon probably wants to continue helping Red Hat/IBM make ARM the future of Linux, which will have a bigger impact than a Silicon Valley "Unicorn" like Nuvia. I wish Nuvia well. The more ARM and its ISA is pushed and refined, the better at breaking the ever more stale and rickety x86 hegemony, so we can FINALLY get A.I. driven Silicon in our hardware that's BOTH powerful AND power efficient. You're never getting both with x86. Never.
    What the hell is "A.I. driven Silicon"?

    And an ISA has nothing to do with being powerful or power efficient today.
    At least for anything resembling a modern CPU.
    ISAs stopped having a meaningful effect on power efficiency something like 15 years ago, if not longer.
    The front-end decode into a macro-op-fused, VLIW-whatever backend is more or less the same work for everyone.
    The ARM ISA is as old and broken by today's standards as x86 is. Nobody cares.
    Because nobody relies on the front end the way they did 25-30 years ago.

    A CPU's compute power and power efficiency are largely determined by other metrics:
    power budget, transistor count, clock frequency and fabrication process.
    If you spend x watts on y transistors, clock them the same and fabricate them in process z, you get more or less the same performance, be it x86 or ARM.
    ASIC teams work within limited construction budgets. There is no magic sauce just because your front end carries an ARM acronym.

    You can't halve any of these metrics while keeping the others fixed and expect to out-compete everybody else.
    A 5W CPU will never compete with a 100W CPU built from the same transistor budget on the same fabrication process. Something has got to give.
    A 5W CPU will be absolutely slaughtered by anything spending 100W in the same fabrication technology.
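The power-budget argument above can be put in rough numbers with the classic CMOS dynamic-power model, P ≈ α·C·V²·f. A minimal back-of-the-envelope sketch (every constant below is an illustrative guess, not data from any real chip):

```python
# Classic CMOS dynamic-power model: P ~= alpha * C * V^2 * f.
# All constants are illustrative guesses, not measurements of real silicon.

def dynamic_power(alpha, cap_farads, volts, freq_hz):
    """Switching power: activity factor * capacitance * voltage^2 * frequency."""
    return alpha * cap_farads * volts ** 2 * freq_hz

# One hypothetical design at two operating points.
p_low = dynamic_power(alpha=0.1, cap_farads=7e-8, volts=0.7, freq_hz=1.5e9)
p_high = dynamic_power(alpha=0.1, cap_farads=7e-8, volts=1.1, freq_hz=4.0e9)

print(f"low power point:  {p_low:.1f} W at 1.5 GHz")   # ~5.1 W
print(f"high power point: {p_high:.1f} W at 4.0 GHz")  # ~33.9 W

# Frequency went up ~2.7x, but power went up ~6.6x, because reaching the
# higher clock also requires a higher voltage, and power scales with V^2.
print(f"freq ratio:  {4.0e9 / 1.5e9:.1f}x")
print(f"power ratio: {p_high / p_low:.1f}x")
```

Which is the point: you can't just clock the 5 W part into 100 W territory on the same process, because power grows superlinearly with clock once voltage has to rise too.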

    AMD and Intel CPUs are as different on the inside as Intel vs ARM or AMD vs ARM designs.
    The ISA front-end decode work is just a SPECK in the transistor usage budget.

    So: ARM can compete on high-end servers, but they will spend the same power as x86 to do so.
    And x86 can compete on low-end battery devices, but they will be as slow as ARM while doing so.
    It's just a question of implementation.

    No magic sauce.

    Comment


    • #22
      I love how one man's personal preference to return to his former employer sets off all kinds of irrelevant theories and debates. Instead of trying to read between the lines and connect the dots that aren't there, how about just taking the guy at his word and not seeing his decision as some sort of industry microcosm?

      Comment


      • #23
        Originally posted by DanL View Post
        I love how one man's personal preference to return to his former employer sets off all kinds of irrelevant theories and debates. Instead of trying to read between the lines and connect the dots that aren't there, how about just taking the guy at his word and not seeing his decision as some sort of industry microcosm?
        +1 thank you.

        Comment


        • #24
          Originally posted by starshipeleven View Post
          And this is where you are wrong, as usual.
          As usual you don't know what you are talking about. Having observed more than a few moves like this over the years, it is almost certainly a case of an individual trying to avoid a steaming heap of crap.

          High-profile individuals are expensive, and keeping them after their job is done doesn't bring any benefit. So they jump from one company to the next once their work is done.

          This is especially true for a startup, which can't waste money if it wants to survive. This guy isn't at the scale of the chip designer who jumped between AMD (where he helped create Ryzen), Tesla, and Apple (where he helped create their ARM cores), among others, and who will jump again at the end of his current job, but he's still high-profile enough to be expensive for a smaller company that isn't even profitable yet.

          I wouldn't be surprised to know that there was an arrangement with Red Hat, and they let NUVIA borrow this guy for a while and then take him back at the end.
          A company with good relationships with its employees would leave the door open. No secret agreement is needed.

          Comment


          • #25
            Originally posted by Jumbotron View Post


            LOL...can ALWAYS count on starSUCKeleven to shitpost his enormous ignorance of tech and silicon trends masked behind arrogance and self perceived knowledge. Much like the present day President of the United States.
            Actually, I think the president has more reasoning behind his points of view than Mr. starSuck. At the very least the president wants America to meet the challenge of Covid in a positive way; the rest of America wants people to live in fear.

            Comment


            • #26
              Originally posted by DanL View Post
              I love how one man's personal preference to return to his former employer sets off all kinds of irrelevant theories and debates. Instead of trying to read between the lines and connect the dots that aren't there, how about just taking the guy at his word and not seeing his decision as some sort of industry microcosm?
              It would be nice if you could take corporate statements at face value, but public personnel communications these days are always guarded and nonspecific. There have simply been too many dirty, at times illegal, statements made by companies about personnel that often the only information you can get these days is dates of employment. For higher-level employees, any public statement is often vetted by legal.

              Comment


              • #27
                Originally posted by wizard69 View Post
                It would be nice if you could take corporate statements at face value..
                But this looks to be Jon's personal blog rather than a corporate statement (by either Nuvia or RH). Sorry, but I'm not buying your "steaming heap of crap" conspiracy theory. I will buy that your conspiracy theory is a heap of steaming crap though.

                Comment


                • #28
                  Originally posted by milkylainen View Post

                  So. ARM can compete on high end servers. But they will be spending the same power as x86 to do so.
                  And x86 can compete on low end battery devices. But they will be as slow as ARM while doing so.
                  It's just a question of implementation.

                  No magic sauce.
                  So why did Intel fail to conquer the mobile space, then? They burned quite a lot of money trying to bring x86 into phones, but failed miserably. Of course the ISA is not 100% alone to blame for this, but x86 implementations need to deal with highly complex old tech, which could hurt their competitiveness against ARM implementations. Jim Keller saw some value in reducing the complexities, but I wonder if he had enough time at Intel to fix this (e.g. to invent an x20 ISA which deprecates old cruft within x86).

                  Comment


                  • #29
                    Originally posted by milkylainen View Post
                    5W CPU will never compete with a 100W CPU if spending the same transistors, same fabrication process etc. Something has got to give.
                    A 5W CPU will be absolutely slaughtered by anything spending 100W in the same fabrication technology.
                    Funny then how this graph shows a 4.5W core slaughtering high-end x86 cores that use 4-5 times as much power. How do you explain that, exactly? Similarly, Graviton 2 shows that at 110W you can beat every Intel server in existence and compete with EPYC 2 using less than half the power and a third of the silicon area.

                    Is your point that AMD/Intel are totally incompetent and despite decades of experience still don't know how to design good CPUs? Arm is tiny compared to Intel and manages to design about 5 new microarchitectures per year, of which ~2 are wide, high-end OoO designs showing 20-30% performance gains. Every year. So is it possible that the x86 ISA is a tiny bit more complex than you're claiming?
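The perf-per-watt comparison being made here is simple arithmetic. A tiny sketch with invented scores (none of these numbers are real benchmark results; "chip_arm"/"chip_x86" are purely hypothetical parts):

```python
# Perf-per-watt sanity check with invented numbers -- NOT benchmark data.
# The point: a part can lose on absolute throughput yet win on efficiency.

def perf_per_watt(score, watts):
    """Work delivered per watt of package power."""
    return score / watts

# (aggregate score, package power in W) -- hypothetical values.
chip_arm = (100.0, 110.0)   # lower power budget, slightly lower score
chip_x86 = (120.0, 225.0)   # higher absolute score, much higher power

eff_arm = perf_per_watt(*chip_arm)
eff_x86 = perf_per_watt(*chip_x86)

print(f"arm-like part: {eff_arm:.2f} score/W")  # ~0.91
print(f"x86-like part: {eff_x86:.2f} score/W")  # ~0.53

# The x86-like part is 20% faster in absolute terms, yet the arm-like
# part delivers ~1.7x the work per watt of power consumed.
print(f"efficiency ratio: {eff_arm / eff_x86:.2f}x")
```

Whether the real-world gap comes from the ISA, the microarchitecture, or the target power envelope is exactly what the two posters are disputing; the arithmetic itself is neutral on that.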

                    Comment


                    • #30
                      Originally posted by ms178 View Post

                      So why did Intel fail to conquer the mobile space, then? They burned quite a lot of money trying to bring x86 into phones, but failed miserably. Of course the ISA is not 100% alone to blame for this, but x86 implementations need to deal with highly complex old tech, which could hurt their competitiveness against ARM implementations. Jim Keller saw some value in reducing the complexities, but I wonder if he had enough time at Intel to fix this (e.g. to invent an x20 ISA which deprecates old cruft within x86).
                      Because Intel was not, and is not, used to designing for the mobile space. x86 CPUs have traditionally had a completely different SoC layout, with accelerators and whatnot.
                      Again: nothing to do with the ISA. The x86 design methodology for SoCs on PCBs was a PITA. Very large memory interfaces, I/O pins, accelerators, SPI, I2C, etc. Shitloads of pins, a southbridge, external PHYs, and so on. How many ARM embedded CPUs support 40-128 PCIe I/O lanes while still being "battery" class? None.

                      A server/desktop CPU shrunk down to embedded is usually a bad fit.
                      Especially if the parent company thinks it can just "shrink stuff" and off we go. It has nothing to do with the ISA.
                      Just as ARM was a bad fit for servers and desktops, x86 was a bad fit for embedded. That does not mean the ARM ISA cannot do servers, or vice versa.

                      Comment
