Arm Cortex-X3 + Cortex-A715 Announced As Second-Gen Armv9 CPUs


  • #81
    Originally posted by coder View Post
    No, Broadwell was their first 14 nm product and didn't launch until Q1 2015.
You're right, they used 22 nm as early as Q4 2013 on Silvermont, but at that point the node was already a year old; previously they used 32 nm on the Atom cores.

    It's well-known TSMC exaggerates their node names, so what they call 20 nm surely isn't better than Intel's 22 nm.
Better in what category? At density Intel probably won by a wide margin, but I doubt they had better efficiency. Intel's 22 nm was developed for high-clocking designs and TSMC's for low-power designs. I can't find any comparable numbers, though, so that's just my belief.

    You're essentially saying the CPU can work around the limited set of ISA registers through renaming, but that's forcing it to do more work at runtime by looking further out for unblocked instructions.
Yes, but that's what all high-end ARM designs are doing anyway, so there's no real difference from x86.

Considering your turnover point, there is one possibility: if MS released something comparable to Rosetta for Windows and Apple-class ARM chips were available to notebook manufacturers, then it could happen in a few years, not because ARM is superior but simply because it's probably cheaper. If that doesn't happen, we will see x86 still dominating Windows-based machines in 20 years.



    • #82
      Originally posted by Anux View Post
Yes, but that's what all high-end ARM designs are doing anyway, so there's no real difference from x86.
By restricting the ISA registers, you're forcing the CPU to use a larger reorder window to find the same amount of concurrency. So, it's not the same as an ISA with more GPRs.

      The real answer would come from an experiment to recompile GCC or clang by restricting the set of available registers. Then, run a benchmark suite and see how it affects performance. Unfortunately, I don't have time for this.
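A minimal sketch of how that experiment could be approximated without actually rebuilding the compiler (my assumption, not something anyone here has run): GCC's -ffixed-<reg> option removes a register from the allocator, so the same kernel can be built once with the full register set and once with several GPRs taken away, then timed. The register names assume x86-64, and the kernel is just an illustrative placeholder rather than a real benchmark suite.

/* Build the same file twice and compare:
 *   cc -O2 -fno-tree-vectorize -o full regpressure.c
 *   cc -O2 -fno-tree-vectorize -ffixed-r12 -ffixed-r13 -ffixed-r14 -ffixed-r15 \
 *      -o restricted regpressure.c
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1u << 16)            /* 64K elements, small enough to stay cache-resident */

static uint64_t kernel(const uint64_t *a, size_t n)
{
    /* Many simultaneously live integer values, so the allocator starts
       spilling as soon as it runs short of general-purpose registers. */
    uint64_t s0 = 0, s1 = 0, s2 = 0, s3 = 0, s4 = 0, s5 = 0, s6 = 0, s7 = 0;
    for (size_t i = 0; i + 8 <= n; i += 8) {
        s0 += a[i]     * 3;   s1 += a[i + 1] * 5;
        s2 += a[i + 2] * 7;   s3 += a[i + 3] * 11;
        s4 += a[i + 4] * 13;  s5 += a[i + 5] * 17;
        s6 += a[i + 6] * 19;  s7 += a[i + 7] * 23;
    }
    return s0 ^ s1 ^ s2 ^ s3 ^ s4 ^ s5 ^ s6 ^ s7;
}

int main(void)
{
    uint64_t *a = malloc((size_t)N * sizeof *a);
    if (!a)
        return 1;
    for (size_t i = 0; i < N; i++)
        a[i] = i * 2654435761u;             /* arbitrary deterministic fill */

    clock_t t0 = clock();
    uint64_t check = 0;
    for (int rep = 0; rep < 20000; rep++)   /* repeat so the runtime is measurable */
        check ^= kernel(a, N);
    clock_t t1 = clock();

    printf("checksum %llu, %.3f s\n",
           (unsigned long long)check, (double)(t1 - t0) / CLOCKS_PER_SEC);
    free(a);
    return 0;
}

Comparing the two binaries' runtimes (and their disassemblies, to see how much spill code appears) would give a rough idea of what register pressure costs, although it still can't show how much of that cost a wider reorder window hides.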

      Originally posted by Anux View Post
Considering your turnover point, there is one possibility: if MS released something comparable to Rosetta for Windows
      They've had this for a while, but the version supporting 64-bit is only available in Windows 11. Qualcomm confirmed that was done for marketing reasons, not technical ones.

      Originally posted by Anux View Post
and Apple-class ARM chips were available to notebook manufacturers
Obviously not, but Nuvia-based Qualcomm chips will be sampling towards the end of this year. Nuvia was formed by several of the designers of Apple's CPUs, and they claim to have perf/W even exceeding Apple's.

      Originally posted by Anux View Post
then it could happen in a few years, not because ARM is superior but simply because it's probably cheaper.
      Perf/$ is obviously relevant, but Qualcomm seems intent on positioning Snapdragon-based laptops as a premium product. I think perf/$ matters, but not as much in laptops as perf/W and single-threaded perf. Qualcomm is making a big thing about battery life and 5G connectivity, the latter of which is probably a relevant point for how cloud-centric some enterprises are becoming.



      • #83
        Originally posted by t.s. View Post
Again,
- non-upgradeable vs. upgradeable
- Linux vs. non-Linux (he wants to use it for OpenStack)
- you can still cut the price by buying a separate mobo + PSU instead of the DeskMini X300
- the Mixtile Blade 3 is more expensive than these
- why not buy the HK Odroid N2+ if you want a cheaper product?
- or a Raspberry Pi Zero, since a slower CPU is apparently fine.
And stop pretending not to know English and writing in broken English.
I did not want to make a comparison that is exactly like-for-like, because that's nearly impossible.
Everything you say is right, and I have no problem with that.
But for the 99% of normal users, most of your points simply do not matter.
The stuff I linked comes with a screen, at smartphone and tablet sizes...
That makes it hard to believe your solution comes out cheaper in any meaningful way.
It may be the right solution for an expert like you, but for the masses that does not count.

It just shows one thing: ARM is competitive in price, or even cheaper. It does not have the top performance, but they don't even want that title; they aim for performance per watt and performance per dollar, and top performance is not important to them, nor to the masses.

        "And stop pretending doesn't know english and using broken english"

I am a German with dyslexia ... sorry



        • #84
          Originally posted by coder View Post
          It's interesting that you mention Infinity Fabric, because UCIe is implicitly compatible with that (i.e. by virtue of supporting proprietary protocols) and AMD is on the UCIe standards committee. So, there's an example of where Intel derived more benefit by having AMD (among others) on board to establish an ecosystem and avoid a competing standard emerging, than by trying to shut them out.
          When it goes, I think it'll go quickly. There will likely be a tipping point, rather than a slow, steady shift like we've seen to date.
          Windows is on ARM, to the extent it's even relevant to the question. Once the corporate IT community gets more comfortable with that and Qualcomm launches their Nuvia-based products, the premium business laptop market might switch rather suddenly.
          The cloud is mostly Linux, and the ARM transition there is already underway. So, the last bastion of x86 is likely to be gaming desktops and workstations. You can already buy Ampere-based workstations. I wonder if Nvidia could offer a more complete ARM-based workstation line than what they've so far done for the x86 market, with their DGX station. When Grace launches, they'll have some real incentive to do so.
          Well, if the transition picks up pace from here, then it's a foregone conclusion it'll be ARM and not RISC-V. If x86 can hold most of its market share for like another 5 years, we might be seeing RISC-V achieve dominance instead.
          The biggest threat to ARM is probably the uncertainty created by Nvidia's acquisition attempt, and then the subsequent budget cutting they've had to do since. Another is the way RISC-V is eating into their embedded market share. If ARM can't muster the resources needed to design properly competitive cores and keeps trying to repurpose a couple microarchitectures to tackle all markets from phones to servers, then x86 could hang on for longer. But if AMD or Intel releases CPUs based on their own ARM cores, then the sun will have truly set on x86 and ARM's dominance is virtually assured.
          What is x86 fundamentally better suited for and why? The only answer I can see is legacy code.
          I believe in tipping points.
He did not respond to my Google News references showing that the x86 market is already declining...

It is a mix of high consumer energy prices, which push people toward power-saving technology, and the factor of good-enough and fast-enough but cheaper ARM tech...

I agree with your tipping-point theory; I also believe in fast switches rather than his theory of 15 years...

As soon as the mass market is satisfied with good-enough and fast-enough ARM chips with power savings, the x86 market will collapse very fast.

The point in reality is this: for x86, the Steam Deck is the best-case scenario, the only example of x86 going mobile against the trend... and then people play games and only get 4 hours of battery time.

And at the same time, people play similar games on an Apple M2 and get 10-15 hours of battery time.

This means x86 will die in the notebook/laptop market very fast, as soon as the masses can get a good-enough, fast-enough, cheap ARM product.



          • #85
            Originally posted by qarium View Post
The point in reality is this: for x86, the Steam Deck is the best-case scenario, the only example of x86 going mobile against the trend... and then people play games and only get 4 hours of battery time.

And at the same time, people play similar games on an Apple M2 and get 10-15 hours of battery time.

This means x86 will die in the notebook/laptop market very fast, as soon as the masses can get a good-enough, fast-enough, cheap ARM product.
            I don't worry too much about Apple, because most Mac buyers are going to buy one as long as it's not utter trash. And most Windows people who haven't already switched probably aren't about to.

            For me, a lot hinges on Qualcomm/Nuvia and how good their SoC turns out to be. Because Qualcomm already has a couple big OEMs on board, and with a highly competitive alternative to x86, they can really move markets.

            Of course, Qualcomm could shoot themselves in the foot by demanding unrealistic pricing. People aren't going to abandon x86 for something more expensive that's not really a lot better. But, if it's comparably fast, has better battery life, and is even a little bit cheaper, then they probably have all the ingredients needed to gain some real market share.



            • #86
              Originally posted by qarium View Post
The point in reality is this: for x86, the Steam Deck is the best-case scenario, the only example of x86 going mobile against the trend.
It's an old CPU design on an old node; I wouldn't call that the best case.

and then people play games and only get 4 hours of battery time

And at the same time, people play similar games on an Apple M2 and get 10-15 hours of battery time.
On the M1, playing games reduces the runtime to 5 h; the M2 doubling that sounds a little too good. Do you have a link?



              • #87
                Originally posted by Anux View Post
On the M1, playing games reduces the runtime to 5 h; the M2 doubling that sounds a little too good. Do you have a link?
5 h while playing games sounds really good, though. Maybe it's because most laptops I've had were more on the budget side, but those lasted 3-5 h during light usage. I never played on battery power, but I'd expect an hour or less of autonomy.



                • #88
                  Originally posted by sinepgib View Post

5 h while playing games sounds really good, though.
Not if you compare the M1 with its 50 Wh battery to the Steam Deck, which gets 4 h out of 40 Wh while being "inferior" in all hardware aspects.



                  • #89
                    Originally posted by Anux View Post
Not if you compare the M1 with its 50 Wh battery to the Steam Deck, which gets 4 h out of 40 Wh while being "inferior" in all hardware aspects.
Interesting. How does it achieve that? It does look more or less the same when adjusted for battery capacity.
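A quick back-of-the-envelope check with the numbers quoted above (assuming both run the battery flat): Steam Deck: 40 Wh / 4 h ≈ 10 W average draw; M1: 50 Wh / 5 h ≈ 10 W average draw. So adjusted for battery capacity, the sustained gaming power draw does come out roughly the same.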



                    • #90
                      Originally posted by Anux View Post
                      Not if you compare the M1 with 50 Wh battery to the Steamdeck that gets 4 h out of 40 Wh while beeing "inferior" in all hardware aspects.
The Steam Deck isn't a laptop, though. The benchmark against which it should be measured is the Nintendo Switch, perhaps also gaming on flagship smartphones/tablets, and maybe we can even look back to older handheld gaming systems, like the Nintendo DS and PSP?

                      I have no idea about any of these systems. The last handheld gaming system I used for any amount of time was a Nintendo Gameboy. Yes, the original monochrome version.

