AMD Announces Ryzen 7000 Series "Zen 4" Desktop CPUs - Linux Benchmarks To Come


  • Originally posted by coder View Post
    Uh, we could play this game.
    ISAs Intel ditched since the 1990's:
    • i860
    • i960
    • StrongARM
    • Quark
    • IA64

    Some other notable cancellations:
    • i740
    • Larrabee & Xeon Phi

    Not sure where you got that idea. They all had clones, as did SPARC.
    And ARM definitely competes with its licensees.
    LOL, No, AMD wasn't competitive from about 2006-2017.
    The reason x86 won is simply that it had the biggest market and Intel had an incentive and the resources to sell ever-faster CPUs, year after year. The market was big enough for them to overcome most of the disadvantages of x86, at least for use in applications in laptops and above. Below that, the inherent deficiency of x86 hurt them too badly on perf/W and perf/area.
    It's basically the same story with Apple and iPhone SoCs. The iPhone market was big enough and had high enough margins for them to have the means to do it. And Steve Jobs supplied the vision and ambition at a stage most others probably wouldn't have.
    LOL, when did ARM file for bankruptcy?
    The more you write, the more you expose your complete ignorance. ARM is continuing to develop and refine their designs in many key markets. I could post links, but I take it you're not one of these "evidence-based" types. You see what you want, regardless of where the facts lead.
    They sort of compete in the PC market, since you can run Windows and Linux VMs on their computers and some people indeed buy them for that.
    In the phone market, I'd agree that most iPhone users are going to buy an iPhone pretty much no matter what. Mac has a lot of users like that, but Apple is clearly looking to grow beyond its core userbase, because when you're a $Trillion company, investors want to see you sustaining growth and they can't continue to do that on the backs of the same users.
    The main thing holding back Apple is their focus on phones and laptops. When you remove power limits, they can't scale up single-thread performance quite as high, and they have yet to do a multi-core CPU as big as AMD or Intel.
    I've heard the opposite. Intel's threat of iGPUs forced AMD to buy ATI.
    LOL what?
    Intel can & is losing market share to ARM. You think that's not an incentive?
    If anybody makes a Bulldozer, they're in trouble. Whether they have a major competitor on the same ISA or not.
    Intel was sort of in a similar boat, being stuck on 14 nm forever. They were only kept afloat by virtue of unprecedented demand that nobody else had the fab capacity to fulfill.
    Okay, I'll stop trying to confuse you with facts, as you clearly already know what you want to believe. I guess when ARM finally surpasses x86, you will simply reach for conspiracy theories to explain why.
    Did you read the roughly 100 pages I wrote back and forth with him? His ignorance truly has no limit; he has a complete case of Dunning-Kruger syndrome.

    I've confronted him with historical facts about a thousand times, but he is completely resistant to them.

    It's really sad that, in my view, people like this nearly dominate internet forums all around the world, and most of the time the real experts have no time to confront them with real answers and push real facts.



    • Originally posted by coder View Post
      The efficiency is limited by the microarchitecture as much as the process node, and AMD showed that Zen 4 is more efficient across the range.
      They planned to do a 6 nm refresh of their desktop CPUs, but they cancelled it for reasons we can only speculate about.
      The reason is very simple and well known to insiders in the CPU/GPU market.
      AMD ordered the 6 nm refresh of its 5000-series desktop chips during the chip crisis, when Monero mining was sucking up all the CPUs and Ethereum mining all the GPUs; after the crypto-market crash, that demand cooled down.
      Now all the companies that ordered extra wafers, like Nvidia, AMD, and Intel, have the problem that they ordered too many.
      Nvidia, for example, is delaying its RTX 4000 series because of this: it first needs to sell off its RTX 3000 chips.

      AMD had the choice of stopping 7 nm production and moving to 6 nm, but the Zen 4 CPUs pair a 6 nm I/O die (with iGPU) with their 5 nm compute chiplets, so AMD now simply produces more of those I/O dies for its Zen 4 CPUs instead of producing a 6 nm Ryzen 5000 refresh.
      As AMD's 6 nm GPUs like the 6400/6500 show, the difference between 6 nm and 7 nm is small; AMD would not be able to sell such chips at a reasonably higher price than the 7 nm 5000-series CPUs.

      But one fact is good to know: AMD did not cancel any 6 nm wafer orders at TSMC; it just produces Zen 4 I/O dies instead.
      It also keeps producing its 6 nm GPUs like the 6500.

      Two companies are really in trouble: Nvidia with its RTX 4000 series, and Intel, whose 6 nm GPUs are a failure and not competitive with AMD.

      So right now, only Nvidia and Intel are trying to cancel 5/6 nm wafers at TSMC.

      Originally posted by coder View Post
      However, the 6000-series are still Zen 3. TSMC N6 is portable from N7, meaning you can port to it without having to redesign. Ryzen 6000 is still the Zen 3 microarchitecture, just like how the 12 nm Ryzen 2000 series were still Zen 1.
      Yes, and that's because TSMC N4 is still an N5-family node. It works with the same cell libraries. If AMD's laptop products weren't lagging their desktop/server CPUs, they'd be on the same node.
      However, given that they're staggered, AMD can take advantage of refinements like N6 and N4, and would be foolish not to. That doesn't mean that Ryzen 5000 laptop SoCs were uncompetitive at N7, or that Ryzen 7000 wouldn't be competitive in laptops at N5.
      You are misattributing causality, but that's not a surprise.
      Right now the chip crisis is over because of the crypto-mining crash.

      Apple is moving its TSMC orders from 5 nm to 4 nm and 3 nm.
      AMD is ordering all the TSMC capacity it can get that isn't at Apple-level prices.

      Nvidia wants to delay its 5 nm orders because, with the chip crisis over, its RTX 3000 stock is still too high.

      And Intel is in the worst position: its 6 nm GPUs are not competitive, and it ordered too much capacity at TSMC.

      Originally posted by coder View Post
      I don't know what you mean by that. Intel has what they call "Intel 7" and they use it to make both Alder Lake desktop and laptop CPUs.
      It's pretty hilarious to me that you think Intel can just port their IP to TSMC like that. You clearly don't have the first clue about CPU design or production. For Intel to backport Ice Lake to their 14 nm node was a major, multi-year effort for them. And that was porting from an Intel node to another Intel node.
      There are still some easy-to-port nodes around based on 2D planar transistors, but those end at 12 nm, the most advanced 2D node.
      Porting from something like a 28 nm 2D planar node to a 12 nm 2D planar node is simple.

      But all of these nodes, TSMC 3 nm/4 nm/5 nm/6 nm/7 nm and also Intel's 10 nm, are highly complex 3D (FinFET) nodes.

      And the cost of going from 7 nm to 5 nm, and from 5 nm to 3 nm, is extremely high. I read an article about this; the costs explode.

      The dollar/euro figures just to create the mask sets for these nodes are so high that it's complete insanity.

      We're talking about billions of dollars just to start production, before a single wafer comes out.


      • Originally posted by coder View Post
        Please show your work.
        Not only are they extremely profitable, their 2021 revenues were 4.6 times Intel's and 22.3 times AMD's. You have no concept of just how much money Apple has, and they need to invest some of it somewhere. As long as they deem their CPUs to be a strategic asset, they're going to continue pouring money into their development.
        And if AMD can compete with Intel having revenues just 1/5th as large (and I'm sure a significant part of that is Xilinx), why do you think Apple can't?
        Your logic is just fundamentally flawed, but that's been shown time and again.
        "Please show your work."

        he has no numbers or calucations to proof his words...

        "Your logic is just fundamentally flawed, but that's been shown time and again"

        in my point of view he has no fucking clue what he is talking about...


        • Originally posted by coder View Post
          Uh, we could play this game.

          ISAs Intel ditched since the 1990's:
          • i860
          • i960
          • StrongARM
          • Quark
          • IA64

          Some other notable cancellations:
          • i740
          • Larrabee & Xeon Phi
          Intel has attempted to replace x86 many times in the past, but it never stopped developing x86 cold turkey. StrongARM, though, wasn't developed by Intel; it was acquired from DEC as part of a lawsuit settlement. Larrabee was x86, and not entirely a GPU but not a conventional CPU either. It was an interesting project that would have been revolutionary had it not been sorta canceled. Xeon Phi was the successor to Larrabee, and also still x86.

          Not sure where you got that idea. They all had clones, as did SPARC.

          And ARM definitely competes with its licensees.
          They competed indirectly, not directly. Apple can compete against Dell and Asus, but not against Intel and AMD.
          LOL, No, AMD wasn't competitive from about 2006-2017.
          That's debatable, but before 2006 the Athlon 64 was king, as was the Athlon XP.
          The reason x86 won is simply that it had the biggest market and Intel had an incentive and the resources to sell ever-faster CPUs, year after year. The market was big enough for them to overcome most of the disadvantages of x86, at least for use in applications in laptops and above. Below that, the inherent deficiency of x86 hurt them too badly on perf/W and perf/area.
          Yes, but both AMD and Intel would out-engineer everyone, despite the inherent deficiencies of x86. Would anyone use a POWER-based machine over AMD's or Intel's? Of course not, because AMD and Intel are clearly superior.
          It's basically the same story with Apple and iPhone SoCs. The iPhone market was big enough and had high enough margins for them to have the means to do it.
          The reality is that ARM makes it easy for anyone to make an SoC. ANYONE CAN MAKE AN ARM SOC. If you look at how many companies have made them, it's not hard to see why Apple went its own way. Also, a cell phone's performance doesn't matter: nobody knows how fast their cell phone is, myself included, and I'm a tech enthusiast.
          And Steve Jobs supplied the vision and ambition at a stage most others probably wouldn't have.
          You wouldn't suck Steve Jobs' dick if you knew his history. This was a man who, instead of getting proper treatment that might have put his cancer into remission, ate fruit. He just ate fruit. I was using smartphones long before the iPhone was even a concept; Symbian and Windows Mobile are what I used until Android was released. The only thing the iPhone did was improve on them with a UI that didn't depend on a stylus, and a capacitive touchscreen instead of a resistive one.
          LOL, when did ARM file for bankruptcy?
          LOL, when SoftBank, the company that owns them, did.

          The more you write, the more you expose your complete ignorance. ARM is continuing to develop and refine their designs in many key markets. I could post links, but I take it you're not one of these "evidence-based" types. You see what you want, regardless of where the facts lead.
          Do you guys live under a rock or something? Nvidia nearly bought them out, but the UK government stopped it over conflict-of-interest and foreign-ownership concerns. So far I've proven you wrong, just like qarium, who thinks WebGPU is a thing.
          They sort of compete in the PC market, since you can run Windows and Linux VMs on their computers and some people indeed buy them for that.
          They sort of do, but if Apple had 50% market share, their anti-consumer behavior would be a serious concern. Running Windows and Linux in a VM isn't a healthy market. Apple makes its own chips for its own computers for its own software; nobody else in the industry does that. They are an oligopoly.
          In the phone market, I'd agree that most iPhone users are going to buy an iPhone pretty much no matter what. Mac has a lot of users like that, but Apple is clearly looking to grow beyond its core userbase, because when you're a $Trillion company, investors want to see you sustaining growth and they can't continue to do that on the backs of the same users.
          Apple wants to grow without losing their anti-consumer practices. A lot of people, including myself, like choice when buying or building a PC.
          I've heard the opposite. Intel's threat of iGPUs forced AMD to buy ATI.
          It goes back and forth. Intel has always had integrated graphics and its own chipsets; AMD had neither (it did have chipsets, but they weren't good). AMD did try to buy Nvidia, but that obviously didn't work, so it settled on ATI. Once AMD had Ryzen APUs, Intel knew it had to make better graphics. Nvidia also pushed Intel toward graphics, since a lot of Intel's server market was being lost to Nvidia's GPUs.
          LOL what?
          It's true, look it up.


          Intel can & is losing market share to ARM. You think that's not an incentive?
          Not really, no. Intel did try to get into the mobile market with x86; I've owned an Asus Zenfone that used an Intel x86 CPU, and it worked fine. But that was Intel trying to expand its market share, not keep its current share. Losing Apple was a huge loss for Intel, but realistically Intel is having trouble supplying enough chips. This might have been one of the many reasons Apple dumped Intel, as Intel at the time refused to use anyone but its own facilities to make its chips. Intel doesn't seem to have a problem using TSMC now, as it paid for a shit ton of 3 nm manufacturing time.
          Okay, I'll stop trying to confuse you with facts, as you clearly already know what you want to believe. I guess when ARM finally surpasses x86, you will simply reach for conspiracy theories to explain why.
          Come back to me next year when AMD releases its Zen 4 + RDNA3 laptop chips, and let me know if I was right. You clearly think ARM is going to the moon, while I don't. Let's see who's right.



          • You can rationalize M1/M2 as long as you want - that won't change the empirical evidence that in some MT-heavy workloads (using cross-platform native apps) M1/M2 is either comparable to x86 or has even slightly worse perf/watt, as shown in that Hardware Unboxed video and in another, similar video of theirs from some time ago. A metric ton of copium won't change this LOL

            Also, the fact that ARM has sued Qualcomm regarding NUVIA won't help much with the whole "ARM takeover / ARM + Windows" agenda for high-performance SKUs; you'll get those mobile-phone SoCs from MediaTek, of course LOL



            • Originally posted by drakonas777 View Post
              You can rationalize M1/M2 as long as you want - that won't change the empirical evidence that in some MT-heavy workloads (using cross-platform native apps) M1/M2 is either comparable to x86 or has even slightly worse perf/watt, as shown in that Hardware Unboxed video and in another, similar video of theirs from some time ago.
              How do we know the apps are comparably optimized?

              Also, keep in mind that even Apple's M2 still has just 128-bit NEON. The next generation will surely be ARMv9, with SVE2. My guess is probably 256-bit, for reasons similar to AMD's AVX-512 approach.
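
              To make the difference concrete, here's a minimal, untested C sketch (my illustration, not something from the thread) contrasting fixed-width NEON with vector-length-agnostic SVE intrinsics. The intrinsic names are standard ARM ACLE; the compile flags (e.g. -march=armv8-a+sve) are assumptions about your toolchain.

              /* Hedged sketch: the same float-add loop written twice. */
              #include <arm_neon.h>
              #include <arm_sve.h>
              #include <stddef.h>
              #include <stdint.h>

              /* NEON: hard-coded to 4 floats (128 bits) per iteration;
                 tail elements need scalar cleanup (omitted here). */
              void add_neon(float *dst, const float *a, const float *b, size_t n) {
                  for (size_t i = 0; i + 4 <= n; i += 4) {
                      float32x4_t va = vld1q_f32(a + i);
                      float32x4_t vb = vld1q_f32(b + i);
                      vst1q_f32(dst + i, vaddq_f32(va, vb));
                  }
              }

              /* SVE: svcntw() reports how many 32-bit lanes the hardware has,
                 and the predicate covers the tail, so the same binary runs on
                 128-bit or 256-bit implementations unchanged. */
              void add_sve(float *dst, const float *a, const float *b, size_t n) {
                  for (size_t i = 0; i < n; i += svcntw()) {
                      svbool_t pg = svwhilelt_b32((uint64_t)i, (uint64_t)n);
                      svfloat32_t va = svld1_f32(pg, a + i);
                      svfloat32_t vb = svld1_f32(pg, b + i);
                      svst1_f32(pg, dst + i, svadd_f32_x(pg, va, vb));
                  }
              }

              That's why a 256-bit SVE2 implementation wouldn't force a software rewrite the way a NEON width change would.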

              Originally posted by drakonas777 View Post
              Also, the fact that ARM has sued Qualcomm regarding NUVIA won't help much with the whole "ARM takeover / ARM + Windows" agenda
              That will get sorted out. It'll just cost Qualcomm some more $. It's in everyone's interest to find a resolution. The only question is at what price.



              • How do we know the exact amount of energy x86 spends on decoding? Some of you declare x86 to be legacy garbage (which is a defensible position regarding ISA design), but can you be more specific? Can you give a number instead of some religious-like belief that ARM is so much better? Because right now it's basically an empty discussion, comparing different products from different ecosystems with different SoC/system designs, software, and optimizations.

                At least some whitepaper with theoretical calculations/models made by actual CPU architects would be nice. Because right now it feels like some of you believe ARM is multiple times more efficient due to the better ISA, when in reality it may well be that ARM is 5-15% more efficient or something like that (given that all constraints/variables are the same or very similar, with only the ISA/frontend/decoding part differing) LOL
                Last edited by drakonas777; 04 September 2022, 06:28 AM.



                • Originally posted by drakonas777 View Post
                  Can you give a number instead of some religious-like belief that ARM is so much better?
                  There are differences, and reasons to think it might be better. However, only CPU architects at Intel and AMD know the real impact of these things.

                  Originally posted by drakonas777 View Post
                  At least some whitepaper with theoretical calculations/model made by actual CPU architects would be nice.
                  You're not going to get that in a whitepaper. You can search the academic literature for some analysis, if you want. Perhaps there have been some attempts to measure or reverse-engineer this stuff. However, what Intel and AMD do is highly proprietary and secret.

                  Probably the best you can find is an area estimate of the front end, although I'm not sure how easy that would be, since it probably won't all be a neat, compact block in the floor plan.
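
                  One thing you can measure yourself is package-level energy, which at least puts numbers on perf/W comparisons. A rough, untested C sketch using Linux's RAPL powercap interface (it assumes an Intel system exposing /sys/class/powercap/intel-rapl:0 and permission to read it, and it measures the whole package, not the decoders in isolation):

                  #include <stdio.h>
                  #include <unistd.h>

                  /* Read the cumulative package energy counter, in microjoules. */
                  static long long read_energy_uj(void) {
                      FILE *f = fopen("/sys/class/powercap/intel-rapl:0/energy_uj", "r");
                      long long uj = -1;
                      if (f) { fscanf(f, "%lld", &uj); fclose(f); }
                      return uj;
                  }

                  int main(void) {
                      long long before = read_energy_uj();
                      sleep(10);  /* run the workload of interest here instead */
                      long long after = read_energy_uj();
                      if (before >= 0 && after > before)  /* the counter wraps; ignored here */
                          printf("package energy: %.3f J over 10 s\n", (after - before) / 1e6);
                      return 0;
                  }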
                  Last edited by coder; 04 September 2022, 09:20 AM.



                  • Originally posted by coder View Post
                    Not sure what you mean by "OS APIs".
                    OpenGL and Vulkan. Maybe WebGPU, but I have never seen any remotely taxing OS 3D game on it.
                    I think Metal is what most iOS games use.
                    Do iOS games run on M1/M2 Macs? Are there any open-source iOS games?



                    • Originally posted by Anux View Post
                      OpenGL and Vulkan.
                      Then how is Metal not an "OS API"? I get why WebGPU isn't, because that's browser-based. However, Metal is a native GPU API on MacOS and iOS, whereas Vulkan only exists as an emulation layer running atop Metal (see: MoltenVK).
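
                      As a minimal, untested illustration of that layering (my sketch, not MoltenVK documentation): with recent Vulkan SDKs on macOS, an application has to opt in to enumerating the non-conformant MoltenVK device through the VK_KHR_portability_enumeration extension. Treat the exact flag and loader behavior as assumptions about your SDK version.

                      #include <vulkan/vulkan.h>
                      #include <stdio.h>

                      int main(void) {
                          /* Ask the loader to include "portability" (non-conformant)
                             implementations such as MoltenVK-on-Metal. */
                          const char *exts[] = { VK_KHR_PORTABILITY_ENUMERATION_EXTENSION_NAME };
                          VkApplicationInfo app = {
                              .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
                              .apiVersion = VK_API_VERSION_1_1,
                          };
                          VkInstanceCreateInfo info = {
                              .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
                              .flags = VK_INSTANCE_CREATE_ENUMERATE_PORTABILITY_BIT_KHR,
                              .pApplicationInfo = &app,
                              .enabledExtensionCount = 1,
                              .ppEnabledExtensionNames = exts,
                          };
                          VkInstance inst;
                          VkResult r = vkCreateInstance(&info, NULL, &inst);
                          printf("vkCreateInstance: %d\n", r);  /* 0 == VK_SUCCESS */
                          if (r == VK_SUCCESS) vkDestroyInstance(inst, NULL);
                          return 0;
                      }

                      From there, every Vulkan call the app makes is translated by MoltenVK into Metal calls underneath.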

                      Originally posted by Anux View Post
                      Do iOS games run on M1/M2 Macs?
                      Not a question I can answer, but it seems like they probably should.

