
AMD EPYC Rome Still Conquering Cascadelake Even Without Mitigations


  • #11
    Now that AMD no longer has to finance its own fabs, I would say they are in a better position than during previous CPU generations.

    I am not worried about them getting more market share than Intel; I just want them to gain enough market share to keep financing their R&D.

    Intel continues to finance the R&D of its process nodes as well as its CPU designs. This close coupling served them well as long as they could hit their dates and volumes.

    In this case it caught up with them and gave AMD enough time to build momentum.

    If Intel's 10nm node had met its dates and volumes, the conversation would be a little different today.

    This is stuff they will be reading about in business schools in the next 5-10 years.


    • #12
      Originally posted by valici View Post

      I don't think that wafer capacity is an issue here. If Google/Amazon/Microsoft want to buy in big quantities, AMD can plan ahead with TSMC.
      Let's not forget that AMD sells a lot of Ryzen chips (far more than server chips), and they can always shift that capacity to EPYC chips since both are built on 7nm wafers.
      The issue is earning the confidence of the big server players, but I don't see why they can't. Now it's just a waiting game.
      It isn't just that both are 7nm; they use the same chiplets, so AMD could divert units from one product line to the other. The gating factor could be the I/O dies, however, since those are unique to each product line. Also, given how the Rome processors are set up, AMD may even be able to use dies the consumer side can't: there are Rome SKUs with as few as two cores active per chiplet. As long as two cores on the chiplet can meet the base/boost clocks and the full cache is available, it looks like they are usable.


      • #13

        Originally posted by phoronix View Post
        it still requires the __user pointer sanitization,


        • #14
          Originally posted by andyprough View Post
          One big problem in the past for AMD was that they didn't have the wafer fab capacity to compete directly with Intel. Is there any reason to believe they have closed that gap at all? I don't keep up with the wafer fab press articles, but I'm pretty sure Intel has continued to build and tool new fabs, and their capacity is probably as high as it's ever been. AMD can have the best chips ever, but if they can only produce 10%-20% as many chips as Intel can, they won't be knocking Intel off its perch anytime soon. Correct me if I'm wrong.
          I think it depends on what your expectations are. AMD isn't going to start outselling Intel anytime soon; they don't have the capacity for that. But even just grabbing 20% of the server market would be huge and provide them billions for future R&D. Considering they were at ~0% not long ago, that should be considered a spectacular success.

          Intel has helped by failing miserably at getting their 10nm process fabs going, and there have been a lot of articles about how Intel shortages are holding the market back for the last year or so.


          • #15
            Thanks Michael for including the single-threaded benchmark as well. It's great to see that while you sacrifice some single-core speed for the highly concurrent performance of Zen 2 versus a Xeon Platinum, it's a very small penalty to pay.


            • #16
              Originally posted by ebrandsberg View Post

              This seems like a comment intended to spread fear about what AMD is or is not doing to safeguard its chips. If you are saying AMD is not fixing their chips in a timely manner, can you provide an example? In many cases, AMD hasn't needed to fix something because they simply weren't vulnerable in the first place.
              He is a senior member? I can only assume it's a joke... Anyway, AMD could probably introduce the same security faults in their chips to maintain compatibility.


              • #17
                I don't know if I'm interpreting the benchmarks correctly, but it feels like a lot of them are really measuring latency, not throughput, even when they're labeled as throughput. What I mean is, every benchmark is run by throwing the entire system at it, effectively trusting that the software parallelizes perfectly. While that may be a fair assumption for client software like Photoshop or game benchmarks, I feel it's a little limiting for server workloads.

                I mean, if a server is running some SQL-based DB program, it may not dedicate all of its processing power to a single query at any given time. It may be running two or more independent queries in parallel. Do the existing benchmarks capture this type of common server scenario? If not, have you considered running, say, two benchmarks in parallel on a 64-core Epyc 7002 versus two benchmarks in series on a 28-core Xeon Platinum? (Obviously this may not be the optimal throughput configuration, but it's at least a step in that direction.) This is probably more relevant to performance per TCO?
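                The parallel-vs-serial comparison proposed above can be sketched as a tiny measurement harness. This is a minimal, hypothetical sketch, not anything from the article: `time.sleep` stands in for a fixed-duration benchmark run, and for a real CPU-bound benchmark you would launch separate processes (ideally pinned to cores) rather than threads. The point is only the harness shape: measure total wall time for N runs back to back versus N runs concurrently, then compare aggregate throughput.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_benchmark(duration_s):
    """Stand-in for one benchmark run; sleep models a fixed-time workload."""
    start = time.perf_counter()
    time.sleep(duration_s)
    return time.perf_counter() - start

def serial_wall_time(n_runs, duration_s):
    """Wall time for n_runs executed one after another (the 28-core scenario)."""
    start = time.perf_counter()
    for _ in range(n_runs):
        run_benchmark(duration_s)
    return time.perf_counter() - start

def parallel_wall_time(n_runs, duration_s):
    """Wall time for n_runs executed concurrently (the 64-core scenario)."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_runs) as pool:
        list(pool.map(run_benchmark, [duration_s] * n_runs))
    return time.perf_counter() - start

if __name__ == "__main__":
    serial = serial_wall_time(2, 0.2)
    parallel = parallel_wall_time(2, 0.2)
    # Throughput = completed runs / wall time; parallel should win
    # whenever the machine has spare capacity for the second run.
    print(f"serial: {serial:.2f}s, parallel: {parallel:.2f}s")
```

                With real benchmark binaries the interesting number is runs completed per second under each configuration, which maps more directly onto performance per TCO than a single all-cores run does.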