Linux Prepares For Next-Gen AMD CPUs With Up To 12 CCDs


  • #11
    Originally posted by avem View Post
    Family 19h is squarely Zen 3, Michael

    I guess Zen 4 will be 20h or even 21h considering AMD has yet to release Zen 3 3D/Ryzen 6000.
    Not necessarily, just as Family 17h was Zen / Zen+ / Zen 2.

    At least with Milan-X, from what I tried on Azure, it's the same model ID as existing Zen 3.
    Michael Larabel
    http://www.michaellarabel.com/

    Comment


    • #12
      Getting old... I think I will always sight-read "CCD" as Charge-Coupled Device

      Comment


      • #13
        Not that exciting. So apparently AMD will target more performance-heavy cores. But this is inefficient for many reasons. Use cases that can scale across many cores are typically better off with many efficient cores than with fewer, higher-performing ones, and applications that don't scale, and instead benefit from a few very powerful cores, don't need many performance cores either. Adding more performance cores is not as effective as Intel's strategy with Alder Lake and especially its successors. It is better to keep the performance cores limited in number and just add many efficient cores. Intel gets 4 efficiency cores for every performance core it replaces on the die, and it is not like the E-cores are that weak by themselves. This is an excellent approach and, once schedulers mature, it will dominate the market, I believe. I think we will also see Hyper-Threading removed from Intel P-cores in the future, as it makes no sense when you have so many E-cores. This will simplify the P-cores and allow even more performance out of them.

        So what does AMD expect from this move to 12 CCDs? To move their product line to more cores per price point? To sell 12-core CPUs to the (mainstream) desktop? This won't make much difference, honestly. They are going to keep sacrificing per-core performance for a few more cores. It is not a bad improvement, but Intel is going to eat their lunch. I think we are witnessing a repeat of the AMD64 days. AMD did dominate the market for a few years on the back of AMD64, but Intel came back with Core/Core 2 and AMD had nothing but dual-core AMD64s to answer with. I think we are at the stage of the original Core Duo (= Alder Lake). Alder Lake's successor is going to be the Core 2 moment (and a reminder here: that successor is what Zen 4 is going to face on the market, since Zen 4 won't come out any time soon).

        Comment


        • #14
          Originally posted by TemplarGR View Post
          Not that exciting. So apparently AMD will target for more performance-heavy cores. But this is inefficient for many reasons. Use cases that can scale with many cores are typically better off with multiple efficient cores than with fewer more performing ones, and applications that don't scale and instead benefit from few very powerful cores, don't need too many performance cores. Adding more performance cores is not as effective as Intel's strategy with Alder Lake and especially its successors. It is better to keep performance cores limited in number and just add multiple efficient cores. Intel gets 4 efficiency cores for every performance core they replace on the die, and it is not like the E-cores are that weak by themselves. This is an excellent approach and when schedulers mature will dominate the market i believe. I think we will also see Hyperthreading removed from Intel p-cores in the future, as it makes no sense when you have so many e-cores. This will simplify the p-cores and allow even more performance out of them.

          So what AMD expects from this move to 12 CCDs? To move their product line to more cores per price point? To sell 12 core cpus to the (mainstream) desktop? This won't make much difference, honestly. They are going to keep sacrificing per-core performance for some more cores. It is not a bad improvement but Intel is going to eat their lunch. I think we are witnessing a repeat of the AMD64 days. AMD did dominate the market for a few years based on AMD64, but Intel came back with Core/Core 2 and AMD had nothing but just dual core AMD64s to compensate. I think we are at the stage of the original Core Duo (=Alder Lake). Alder Lake's successor is going to be the Core 2 moment (and a reminder here that that successor is what Zen 4 is going to face on the market since it won't come out any time soon).
          That, and the fact that much software still sucks at proper multithreading, makes the multicore race for anything above 8C16T practically pointless for general desktop computing.

          LibreOffice is still stuck using only one CPU thread for everything except Calc, while MS Office has had proper multithreading and multicore support for who knows how long. Running LO on anything slower than an i3-grade processor is outright frustrating, to the point of being just barely usable.

          cmake, make and ninja still do not know how to automatically scale jobs according to the number of CPU threads available and default to building on a single thread unless -j or --parallel is passed to the build; this never happens when building .sln projects in Visual Studio, where the build is always spread across all available threads by default. Rustc claims to be multithreaded, and yet it only occupies one CPU thread when invoked in a Firefox compile.
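          For reference, a minimal sketch of the explicit flags being discussed, assuming GNU coreutils' nproc is available (the build directory name is a placeholder):

```shell
# One job per available CPU thread.
jobs=$(nproc)
echo "building with $jobs parallel jobs"

# The explicit per-tool invocations then look like:
#   make -j"$jobs"
#   cmake --build build --parallel "$jobs"
#   ninja -C build -j"$jobs"
```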
          Last edited by Sonadow; 23 November 2021, 09:56 PM.

          Comment


          • #15
            Originally posted by avem View Post

            There once was a version which reported voltages and wattages, and it did so incorrectly in too many cases, so the feature has long been removed. Zenpower and its primary fork (zenpower3) are still there, but they are not really maintained either. As for core temperatures, I don't remember any issues.

            If you're concerned about low level hw monitoring, Windows/HWiNFO64 is the only option.
            I just remember the meme "AMD has no drivers" when I see things like that.

            Comment


            • #16
              Y'all can't see the forest for the trees. All these "clouds" need lots of high-performance CPU threads. It doesn't matter if LibreOffice, Call of Duty, compilers, or anything else isn't optimized for 128 cores. What matters to them is being able to sell off some cores for some time and knowing that any task on any thread performs just as well. The end-user running poorly optimized software isn't the concern of the cloud providers. It's not their fault you didn't set up CMake or used an inferior solution during premium, paid-for time...shit, they want you to run unoptimized solutions so you have to pay for extended runtime.

              On the desktop side, AMD can start selling better APUs, since it's not like most non-workstation desktops need more than 6C12T. I'd go with 8C16T to mirror the game consoles. Instead of giving desktops more compute cores, they can put more graphics cores where the removed 120 compute cores would otherwise be.

              Or do something similar to Intel: an 8C16T high-performance CCD, an 8C16T low-performance CCD, and more graphics cores where the removed 112 compute cores would otherwise be.

              Comment


              • #17
                Originally posted by TemplarGR View Post
                So what AMD expects from this move to 12 CCDs?
                AMD's plan in the short term is to get 96-core server chips out next year to compete with Xeon. I don't think it's difficult to understand why they'd want to do so. AMD still has a massive advantage over Intel in this market due to their better power efficiency.
                At the same time, they're getting new 128-core server chips using strictly e-cores in order to compete with the ARM server chips.
                Then in 2023 they'll come out with their own big.little desktop system.

                How all this will compete against Intel remains to be seen, and AMD will remain a massive underdog for as long as their income is dwarfed by Intel, but they seem to be fairly well positioned for at least the next 2 years. Beyond that I don't think anyone can reasonably foresee.
                Last edited by smitty3268; 23 November 2021, 10:57 PM.

                Comment


                • #18
                  Originally posted by Sonadow View Post

                  cmake, make and ninja still do not know how to automatically scale jobs according to the number of cpu threads available and always default to building on a single thread unless -j or --parallel is passed to the build; this never happens when building sln projects in Visual Studio where the build is always spread across all available threads by default. Rustc claims to be multithreaded, and yet it only occupies one cpu thread when invoked in a firefox compile.
                  Learn to use your tools. Make -j$(nproc) worksforme. Been using that for 15 years.

                  Comment


                  • #19
                    Originally posted by Sonadow View Post
                    cmake, make and ninja still do not know how to automatically scale jobs according to the number of cpu threads available and always default to building on a single thread unless -j or --parallel is passed to the build; this never happens when building sln projects in Visual Studio where the build is always spread across all available threads by default. Rustc claims to be multithreaded, and yet it only occupies one cpu thread when invoked in a firefox compile.
                    Put a wrapper script called "make" (for passing -j$(nproc) to /usr/bin/make) in /usr/local/bin or $HOME/bin, and make sure the directory is before /usr/bin in PATH:

                    Code:
                    #!/bin/bash
                    if [ -z "$MAKEFLAGS" ]; then
                        export MAKEFLAGS=-j$(nproc)
                    fi
                    exec /usr/bin/make "$@"
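
                    One way to install such a wrapper, assuming $HOME/bin is the chosen directory (a sketch, not the only possible layout):

```shell
mkdir -p "$HOME/bin"
cat > "$HOME/bin/make" <<'EOF'
#!/bin/bash
# Default to one job per CPU thread unless the caller already set MAKEFLAGS.
if [ -z "$MAKEFLAGS" ]; then
    export MAKEFLAGS=-j$(nproc)
fi
exec /usr/bin/make "$@"
EOF
chmod +x "$HOME/bin/make"

# $HOME/bin must come before /usr/bin for the wrapper to win the lookup:
export PATH="$HOME/bin:$PATH"
```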

                    Comment


                    • #20
                      Originally posted by TemplarGR View Post
                      Intel gets 4 efficiency cores for every performance core they replace on the die, and it is not like the E-cores are that weak by themselves. This is an excellent approach and when schedulers mature will dominate the market i believe. I think we will also see Hyperthreading removed from Intel p-cores in the future, as it makes no sense when you have so many e-cores. This will simplify the p-cores and allow even more performance out of them.

                      Alder Lake's successor is going to be the Core 2 moment (and a reminder here that that successor is what Zen 4 is going to face on the market since it won't come out any time soon).
                      Intel's cores are bloated in the first place, to the point where a half-sized Zen 4c could be similar in size to Intel's E-cores when comparing similar Intel/TSMC nodes.

                      Raptor Lake will be competing against at least a strong 16-core Zen 4 Raphael, and AMD could put out a 24-core Raphael (3 CPU chiplets) if it wants to win no matter what. Raptor Lake's P-cores will not be that much better than Alder Lake's, but AMD will have a significant IPC increase and models with 3D V-Cache to counter it.

                      Comment
