Linux Prepares For Next-Gen AMD CPUs With Up To 12 CCDs


  • #21
    Now if we could just get better than dual and quad core processors in laptops and chromebooks and 16GB minimum RAM nowadays. Still see a lot of entry laptops/chromebooks with dual core procs and 4GB of RAM.

    • #22
      Originally posted by smitty3268 View Post

      AMD's plan in the short term is to get 96-core server chips out next year to compete with Xeon. I don't think it's difficult to understand why they'd want to do so. AMD still has a massive advantage over Intel in this market due to their better power efficiency.
      At the same time, they're getting new 128-core server chips using strictly e-cores in order to compete with the ARM server chips.
      Then in 2023 they'll come out with their own big.little desktop system.

      How all this will compete against Intel remains to be seen, and AMD will remain a massive underdog for as long as their income is dwarfed by Intel, but they seem to be fairly well positioned for at least the next 2 years. Beyond that I don't think anyone can reasonably foresee.
      Epyc still operates in the same 180w - 300w TDP bracket as the Xeons. Anyone forking out the money for a 280w TDP server processor is hardly going to bother about electricity consumption or power efficiency.

      And the high-performance mainstream segment really needs some shaking up. A 32C64T mainstream processor with > 64GB of non-ECC memory works great as a dedicated headless build machine for personal use. There is no reason to be limited to HEDT and server hardware for such configurations.

      • #23
        Originally posted by TemplarGR View Post
        Not that exciting. So apparently AMD will target for more performance-heavy cores. But this is inefficient for many reasons.
        What smitty3268 said. Source:

        https://www.anandtech.com/show/17055...-and-128-cores


        Originally posted by TemplarGR View Post
        Use cases that can scale with many cores are typically better off with multiple efficient cores than with fewer more performing ones, and applications that don't scale and instead benefit from few very powerful cores, don't need too many performance cores.
        It's a bit like what ARM is doing with their V-series and N-series. The former are bigger cores, designed for compute-intensive workloads and scale to fewer cores per CPU. The latter are more balanced perf/efficiency cores and scale up to higher core counts.

        https://www.anandtech.com/show/16640...us-cmn700-mesh

        • #24
          Originally posted by Sonadow View Post
          cmake, make and ninja still do not know how to automatically scale jobs according to the number of cpu threads available and always default to building on a single thread unless -j or --parallel is passed to the build;
          Last I checked, cmake is still serial. However, it only gets run when you actually touch one of the CMakeLists.txt files. Unless your buildsystem really sucks, it's fast enough not to matter.

          ninja has defaulted to the number of hardware threads on your machine for years now. I don't know how far back that goes, or whether it's some kind of build-time option that maybe your distro didn't enable, but the one in my distro has always auto-parallelized by default.

          make is the one where you need to use an incantation like make -j$(nproc), which you can put in an alias. Of course, my knowledge of GNU Make is a couple years old.

          As you seem to allude, one can naively run cmake --build . --parallel to auto-parallelize the second stage of a CMake build.
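A minimal sketch of the three invocations discussed above, assuming GNU coreutils' nproc is available (the build commands are printed rather than run, since there is no project here to build):

```shell
#!/bin/sh
# Parallel-build incantations from the discussion above (illustrative).
JOBS="$(nproc)"

echo "make -j$JOBS"                      # GNU Make needs -j explicitly
echo "ninja"                             # ninja auto-parallelizes by default
echo "cmake --build . --parallel $JOBS"  # CMake's generic build driver
```

The alias trick mentioned above would be something like alias pmake='make -j"$(nproc)"' in your shell rc file.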

          • #25
            Originally posted by Sonadow View Post
            Anyone forking out the money for a 280w TDP server processor is hardly going to bother about electricity consumption or power efficiency.
            If you're burning that much power most of the time, and having to burn even more power on air conditioning, then it does add up to a nontrivial amount. If we take an electricity billing rate of $0.20 per kWh, assume the mean CPU utilization is about 60% of its TDP, assume 90% efficiency from the PSU, assume 90% efficiency from the VRM, assume a service life of just 3 years, and assume a PUE of 1.5, then you get $1636 as the cost of running the CPU alone. That's not including RAM, storage, networking, or fans.

            And I think the electricity rate paid is often higher.
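The arithmetic above can be reproduced with a small script; the inputs are the assumptions stated in the post, not measured data:

```python
# Rough lifetime electricity cost of a server CPU, using the
# assumptions from the post above (illustrative, not measured).
def cpu_power_cost(tdp_w, util=0.6, psu_eff=0.9, vrm_eff=0.9,
                   pue=1.5, years=3, rate_per_kwh=0.20):
    # Mean power drawn from the wall for the CPU alone:
    # TDP * utilization, grossed up for PSU/VRM losses and PUE
    wall_w = tdp_w * util / (psu_eff * vrm_eff) * pue
    hours = years * 365.25 * 24
    return wall_w * hours / 1000 * rate_per_kwh

print(round(cpu_power_cost(280)))  # → 1636
```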

            Originally posted by Sonadow View Post
            A 32C64T mainstream processor with > 64GB of non-ECC memory works great as a dedicated headless build machine for personal use. There is no reason to be limited to HEDT and server hardware for such configurations.
            That's a lot of building you must do!

            • #26
              Originally posted by Sonadow View Post

              Epyc still operates in the same 180w - 300w TDP bracket as the Xeons. Anyone forking out the money for a 280w TDP server processor is hardly going to bother about electricity consumption or power efficiency.
              It's actually the exact opposite. Power efficiency is king in most of the big data centers.

              It's not about the raw price of the power, it's about the actual power and cooling systems installed in their buildings. That's the limiting factor: the more efficient the processors are, the more of them can be packed into a single datacenter, rather than having to build a dozen datacenters across hundreds of miles.

              It's the workstation and HEDT markets that don't care about power use.
              Last edited by smitty3268; 24 November 2021, 02:33 AM.

              • #27
                Originally posted by coder View Post
                That's a lot of building you must do!
                Simply building Chromium as-is, following the Chromium build instructions with no modifications or config changes (I'm not a developer), takes more than an hour on a 24C48T Xeon.

                Building a new mainline kernel with typical distribution .config and some added settings requires at least 10 minutes on a 32C64T Threadripper, even when built in a ramdisk.

                And unless replacing SSDs is fun, building smaller stuff like Node, Python, Rust, Firefox and LibreOffice entirely in ramdisks saves a heck of a lot of read/write cycles.
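One way to do the ramdisk trick for a kernel build is an out-of-tree build directory on tmpfs; a sketch, assuming a Linux system with tmpfs at /dev/shm and a kernel checkout at ~/src/linux (both paths hypothetical):

```shell
#!/bin/sh
# Keep all object-file writes in RAM via an out-of-tree kernel
# build. /dev/shm is a tmpfs on typical Linux systems; ~/src/linux
# is a hypothetical checkout location. The actual make invocations
# are commented out since they need a real kernel tree.
BUILD=/dev/shm/kernel-build
mkdir -p "$BUILD"
# make -C "$HOME/src/linux" O="$BUILD" defconfig
# make -C "$HOME/src/linux" O="$BUILD" -j"$(nproc)"
echo "objects will land in $BUILD"
```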

                Still think 32C64T and > 64GB memory is a lot for a home-use build machine?
                Last edited by Sonadow; 24 November 2021, 02:50 AM.

                • #28
                  Originally posted by Sonadow View Post

                  Epyc still operates in the same 180w - 300w TDP bracket as the Xeons. Anyone forking out the money for a 280w TDP server processor is hardly going to bother about electricity consumption or power efficiency.

                  And the high-performance mainstream segment really needs some shaking up. A 32C64T mainstream processor with > 64GB of non-ECC memory works great as a dedicated headless build machine for personal use. There is no reason to be limited to HEDT and server hardware for such configurations.

                  Epyc might operate in the same TDP bracket, as that is determined by the existing coolers, but the great AMD advantage is that at any given power limit the performance is much greater than that of any Xeon operating at the same power limit, including the Ice Lake Server Xeons.

                  Now, with Alder Lake, Intel has regained the first position in single-thread performance, and they will keep it until AMD is able to bring Zen 4 to market, which is expected only late next year.

                  Unfortunately for Intel, even Alder Lake has much worse performance than AMD at equal power consumption, which is not a good sign for the upcoming Sapphire Rapids Xeons.

                  Maybe Alder Lake's energy efficiency is so bad because it is overclocked, in which case Sapphire Rapids might be more efficient at lower clock frequencies and could be competitive against AMD. That remains to be seen.

                  Because the power consumption of a rack is limited, anyone who fills it with server CPUs certainly cares how much performance can be obtained in that power envelope, probably even more than how much money was forked out for the CPUs.
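The rack-envelope point can be illustrated with made-up but plausible numbers; the rack budget, per-node overhead, and the perf/W ratio are all hypothetical:

```python
# How perf/W translates into rack-level throughput, with
# hypothetical numbers: a 15 kW rack power budget, dual-socket
# nodes with 280 W CPUs, and 150 W of non-CPU power per node.
RACK_BUDGET_W = 15_000
NODE_W = 2 * 280 + 150           # two CPUs plus platform overhead

nodes = RACK_BUDGET_W // NODE_W  # whole servers that fit the budget
print(nodes)                     # → 21

# With node count pinned by power rather than purchase price, a
# CPU with, say, 1.3x the perf/W delivers 1.3x the total rack
# throughput in the same envelope.
```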

                  • #29
                    Originally posted by kylew77 View Post
                    Now if we could just get better than dual and quad core processors in laptops and chromebooks and 16GB minimum RAM nowadays. Still see a lot of entry laptops/chromebooks with dual core procs and 4GB of RAM.
                    It's not as bad as it used to be, but you can still find a $300 device with 4 GB of RAM. Compare to RPi 4B with 8 GB at $75. 4 GB would not be a problem if you can upgrade the RAM, which is rare for entry-level laptops and non-existent for Chromebooks. Even the soldered + 1 DIMM compromise is an improvement.

                    Quad-cores are fine for laptops. Put the Steam Deck's Van Gogh quad-core in a laptop, and you aren't going to have a bad time. For Intel's part, their 1+4 and 2+8 Alder Lake mobile parts can cover entry-level and low-TDP products.

                    • #30
                      All I hope is that their new consumer Ryzen has about 40 PCIe lanes. I just want a low-cost virtualization station with two GPUs and one APU.

                      But well... Won't happen
