AMD Announces The Ryzen 8040 Series Mobile Processors With Better Ryzen AI

  • #11
    Judging by how fast previous generations of AMD mobile CPUs have made it into the hands of consumers, "available Q1 2024" means "purchasable Q1 3024".


    • #12
      Originally posted by kylew77 View Post
      With Strix point coming in 2024 why buy Hawk point today?
      Buy today if you need today.


      • #13
        I wonder how much Intel is still paying Dell in bribes to not use these chips in their Latitude business laptops, or even their Optiplex business desktops.
        Last edited by NeoMorpheus; 08 December 2023, 11:16 AM.


        • #14
          Originally posted by atmartens View Post
          Buy today if you need today.
          The corollary being "there's always something better, just around the corner".

          That's not to say that trying to time upgrades is futile, but it can easily be overdone.


          • #15
            Originally posted by uid313 View Post

            I haven't heard about this AMD Dragon Range but I really hope it is good. My impression is that both Intel and AMD have had power hungry CPUs that got outperformed by CPUs from Apple.
            Apple never outperformed anyone in raw performance; unplugged is a different story. But AMD's Dragon Range is actually pretty good unplugged too, though it really depends on the laptop. The Asus Zephyrus G14 with the Ryzen 9 7940HS performs the same unplugged, with multi-core dropping a bit.

            Max Tech, whose benchmarks I don't particularly trust, did test the M3 Pro/Max against the Core i9-13900H, and the M3 got its ass handed to it, especially by the RTX 4070, which nearly doubles its performance. Ignore the Geekbench scores; they are obviously in favor of Apple. Obviously the Core i9-13900H is not power efficient, but I can't find anyone who has tested the Ryzen 9 7940HS against the M3. There were tests done against the M2 Pro/Max, and it's nearly as efficient.

            Considering the M3 chips are reported to run hotter and actually spin the fans up to high speed, it seems the ARM hype is dead. If the Intel machine is faster than an M3, then I'm sure the Ryzen 9 7940HS will be as well. AMD's 8040-series chips should be a good deal better, considering their Dragon Range chips used more power at idle due to their chiplet design, and their new Zen 4c cores should also be a good deal more power efficient, assuming that's what the 8040s will come with. There's a reason why Mac sales have dropped 34% year over year.



            Last edited by Dukenukemx; 07 December 2023, 01:25 AM.


            • #16
              Originally posted by uid313 View Post
              Meh, whatever. 🤷

              I don't really care about any AI thing on the CPU,
              I do care. It would be great to improve the inference speed for large language models while being able to use the processor's main memory.


              • #17
                Does anybody know if the NPU has its own memory, or is it able to use the system's memory? If the latter, do we already know how fast the memory connection is?


                • #18
                  Originally posted by Dukenukemx View Post
                  AMD's Dragon Range had caught up to Apple's ARM in power efficiency
                  Uh, what? There still aren't any AMD laptops that match the M3's performance while also normalising for battery life. https://arstechnica.com/gadgets/2023...to-a-laptop/5/ and https://www.notebookcheck.net/Apple-....766789.0.html give a decent general overview, but you can always dig into the weeds and do your own research.

                  Especially when the M3 Pro/Max is unplugged, the performance gap is quite noticeable, and this is the primary use case of a laptop.
                  Last edited by mdedetrich; 07 December 2023, 04:17 AM.


                  • #19
                    Originally posted by oleid View Post
                    Does anybody know if the NPU has its own memory, or is it able to use the system's memory? If the latter, do we already know how fast the memory connection is?
                    Yeah, there's been a lot of detail published on Ryzen AI, if you go looking.

                    At some level, Ryzen AI is just VLIW DSP cores, shared SRAM, and DMA engines.

                    Source: https://chipsandcheese.com/2023/09/1...s-phoenix-soc/
                    (scroll down to section "XDNA AI Engine")


                    I think the concluding remarks of the XDNA section in the above page are particularly noteworthy:

                    "If applications take advantage of it, XDNA should let Phoenix handle AI workloads with better power efficiency than the GPU. Technically, the RDNA 3 iGPU can achieve higher BF16 throughput with WMMA instructions. However, doing so would likely require a lot more power than the more tightly targeted XDNA architecture."


                    In other words, XDNA or Ryzen AI is definitely about power efficiency, not absolute performance. This is consistent with recent remarks by David McAfee, Corporate Vice President and General Manager, Client Channel Business at AMD.


                    • #20
                      Originally posted by Dukenukemx View Post
                      Why an ARM or RISC-V CPU? What benefit could be had from those?
                      Oh, I can answer that. ARM has SVE (the Scalable Vector Extension) and RISC-V has its V vector extension (RVV). You could call this AVX done the right way: vector lengths aren't fixed, or at least at the coding level you don't have to think about them, because the hardware takes care of splitting the data into chunks matching the register size. ARM SVE registers can be implemented at 128 to 2048 bits, and the RISC-V vector extension spec allows up to 65536 bits per register. The "MD" in SIMD just gets bigger.

                      Not to mention that most ARM and RISC-V SoCs come with a TPU today. I have so many ARM/RISC-V SBCs here that include a hardware AI accelerator (TPU) IP core, and some just work out of the box.
                      Last edited by Akiko; 07 December 2023, 04:57 AM.
