AMD Announces The Radeon PRO W7800/W7900 Series


  • AMD Announces The Radeon PRO W7800/W7900 Series

    Phoronix: AMD Announces The Radeon PRO W7800/W7900 Series

    As the "world's first pro chiplet GPU", AMD today is announcing the Radeon PRO W7000 series as their first RDNA3-based professional offerings.


  • #2
    Does it support hardware virtualization, or is it still not expensive enough?

    Comment


    • #3
      Originally posted by darkbasic View Post
      Does it support hardware virtualization, or is it still not expensive enough?
      Isn't PCIe GPU virtualization a function of the motherboard?

      Comment


      • #4
        $3500+ ??

        Comment


        • #5
          Which parts of the professional market are they targeting with these? I thought CUDA still rules wide portions of the professional market. Also I wonder why they are targeting that market with RDNA3 and not the latest CDNA architecture.

          Comment


          • #6
            No proper ROCm support makes this and the RX 7900 a piece of crap.

            Users have been begging the ROCm team to support the RX 7900 for half a year and nothing has happened.

            Issue Type: Bug · Tensorflow Version: Tensorflow-rocm v2.11.0-3797-gfe65ef3bbcf 2.11.0 · rocm Version: 5.4.1 · Custom Code: Yes · OS Platform and Distribution: Archlinux, Kernel 6.1.1 · Python version: 3.10 · GPU mo...

            Comment


            • #7
              Originally posted by dimko View Post
              Isn't PCIe GPU virtualization a function of the motherboard?
              Motherboard chipset, firmware, and to a lesser degree the host OS support the various interfaces needed for direct virtualization pass-through, so I'm given to believe.
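
              A minimal sketch of checking that prerequisite, assuming a Linux host with sysfs mounted: if the firmware and kernel expose the IOMMU, sysfs lists the IOMMU groups that VFIO pass-through is built on.

              Code:
              # Sketch: list IOMMU groups from sysfs to see whether the platform
              # (chipset/firmware/kernel) exposes what VFIO pass-through needs.
              # Assumes a Linux host with sysfs mounted at /sys.
              from pathlib import Path

              groups_root = Path("/sys/kernel/iommu_groups")

              if not groups_root.is_dir() or not any(groups_root.iterdir()):
                  print("No IOMMU groups found: enable VT-d/AMD-Vi in firmware "
                        "and the IOMMU in the kernel (e.g. intel_iommu=on).")
              else:
                  for group in sorted(groups_root.iterdir(), key=lambda p: int(p.name)):
                      devices = sorted(d.name for d in (group / "devices").iterdir())
                      print(f"IOMMU group {group.name}: {', '.join(devices)}")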

              Originally posted by ms178 View Post
              Which parts of the professional market are they targeting with these? I thought CUDA still rules wide portions of the professional market. Also I wonder why they are targeting that market with RDNA3 and not the latest CDNA architecture.
              Nvidia has most of the GPU compute support sewn up. That said, there's blood in the water with the huge furor over supporting the next generations of LLM compute models. So while Nvidia is the gorilla incumbent, it's not necessarily a given they will remain so if another market player offers better performance and tools tailored for those market segments. Also, there's another compute model in the works that's kinda buried in the noise right now. High performance homomorphic encryption requires hardware that's currently not available commercially but supposedly on the drawing board. Whoever gets to that gold vein first is going to be the next hardware tech darling.

              Edit to add: What I'm getting at is that with language models there's now a break in tooling, where traditional HPC tools aren't entirely suited to software that uses neural processors. So there's an opportunity for Nvidia to drop the ball in this new market segment. Right now PCs don't normally come with a neural processor on the client side, but that won't be the case going forward. All Apple M-series processors have a neural processor, and future PCs will eventually have one of some kind on the CPU if not the GPU. That means developers need tools, and they need hardware to develop those tools and models. It also means the market isn't yet settled on a single vendor the way traditional HPC is with Nvidia. The same was true before CUDA existed: CUDA was a response to GPGPU work that was being done at the assembly level, mostly on Nvidia GPUs. Right now there's a similar opening in neural processors, and in a few years a shift in encryption computing models (which won't need quantum computing).
              Last edited by stormcrow; 13 April 2023, 01:03 PM.

              Comment


              • #8
                Originally posted by stormcrow View Post

                Motherboard chipset, firmware, and to a lesser degree the host OS support the various interfaces needed for direct virtualization pass-through, so I'm given to believe.
                I'm not talking about passthrough but GPU sharing across multiple guests.
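
                Sharing of that kind needs the card to expose SR-IOV virtual functions (what AMD's MxGPU builds on), which is easy to check from the host. A minimal sketch, assuming a Linux host with sysfs mounted:

                Code:
                # Sketch: scan PCI display-class devices and report whether they
                # expose SR-IOV virtual functions, the usual basis for sharing
                # one GPU across multiple guests. Assumes a Linux host.
                from pathlib import Path

                for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
                    pci_class = (dev / "class").read_text().strip()
                    if not pci_class.startswith("0x03"):  # 0x03xxxx = display controller
                        continue
                    vfs = dev / "sriov_totalvfs"
                    if vfs.exists():
                        print(f"{dev.name}: SR-IOV capable, up to {vfs.read_text().strip()} VFs")
                    else:
                        print(f"{dev.name}: no SR-IOV capability exposed")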

                Comment


                • #9
                  Originally posted by nyanmisaka View Post
                  No proper ROCm support makes this and the RX 7900 a piece of crap.

                  Users have been begging the ROCm team to support the RX 7900 for half a year and nothing has happened.

                  https://github.com/RadeonOpenCompute/ROCm/issues/1880
                  Which is not unusual at all. ROCm is designed to run on selected professional GPUs, and development is focused on the datacenter. Support for consumer (Radeon) GPUs is unofficial. However, since AMD has now launched two professional RDNA3 GPUs, there is hope for support in ROCm. This could enable unofficial support for the RX 7000 series as well.
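
                  If that happens, a quick way to confirm whether a given card is actually usable is to ask a ROCm build of PyTorch, which reports supported GPUs through the usual torch.cuda API. A minimal sketch, assuming a torch wheel built against ROCm is installed:

                  Code:
                  # Sketch: check whether a ROCm-enabled PyTorch build sees the GPU.
                  # On ROCm, PyTorch reuses the torch.cuda namespace.
                  import torch

                  if torch.version.hip is not None and torch.cuda.is_available():
                      for i in range(torch.cuda.device_count()):
                          print(f"Device {i}: {torch.cuda.get_device_name(i)}")
                  else:
                      print("No ROCm-visible GPU: the card is likely not supported "
                            "by this ROCm release.")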

                  Comment


                  • #10
                    I'm waiting for entry-level GPUs. I hope this series will include some of them.
                    Last edited by MorrisS.; 14 April 2023, 08:40 AM.

                    Comment
