AMD Reveals More Details Around The Radeon RX 7900 Series / RDNA3


  • #41
    Originally posted by WannaBeOCer View Post
    I didn’t see any mention in Igor’s research that indicates an issue with 12VHPWR when using an ATX 3.0 power supply. Only issues with the adapters. From that link you provided.
    Except...
    A fully connected 16-Pin '12VHPWR' connector on the NVIDIA GeForce RTX 4090 has melted & we also get our first Founders Edition case.


    There have been quite a few reports of that connector melting even with ATX 3.0 PSUs.

    Originally posted by mdedetrich View Post
    It's not actually; it's just been relegated to the server space, i.e. an area where they actually care much more about power savings.
    You won't find any ATX stuff in real servers. It's also not that desirable to have most of the power regulation on the logic board of a server, as that makes power delivery less hot-swappable. Efficiency is important, but so is reliability: you want to be able to swap PSUs quickly at any time.

    If ATX12VO is a thing, it's for fleets of low-power prebuilt PC systems. I don't see it gaining any traction in the DIY market at all.


    Comment


    • #42
      Originally posted by Paradigm Shifter View Post
      I really hope you're right, Michael, but the proof is in the pudding. Frankly, I've grown accustomed to disappointment from AMD's GPU division regarding GPGPU support. But I'd love them to prove my cynicism unfounded this time.
      Regarding GPGPU support, AMD has strangely been completely silent about the FP64 support in RDNA 3.

      Previously, AMD GPUs had a strong advantage over NVIDIA by offering a 1:16 FP64:FP32 throughput ratio, while NVIDIA had a 1:32 ratio, which was cut further to 1:64 in the RTX 3000 and RTX 4000 generations.

      After seeing the initial AMD presentation, I believed that maybe they had decreased the FP64:FP32 ratio to 1:32, like in the older NVIDIA GPUs, so a Radeon 7900 would still be faster than an RTX 4090 in FP64.
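      To make that comparison concrete, here is a quick back-of-the-envelope calculation; the FP32 peak figures and the 1:32 ratio for the 7900 XTX are my own assumptions for illustration, not anything AMD has stated:

      ```python
      # Back-of-the-envelope FP64 peaks from an FP32 peak and an FP64:FP32 ratio.
      # The FP32 figures are approximate public numbers (assumed), in TFLOPS; note that
      # the ~61 TFLOPS RDNA 3 figure counts dual-issue FP32, so the real ratio bookkeeping may differ.
      def fp64_peak_tflops(fp32_tflops: float, ratio: int) -> float:
          """Peak FP64 TFLOPS for a 1:ratio FP64:FP32 throughput ratio."""
          return fp32_tflops / ratio

      print(fp64_peak_tflops(82.6, 64))   # RTX 4090 at its 1:64 ratio       -> ~1.3 TFLOPS
      print(fp64_peak_tflops(61.0, 32))   # 7900 XTX at a hypothetical 1:32  -> ~1.9 TFLOPS
      ```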

      Now, after more details have been released without any word about FP64, I believe that in RDNA 3 the FP64 execution units have been eliminated completely.

      Actually, between reducing the FP64:FP32 ratio to 1:32 or less, like in the NVIDIA GPUs, and eliminating FP64 completely, I believe that the right choice is to remove FP64.

      When there was hardware support for FP64 at a 1:16 ratio, that was still useful, because it is impossible to implement FP64 FMA operations in fewer than 16 instructions (when taking care to increase both the exponent range and the number of bits for the fraction, not only the latter).

      On the other hand, a software implementation of FP64 should achieve better than 1:32 throughput, so including FP64 hardware as slow as in the NVIDIA GPUs is useless.
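      As a rough illustration of why software-emulated wide precision costs so many instructions, here is one of the standard building blocks (Knuth's error-free TwoSum) sketched in Python, with NumPy float32 standing in for a GPU's FP32 ALU. This is my own illustrative code, not anything from AMD; a full FP64-class add/FMA needs several such pieces plus products, renormalization and extra exponent-range handling, which is where the 16-instruction floor mentioned above comes from.

      ```python
      import numpy as np
      f32 = np.float32  # stand-in for a single GPU FP32 operation

      def two_sum(a, b):
          """Knuth's TwoSum: returns (s, e) with s = fl(a + b) and a + b == s + e exactly.
          This alone costs 6 FP32 additions, and it is only one building block of an
          emulated wide-precision add/FMA."""
          s = a + b
          bb = s - a
          e = (a - (s - bb)) + (b - bb)
          return s, e

      # 1 + 2**-30 does not fit in a single float32, but the (hi, lo) pair preserves it:
      hi, lo = two_sum(f32(1.0), f32(2.0**-30))
      print(hi, lo)  # prints 1.0 and ~9.31e-10
      ```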

      I am only annoyed that AMD has failed to mention such an important change in their architecture. Whenever a new product is introduced, a decent vendor should present all the changes from the previous version, not only those that can be claimed as improvements.


      While machine learning is more fashionable now, there are still enough people who would like to use a GPU for solving traditional engineering problems, where FP64 must be used. Unfortunately, the cost of doing FP64 computations on GPUs has risen steadily during the last 5 years, instead of declining. It has now become cheaper to do FP64 computations on a CPU like the Ryzen 9 7950X than on a GPU, unlike in the past.

      The datacenter GPUs, to which FP64 has now been restricted, can achieve much better performance per watt than a CPU, but their performance per dollar is much lower, so they are competitive only in a datacenter setting, where hundreds or thousands are used and the savings in space, power consumption and cooling justify their cost.
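      For a sense of scale on the CPU side, here is a very rough peak-FP64 estimate for a Zen 4 part; every figure below is my own assumption rather than an official spec:

      ```python
      # Very rough peak FP64 estimate for a Ryzen 9 7950X (all numbers are assumptions):
      cores = 16
      fp64_flops_per_cycle_per_core = 16   # assuming 2 x 256-bit FMA pipes: 2 * 4 lanes * 2 ops (FMA)
      sustained_clock_ghz = 4.5            # assumed all-core clock under heavy AVX load
      peak_tflops = cores * fp64_flops_per_cycle_per_core * sustained_clock_ghz / 1000
      print(peak_tflops)  # ~1.15 TFLOPS, i.e. in the same ballpark as a 1:64-ratio consumer GPU
      ```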

      Last edited by AdrianBc; 15 November 2022, 08:41 AM.

      Comment


      • #43
        Originally posted by WannaBeOCer View Post
        If anything I’ll end up buying a low end ATX12VO motherboard when I upgrade.
        As long as ATX12VO is not relevant in the retail market, I don't expect distributors to carry such boards. Maybe a few for SIs.

        Originally posted by WannaBeOCer View Post
        That Chinese company isn’t following PCI-SIG specifications then. Intel’s data center PCIe Gen 5 GPUs are using 12VHPWR.
        That Intel is using them does not mean that they are mandatory. Besides, professional/data center and consumer are totally different markets. Just look at Intel's LGA3647, which is totally different from their consumer and even HEDT sockets.

        Originally posted by WannaBeOCer View Post
        I didn’t see any mention in Igor’s research that indicates an issue with 12VHPWR when using an ATX 3.0 power supply. Only issues with the adapters. From that link you provided.
        The issues that Igor identified are all in the 12VHPWR plug (with one small exception, which probably didn't cause any overheating). None of the issues are on the other side of the adapter or cable.

        Comment


        • #44
          Originally posted by chithanh View Post
          I know of the gamingonlinux.com user statistics; they have NVIDIA and AMD almost equal.

          Caveats are that this is opt-in and self-reported, and it doesn't ask whether the hardware was bought before or after they started using Linux.
          Hey, thanks for the link! That is a very interesting survey. The sample size is a bit small (~2k) but enough to be indicative. The point about when the user bought the hardware is also fair; another key question would be whether they dual boot at all, as this can play a big part in whether hardware is upgraded for Linux or for Windows.

          Stats are the best, and I was not aware of this one!

          Comment


          • #45
            Originally posted by chithanh View Post
            That Intel is using them does not mean that they are mandatory. Besides, professional/data center and consumer are totally different markets. Just look at Intel's LGA3647, which is totally different from their consumer and even HEDT sockets.
            The PCIe Gen 5 connector is supposed to replace all the old PCIe connectors, which is why it has 150 W, 300 W, 450 W and 600 W settings. It has nothing to do with a data center card; it has to do with Intel following the PCIe Gen 5 specifications.

            Comment


            • #46
              Originally posted by WannaBeOCer View Post
              The PCIe Gen 5 connector is supposed to replace all the old PCIe connectors, which is why it has 150 W, 300 W, 450 W and 600 W settings. It has nothing to do with a data center card; it has to do with Intel following the PCIe Gen 5 specifications.
              Everything I can find online is basically the usual news sites regurgitating the same few slides from Intel regarding the PCIe Gen 5 connector. PCI-SIG won't let you look at the specs unless you pay. From what I can see, the only visible difference from a consumer standpoint between these 150, 300, 450 and 600 W connectors is a silkscreened label. Sure, the GPU and PSU are supposed to talk via the SENSE0 and SENSE1 pins on the 4-pin block, but I really hope I'm wrong when I foresee the "cheap Chinese charger" syndrome striking hard for people who either a) don't know better or b) are victims of prebuilders cutting corners.
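              For what it's worth, the sideband idea that has been publicly described is simply that each sense pin is either tied to ground or left open to advertise the cable/PSU power budget. Since the actual PCI-SIG table is paywalled, the mapping below is purely an illustrative sketch of how such a lookup works; the exact pin-to-wattage assignment is my assumption, not the spec.

              ```python
              # Illustrative sketch only: the real SENSE0/SENSE1-to-wattage table is in the paid spec.
              ASSUMED_SENSE_TO_WATTS = {
                  ("GND",  "GND"):  600,
                  ("GND",  "OPEN"): 450,
                  ("OPEN", "GND"):  300,
                  ("OPEN", "OPEN"): 150,   # also a sensible safe fallback
              }

              def initial_power_budget(sense0: str, sense1: str) -> int:
                  """What a card could assume it may draw before any further negotiation."""
                  return ASSUMED_SENSE_TO_WATTS.get((sense0, sense1), 150)

              print(initial_power_budget("OPEN", "OPEN"))  # 150
              ```

              Under that assumed scheme, the worry about cheap adapters is easy to see: a cable that grounds pins it should leave open would advertise a higher budget than the PSU can actually deliver.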

              Comment


              • #47
                Hopefully the RX 7900 series is more efficient than the RX 6950 XT in actual power consumption. Looks to me like the RX 7900 XTX is going to pull 420 W+ while gaming, while the RTX 4080 barely pulls 300 W.

                https://www.techpowerup.com/review/n...dition/39.html

                Comment


                • #48
                  Originally posted by WannaBeOCer View Post
                  Hopefully the RX 7900 series is more efficient than the RX 6950 XT in actual power consumption. Looks to me like the RX 7900 XTX is going to pull 420 W+ while gaming, while the RTX 4080 barely pulls 300 W.

                  https://www.techpowerup.com/review/n...dition/39.html
                  Looks to you based on what? Intuition? LOL

                  Most Linux users do not give a fuck about CUDA, rendering, encoding etc. All they want is for the distribution they use to work reliably with the GPU of their choice. Simple as that. For example, I am an embedded Linux developer and I had to switch from GeForce to Radeon merely because my remote desktop was artifacting and stuttering with the NVIDIA driver while it works perfectly fine with radeon. I use the GPU only to drive the monitor of my workstation. The same story goes for developers and sysadmins who use GPUs basically to interact with a GUI and that's it. Nobody cares about CUDA if your GPU can't function normally with Arch and Wayland LOL

                  Comment


                  • #49
                    Originally posted by WannaBeOCer View Post
                    Hopefully the RX 7900 series is more efficient than the RX 6950 XT in actual power consumption. Looks to me like the RX 7900 XTX is going to pull 420 W+ while gaming, while the RTX 4080 barely pulls 300 W.

                    https://www.techpowerup.com/review/n...dition/39.html

                    In the review that I have seen, the RTX 4080 had exactly the same typical and peak power consumption as the old Radeon RX 6900 XT (while obviously being faster).

                    A typical power consumption of 300 W for the RTX 4080 implies an estimated power consumption of around 390 W for a Radeon 7900 XTX (which has more arithmetic units and a larger DRAM memory), supposing that they have about the same energy efficiency (rough arithmetic sketched below).

                    If the energy efficiency of the 7900 XTX is worse, it might typically consume above 400 W, or even as much as the old 6950 XT (around 430 W), but in any case most of the difference in power consumption will come from it being a bigger GPU than the RTX 4080, with correspondingly higher performance.

                    The 7900 XT should be similar in power consumption and performance to the RTX 4080 (while being much cheaper).
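                    To make the scaling in that estimate explicit, here is the arithmetic; the FP32 peak figures used as a stand-in for "GPU size" are my own approximations, not measured data:

                    ```python
                    # Scale the RTX 4080's ~300 W gaming draw by a rough size ratio (assumed figures):
                    rtx4080_gaming_w = 300
                    size_ratio = 61.0 / 49.0    # approx. peak FP32 TFLOPS, 7900 XTX vs RTX 4080 (assumed)
                    print(rtx4080_gaming_w * size_ratio)   # ~373 W; add the larger 24 GB memory and you land near ~390 W
                    ```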

                    Comment


                    • #50
                      Originally posted by drakonas777 View Post

                      Looks to you based on what? Intuition? LOL

                      Most Linux users do not give a fuck about CUDA, rendering, encoding etc. All they want is for the distribution they use to work reliably with the GPU of their choice. Simple as that. For example, I am an embedded Linux developer and I had to switch from GeForce to Radeon merely because my remote desktop was artifacting and stuttering with the NVIDIA driver while it works perfectly fine with radeon. I use the GPU only to drive the monitor of my workstation. The same story goes for developers and sysadmins who use GPUs basically to interact with a GUI and that's it. Nobody cares about CUDA if your GPU can't function normally with Arch and Wayland LOL
                      Based on TechPowerUp’s extensive power consumption testing. An RX 6950 XT has a TDP of 335 W but uses 391 W while gaming. The RX 7900 XTX has a TDP of 355 W, and if it follows RDNA 2 that will put its power consumption at 400 W+ while gaming.
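                      Spelling out the arithmetic behind that figure, using only the numbers quoted above:

                      ```python
                      # If the 7900 XTX overshoots its rated board power the way the 6950 XT did in that test:
                      overshoot = 391 / 335        # measured gaming draw vs. 335 W TDP for the RX 6950 XT
                      print(355 * overshoot)       # ~414 W -> the "400 W+" estimate
                      ```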

                      I’m not sure why you’re adding your two cents when this thread is about a $1000 GPU. Everything you described can be done on an Intel iGPU. If you’re buying a high-end consumer GPU, you’re using it for gaming, content creation, computation or ML. This article is about the RX 7900 XT/XTX, which will compete with the RTX 4070 Ti/4080.

                      As an HPC sysadmin, I’m not sure what you mean by caring about CUDA and then talking about Wayland. All of the CUDA-accelerated content creation apps are still using X, while 99% of everything else is run through a scheduler/container from a terminal; no GUI is needed.
                      Last edited by WannaBeOCer; 16 November 2022, 04:30 AM.

                      Comment
