Radeon RX 5600 XT With New vBIOS Offering Better Linux Performance Following Fix

  • Radeon RX 5600 XT With New vBIOS Offering Better Linux Performance Following Fix

    Phoronix: Radeon RX 5600 XT With New vBIOS Offering Better Linux Performance Following Fix

    Earlier this week AMD launched the Radeon RX 5600 XT, and as shown in our Linux launch-day review it offers nice performance up against the GTX 1660 and RTX 2060 graphics cards on Linux in various OpenGL and Vulkan games. Complicating the launch was the last-minute video BIOS change to boost performance, which unfortunately led to an issue with the Linux driver and confused the public, since some board vendors were already shipping the new vBIOS while others were not. Fortunately, a Linux fix is forthcoming, and in our tests it is working out and offering better performance.

  • #2
    Wow, that's actually a pretty substantial improvement. Basically went from "barely better than Vega 56" to "roughly as good as Vega 64" at a much lower wattage. Granted, the power consumption did spike a lot.

    • #3
      Originally posted by atomsymbol
      • The results are indicating that the 256-bit GDDR6 memory interface in RX 5700 is unnecessarily wide and that the 192-bit memory interface in RX 5600 XT is fully sufficient for most games and benchmarks.
      192-bit may be enough, but 6 GB is already quite tight. Some reviewers managed to max out the 5600 XT's VRAM even at 1080p (so much for "ultimate 1080p gaming"). If 12 GB models were released, those would be very interesting.
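
      For context, the rough arithmetic behind that comparison of bus widths (a back-of-the-envelope sketch; the 12 Gbps and 14 Gbps per-pin speeds used below are the commonly reported GDDR6 rates for the launch vBIOS, the updated vBIOS, and the RX 5700, not figures taken from the article):

      # Rough GDDR6 bandwidth: bus width (bits) x per-pin data rate (Gbps) / 8 = GB/s
      def gddr6_bandwidth_gbs(bus_width_bits, data_rate_gbps):
          return bus_width_bits * data_rate_gbps / 8

      print(gddr6_bandwidth_gbs(192, 12))  # RX 5600 XT, launch vBIOS   -> 288.0 GB/s
      print(gddr6_bandwidth_gbs(192, 14))  # RX 5600 XT, updated vBIOS  -> 336.0 GB/s
      print(gddr6_bandwidth_gbs(256, 14))  # RX 5700                    -> 448.0 GB/s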

      • #4
        Typo:

        Originally posted by phoronix View Post
        Strange Brigade is another Steam Play titlle on Linux
        Originally posted by phoronix View Post
        and even allowd the graphics card
        Also, is it possible to see high-end NVIDIA numbers for the Tomb Raider bench? AMD is too fast there.
        Last edited by tildearrow; 24 January 2020, 04:18 PM.

        • #5
          Originally posted by atomsymbol

          In my opinion, 12 GB is too much today as far as gaming is concerned and it isn't going to be required until PS5 and Xbox Series X are released which are expected to have at least 12 GB of GDDR6.

          Because you mentioned 6 GB (which I didn't expect): Do you mean that those reviewers reached the limits of VRAM bandwidth or VRAM capacity? 192-bit vs 256-bit is about bandwidth, not about capacity.
          4K gaming drastically raises the VRAM requirements. Certain games have even used as much as 9GB on my 1080ti @ 1440p.

          The new consoles will have 16GB of GDDR6, but it will be used as system RAM as well.

          AMD really needs to adjust pricing downwards, but they are stuck. GDDR6 pricing is going to be climbing throughout the year, and AMD also has to pay for all of that R&D. What we are seeing is something unprecedented: a company in dire straits is pulling itself up by its bootstraps and turning things around. The only other tech company that I remember doing that is Apple.

          • #6
            Originally posted by atomsymbol

            Yes, but the question from memory bandwidth viewpoint is how big the working set is that the game is typically making use of in 1 second. Two main cases to consider are:
            • The game character is standing still (not changing viewpoint in the game) thus the same set of textures appears on the screen every frame
            • The game character rotates its virtual body or head 180 (90) degrees horizontally (vertically)
            Textures not in the working set can be, by definition without affecting performance, uploaded to the GPU through PCI Express and/or be stored in GPU memory in a compressed form.

            Just a note: 3840*2160*4 = 32 megabytes, which means that the display resolution as such is increasing memory consumption just by N*32, where N is a small integer number reflecting how many 4K rendering buffers the game is utilizing (at least: color buffer, depth buffer).



            I haven't encountered any reliable website which would be disclosing how many GB of memory will be in PS5 or in the next Xbox.
            That's not quite how 3D graphics work. All textures used in a game must be uploaded into VRAM or you will get extremely poor performance. It doesn't matter if the camera is stationary or not. Latencies of PCIe are currently too high for real-time texture streaming from system RAM, and don't even get me started on storage.

            Then there is also the size of the textures themselves, the shaders as well as shader output, geometry, etc.

            EDIT: Also, per the DirectX 11 spec, textures can be up to 16384x16384 in size. Do the math and you'll see how quickly that VRAM can go.
            Last edited by betam4x; 24 January 2020, 07:03 PM.
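
            To put numbers on both figures mentioned above (a quick sketch assuming uncompressed 32-bit, 4-bytes-per-pixel surfaces, ignoring MSAA and texture compression):

            # Memory footprint of one uncompressed 4-bytes-per-pixel surface
            def surface_bytes(width, height, bytes_per_pixel=4):
                return width * height * bytes_per_pixel

            # One 4K render target: ~32 MB, matching the calculation quoted above
            print(surface_bytes(3840, 2160) / 2**20)    # ~31.6 MiB

            # Largest Direct3D 11 texture dimension, 16384x16384:
            print(surface_bytes(16384, 16384) / 2**30)  # 1.0 GiB for the base mip level alone
            # A full mip chain adds roughly another third (~1.33 GiB total)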

            • #7
              It seems that if you are confident you won't need over 6 GB of VRAM, you purchase a 5600 XT. If you need 8 GB, you find a low-end 5700 for not much more. AMD might have been worried about the products being too similar at this price/performance level, but to my mind the additional memory and bus width are sufficient segmentation.

              Also, it looks like the vBIOS results are shy of the Windows differential because of hitting different driver ceilings in the tested games, which is a real result but a different issue.

              Then there is Gamersnexus finding out that EVGA and others are using rejected RTX 2080 dies for the new 2060s released to counter the 5600XT, with substantially improved workstation performance resulting.

              All in all, this launch is much more interesting than I expected it to be. Of course, all of this needs about another $50 knocked off of it, but at least competition is having some effect.
              Last edited by Teggs; 24 January 2020, 09:53 PM.

              • #8
                Will this also affect the RX 5500 XT or only the 5600?

                • #9
                  Originally posted by atomsymbol
                  • Strangely, RX 5700 (which has the same GPU silicon as RX 5600 XT) consumes less power (74 Watts) than RX 5600 XT with the new BIOS (95 Watts), which most likely means that RX 5600 XT (new BIOS) voltages are configured by the BIOS to be higher than RX 5700 voltages
                  Don't forget silicon quality. Good silicon has less leakage and can perform better under the same power curve or consume less power at the same performance point. As seen on the Vega 64, a lot of people got better results by undervolting the chip than by overclocking. Setting the correct parameters within the margin of what the chip quality can deliver is one of the most important things to set up in the vBIOS. Normally, you'd tweak those numbers manually chip-by-chip (as every chip is different), but AMD doesn't have time for this, so they pick a number where 90% of the chips pass and throw the remaining 10% into a special cheap OEM version with slightly lower clocks.
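
                  As a rough illustration of why undervolting pays off, here is a sketch using the first-order CMOS dynamic-power relation (P roughly proportional to V^2 * f); the voltages below are made-up example numbers, not measurements from any particular card:

                  # First-order dynamic power model: P ~ C * V^2 * f (capacitance held constant)
                  def relative_dynamic_power(voltage, freq, ref_voltage, ref_freq):
                      return (voltage / ref_voltage) ** 2 * (freq / ref_freq)

                  # Dropping core voltage ~8% at the same clock cuts dynamic power by ~15%
                  print(relative_dynamic_power(1.00, 1.0, ref_voltage=1.08, ref_freq=1.0))  # ~0.857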

                  • #10
                    Does the SMC firmware contain the vBIOS, or does it have to be flashed separately?
