Radeon RX 5600 XT With New vBIOS Offering Better Linux Performance Following Fix

  • #11
    It seems that if you are confident you will not need more than 6 GB of VRAM, you buy a 5600 XT; if you need 8 GB, you find a low-end 5700 for not much more. AMD might have been worried about the products being too similar at this price/performance level, but to my mind the additional memory and bus width are sufficient segmentation.

    Also, it looks like the vBIOS results fall short of the Windows differential because the tested games hit different driver ceilings, which is a real result but a separate issue.

    Then there is Gamers Nexus finding that EVGA and others are using rejected RTX 2080 dies for the new 2060s released to counter the 5600 XT, with substantially improved workstation performance as a result.

    All in all, this launch is much more interesting than I expected it to be. Of course, all of this needs about another $50 knocked off of it, but at least competition is having some effect.
    Last edited by Teggs; 01-24-2020, 09:53 PM.

    Comment


    • #12
      Will this also affect the RX 5500 XT, or only the 5600?

      Comment


      • #13
        Originally posted by atomsymbol View Post
        • Strangely, RX 5700 (which has the same GPU silicon as RX 5600 XT) consumes less power (74 Watts) than RX 5600 XT with the new BIOS (95 Watts), which most likely means that the RX 5600 XT (new BIOS) voltages are configured by the BIOS to be higher than the RX 5700 voltages.
        Don't forget silicon quality. Good silicon has less leakage and can perform better on the same power curve, or consume less power at the same performance point. As seen on the Vega 64, a lot of people got better results by undervolting the chip than by overclocking. Setting the correct parameters within the margin of what the chip quality can deliver is one of the most important things to set up in the VBIOS. Normally you'd tweak those numbers manually chip by chip (as every chip is different), but AMD doesn't have time for this, so they pick a number where 90% of the chips pass and throw the remaining 10% into a special cheap OEM version with slightly lower clocks.
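
        The voltage/leakage trade-off above can be sketched with a toy model (all constants below are made up for illustration; real GPU power tables are far more complex): dynamic power scales roughly with V²·f, while leakage grows steeply with voltage, so undervolting at a fixed clock cuts both terms.

```python
import math

def gpu_power(voltage, freq_mhz, cap=0.1, leak0=8.0, leak_k=3.0):
    """Rough total power estimate in watts (illustrative constants only)."""
    dynamic = cap * voltage**2 * freq_mhz                # switching power ~ C*V^2*f
    static = leak0 * math.exp(leak_k * (voltage - 1.0))  # leakage grows with voltage
    return dynamic + static

# Same 1500 MHz clock, stock voltage vs. a 0.1 V undervolt:
print(gpu_power(1.0, 1500))   # 158.0 W
print(gpu_power(0.9, 1500))   # ~127.4 W
```

        A chip with less leakage (a lower leak0 in this sketch) can hit the same performance point at lower total power, which is why per-chip tuning would pay off if anyone had time to do it.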

        Comment


        • #14
          Does the SMC firmware contain the vBIOS, or does it have to be flashed separately?

          Comment


          • #15
            Any 1440p or 2160p Linux benchmarks available?

            Comment


            • #16
              Originally posted by mb_q View Post
              Does the SMC firmware contain the vBIOS, or does it have to be flashed separately?
              It's separate from the VBIOS: the SMC firmware goes into the initial ramdisk (initramfs) so the driver can load it at boot, while the VBIOS is flashed into the card's EEPROM.
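
              A quick way to see the split on a running system (the paths below are typical for amdgpu on Navi 10 but vary by distribution and GPU, so treat them as assumptions):

```python
from pathlib import Path

# SMC/SMU firmware: a file the kernel driver loads from /lib/firmware
# (which is why it must also be present in the initramfs for early loading).
SMC_FW = Path("/lib/firmware/amdgpu/navi10_smc.bin")

# VBIOS: stored in the card's EEPROM; the driver only exposes its
# version string read-only through sysfs.
VBIOS_VER = Path("/sys/class/drm/card0/device/vbios_version")

for p in (SMC_FW, VBIOS_VER):
    print(p, "->", "present" if p.exists() else "missing")
```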

              Comment


              • #17
                Originally posted by atomsymbol View Post
                In my opinion, 12 GB is too much today as far as gaming is concerned
                I think you can never have too much. The extra VRAM isn't going to be detrimental in any way.

                Originally posted by atomsymbol View Post
                and it isn't going to be required until PS5 and Xbox Series X are released which are expected to have at least 12 GB of GDDR6.
                I think it will not be required then either. You can still get along with 4 GB VRAM at the end of this console generation, just don't max out the settings that impact VRAM usage.

                Originally posted by atomsymbol View Post
                Because you mentioned 6 GB (which I didn't expect): Do you mean that those reviewers reached the limits of VRAM bandwidth or VRAM capacity? 192-bit vs 256-bit is about bandwidth, not about capacity.
                Yes, they tested games known to be heavy on VRAM use and were able to provoke situations where the VRAM ran full and frametimes became uneven.
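
                To make the bandwidth-vs-capacity distinction concrete, peak GDDR6 bandwidth follows directly from bus width and data rate (the 12 and 14 Gbps figures are the 5600 XT's old and new vBIOS memory speeds; the 5700 uses a 256-bit bus):

```python
def peak_bw_gbs(bus_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: (bus width in bytes) * data rate."""
    return bus_bits / 8 * data_rate_gbps

print(peak_bw_gbs(192, 12))   # 288.0 GB/s - RX 5600 XT, old vBIOS
print(peak_bw_gbs(192, 14))   # 336.0 GB/s - RX 5600 XT, new vBIOS
print(peak_bw_gbs(256, 14))   # 448.0 GB/s - RX 5700, 256-bit bus
```

                Capacity, by contrast, is set by the number and density of the chips, not by the bus width.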

                Comment


                • #18
                  Originally posted by atomsymbol View Post
                  In my opinion, 12 GB is too much today as far as gaming is concerned
                  Originally posted by chithanh View Post
                  I think you can never have too much. The extra VRAM isn't going to be detrimental in any way.
                  Extra VRAM can increase:
                  • PCB complexity
                  • Power consumption
                  • Temperature
                  • Cost
                  • Access latency
                  Originally posted by atomsymbol View Post
                  and it [12 GB] isn't going to be required until PS5 and Xbox Series X are released which are expected to have at least 12 GB of GDDR6.
                  Originally posted by chithanh View Post
                  I think it will not be required then either. You can still get along with 4 GB VRAM at the end of this console generation, just don't max out the settings that impact VRAM usage.
                  There is a much better way of looking at it:

                  It will be required in the sense that, in the middle and end of the PS5/Xbox Series X lifespan, most mid-range GPU designs sold will have at least 12 GB of memory.

                  The fact that the user is free at any point to select lower-resolution textures is meaningless with respect to what is actually going to happen to GPUs sold in the future.
                  Last edited by atomsymbol; 01-26-2020, 09:28 AM.

                  Comment


                  • #19
                    Originally posted by atomsymbol View Post
                    Extra VRAM can increase:
                    • PCB complexity
                    • Power consumption
                    • Temperature
                    • Cost
                    • Access latency
                    Except for cost, not really. The cards would just use double capacity chips like they do on the 5500 XT 4 GB vs 8 GB.

                    Comment


                    • #20
                      Originally posted by chithanh View Post
                      Except for cost, not really. The cards would just use double capacity chips like they do on the 5500 XT 4 GB vs 8 GB.
                      You moved to talking about 4 and 8 GB GPUs, while the discussion was about 12+ GB.

                      Current-generation [AMD] GPUs, with at most 8 GB of memory, can be expected to be optimized for that memory size. Going from 4/8 GB to 12/16 GB, assuming the number of GDDR6 chips stays the same, means there needs to be an extra wire on the PCB so the GPU can use the larger physical address space; with 4 GDDR6 chips on the PCB, that means 4 extra wires. The other option, using 8 GDDR6 chips on the 16 GB model, means the GPU will need a different and more complex PCB than the 8 GB model that used 4 chips.
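
                      The extra-wire arithmetic can be checked directly (a simplification: real GDDR6 multiplexes its command/address lines, but the principle of one extra address bit per capacity doubling holds):

```python
import math

def address_bits(words):
    """Address bits needed to reach `words` addressable words on one chip."""
    return int(math.log2(words))

# The Samsung parts mentioned in this post: 256M x32 (8 Gbit) vs 512M x32 (16 Gbit)
print(address_bits(256 * 2**20))   # 28 bits
print(address_bits(512 * 2**20))   # 29 bits: one extra address line per chip
```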

                      It would be helpful if you could find out how the power consumption of Samsung's K4Z80325BC-HC14 (256M x 32) compares to the power consumption of K4ZAF325BM-HC14 (512M x 32).

                      Comment
