AMD Radeon RX 5500 XT Linux Performance


  • #51
    Originally posted by F.Ultra View Post

    Why would they spend the money on HDMI 2.1 for a 1920x1080 card? The only difference between 2.0b and 2.1 is the added bandwidth for new 4K and 10K resolutions.
    Who says it's a 1920x1080 card?
    Who says you can't hook it up to a 4K monitor or TV and watch a movie on it?
    I honestly don't buy GPUs just for games.
    I want to be able to watch movies on them with the latest quality features turned on, like HDR (high dynamic range) and HFR (high frame rate).
    I also want to buy a 4K@120Hz (or better) display in the future, when they become available.
    Why should I need a new GPU for that, just because they chose the cheaper path of fitting a very old version of HDMI instead of a somewhat newer one?



    • #52
      Originally posted by atomsymbol

      RTX 2060 has 10.8e9 transistors at 12nm in 445mm². RX 5700 has 10.3e9 transistors at 7nm in 251mm². But the expression sqrt(445)/sqrt(251)*7nm yields 9.32nm.
      Why are you dividing the square root of the 12nm RTX 2060's die area (445mm²) by the square root of the 7nm RX 5700's die area (251mm²) and then multiplying by the 7nm process node to get 9.32nm? That just doesn't make any sense and appears to be mathematical gobbledygook. I think you need to revisit the basic formula for fitting carpet tiles and see where you have gone wrong and why.
      Last edited by Slartifartblast; 14 December 2019, 08:59 AM.



      • #53
        Originally posted by atomsymbol

        It works like this: You (or somebody else) will have to suggest a better math expression with improved accuracy than the expression submitted by me, or (xor) you will have to accept my computation as the best one so far in this particular discussion.
        My suggestion is that you return to kindergarten maths class; I just can't be arsed doing it for you. It's not just inaccurate, it's complete crap. 0/10, try harder.

        Have a nice day



        • #54
          Originally posted by atomsymbol

          This is called trolling.

          Do you have a university degree?
          Yes, thank you: joint honours in Chemistry and Biochemistry. Now there will be no more correspondence on my part, as this has degenerated into a rather tedious melodrama that I have no intention of participating in any further.

          Good day and goodbye.



          • #55
            Originally posted by Danny3 View Post

            Who says it's a 1920x1080 card?
            Who says you can't hook it up to a 4K monitor or TV and watch a movie on it?
            I honestly don't buy GPUs just for games.
            I want to be able to watch movies on them with the latest quality features turned on, like HDR (high dynamic range) and HFR (high frame rate).
            I also want to buy a 4K@120Hz (or better) display in the future, when they become available.
            Why should I need a new GPU for that, just because they chose the cheaper path of fitting a very old version of HDMI instead of a somewhat newer one?
            HDR and HFR are in no way "quality features"; movies shot above the proper 24fps (and note here that this card will support 4K@60Hz) look like ugly TV soap operas. However, if that really is your fancy, then yes, you will have to look for a different card. Even if they had added 2.1 support, I have a hard time believing that the GPU on it would be able to push 48Gbps, let alone decode any codec at that bandwidth.

            For anyone interested in why 24fps is better for movies, please see this YouTube video by Filmmaker IQ:
            Last edited by F.Ultra; 14 December 2019, 03:45 PM.



            • #56
              Originally posted by atomsymbol
              I would like to see an expression that is more accurate. So please post it here.
              I'm not really sure what you were trying to show with your calculation, but I don't think it made any sense.

              7nm / 12nm = 58%.
              251 mm² / 445 mm² = 56%.
              (251/10.3)/(445/10.8) = 59%.

              It all seems to line up pretty closely to me.

              That said, you are correct that different features take different amounts of silicon and don't necessarily scale directly with the number of transistors. I just don't think your calculations had any relation to that fact, if that's what you were trying to show.
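
              Those ratios are easy to reproduce; a quick Python check using only the figures quoted in this thread:

              Code:
              # RX 5700: 7nm, 251 mm², 10.3e9 transistors
              # RTX 2060: 12nm, 445 mm², 10.8e9 transistors
              print(7 / 12)                            # ~0.58, linear node ratio
              print(251 / 445)                         # ~0.56, die-area ratio
              print((251 / 10.3e9) / (445 / 10.8e9))   # ~0.59, die area per transistor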
              Last edited by smitty3268; 14 December 2019, 03:54 PM.



              • #57
                IMO, it's good that they managed a 128-bit card that can hang with their previous-generation 256-bit cards, but the value is not there. RX 570 performs nearly as well, and if you look at the current sale prices, in particular, its performance per $ is way better. Plus, Polaris has no caveats around support for GPU compute.

                Over time, RX 5500 cards should come down in price, hopefully to the RX 560's bracket. However, at least until RX 570 stocks run dry, the 5500s will be challenging to recommend.
                Last edited by coder; 14 December 2019, 04:16 PM.



                • #58
                  Originally posted by F.Ultra View Post
                  HDR and HFR are in no way "quality features"; movies shot above the proper 24fps (and note here that this card will support 4K@60Hz) look like ugly TV soap operas.
                  It basically boils down to the same arguments as film vs. digital: old people, stuck in their ways and used to a certain look. Also, I think actors don't like how HFR exposes their weaker performances.

                  I saw Gemini Man in 60 FPS HFR 3D and it looked amazing. Like looking through a window. Most action scenes in movies are a blurry mess, but the motorcycle chase scene was totally clear. Yes, I'll have some more of that, please.

                  And yes, I also use my TV's motion interpolation, when watching 24 fps content. Once you get over the initial adjustment period, a fair-minded person cannot deny that it looks better (assuming a competent implementation with minimal artifacts).

                  Originally posted by F.Ultra View Post
                  Even if they had added 2.1 support, I have a hard time believing that the GPU on it would be able to push 48Gbps, let alone decode any codec at that bandwidth.
                  You only need 25.8 Gbps to reach 4k @ 120 Hz (8-bit), though I think you might be right that the link probably has to run at 4x12 Gbps to deliver that. Still, if you're talking about the memory bandwidth required by the RAMDAC (or whatever you call its modern equivalent), it's peanuts - just 3.2 GB/sec. The card is rated at 224 GB/sec.
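
                  As a rough back-of-the-envelope check in Python (active pixels only; real HDMI timings add blanking overhead, which is roughly where the gap up to the 25.8 Gbps figure comes from):

                  Code:
                  # 4K @ 120 Hz at 8 bits per channel RGB (24 bpp), ignoring blanking
                  width, height, refresh_hz, bits_per_px = 3840, 2160, 120, 24
                  pixels_per_s = width * height * refresh_hz    # ~995e6 pixels/s
                  print(pixels_per_s * bits_per_px / 1e9)       # ~23.9 Gbps of pixel data on the link
                  print(pixels_per_s * bits_per_px / 8 / 1e9)   # ~3.0 GB/s of scanout memory traffic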

                  So, the only question is whether the decoder can manage. According to this, the decode block used in Navi 10 (the 5700 cards) is only capable of 4k @ 120 Hz for H.264. At H.265, it's limited to just 4k @ 60 Hz.

                  Two of the often overlooked components of a new graphics architecture are the I/O and multimedia capabilities. With its Radeon RX 5700-series "Navi 10" graphics processor, AMD gave the two their first major update in over two years, with the new Radeon Display Engine, and Radeon Multimedia Engine. T...



                  • #59
                    Originally posted by smitty3268 View Post

                    I'm not really sure what you were trying to show with your calculation, but I don't think it made any sense.

                    7nm / 12nm = 58%.
                    251 mm² / 445 mm² = 56%.
                    (251/10.3)/(445/10.8) = 59%.

                    It all seems to line up pretty closely to me.
                    You are comparing a distance ratio (7 nm / 12 nm) to an area ratio (251 mm² / 445 mm²). That doesn't make sense, and neither does atomsymbol's calculation:

                    Look at it this way: it's a comparison of the total size of idealised transistors on the dies to the actual sizes of the dies, the idealised size of a transistor being (7 nm)² and (12 nm)², respectively. atomsymbol took the root of each side to get a linear metric to compare to the process size (a one-dimensional quantity). But he didn't take process size and the number of transistors on the chips into account in his calculation, which makes it rather nonsensical.

                    One could calculate a linear metric like this: (10.8e9 × 12 nm × sqrt(251)) / (10.3e9 × 7 nm × sqrt(445)) ≈ 1.35. I interpret this as showing that the transistors take up a larger fraction of the total die area in the 12 nm chip. Multiply by 7 nm to see that if it were possible to simply scale the number of transistors per die area with (inverse) process size, the RTX 2060's die area and transistor count would correspond to a 9.45 nm process.
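
                    Both expressions are easy to verify numerically; a small Python sketch using the figures quoted above:

                    Code:
                    from math import sqrt

                    n_2060, a_2060, p_2060 = 10.8e9, 445.0, 12.0  # RTX 2060: transistors, mm², nm
                    n_5700, a_5700, p_5700 = 10.3e9, 251.0, 7.0   # RX 5700: transistors, mm², nm

                    # atomsymbol's expression: die-edge ratio times the 7 nm node
                    print(sqrt(a_2060) / sqrt(a_5700) * p_5700)   # ~9.32

                    # the linear metric above, with transistor count and process size folded in
                    ratio = (n_2060 * p_2060 * sqrt(a_5700)) / (n_5700 * p_5700 * sqrt(a_2060))
                    print(ratio)           # ~1.35
                    print(ratio * p_5700)  # ~9.45 nm "equivalent" node for the RTX 2060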



                    • #60
                      Originally posted by bitnick View Post
                      You are comparing a distance ratio (7 nm / 12 nm) to an area ratio (251 mm2 / 445 mm2).
                      I don't actually think 7nm and 12nm are real distances, FWIW.

                      I'm admittedly not an expert in this area, but my understanding is that these are basically marketing terms used to describe the process, not actual measurements. They're based on the average transistor density the manufacturer thinks their process can provide - presumably anchored to real measurements somewhere back down the line on older processes. That's why Intel's 10nm process is described as similar to TSMC's 7nm: since the names don't reflect actual distances, they aren't directly comparable between manufacturers.
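
                      For what it's worth, the raw density figures behind those node names are simple to compute from the numbers earlier in the thread (plain transistors-per-area; the marketing names weigh other factors too):

                      Code:
                      # million transistors per mm², from the die sizes and counts quoted above
                      print(10.3e9 / 251 / 1e6)   # RX 5700 (7nm): ~41.0 MTr/mm²
                      print(10.8e9 / 445 / 1e6)   # RTX 2060 (12nm): ~24.3 MTr/mm²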

