AMD's Marek Olšák Lands Even More OpenGL Threading Improvements Into Mesa 20.1


  • #51
    Originally posted by abott View Post
    If money is the only thing you care about, sure. But performance versus actual performance is the topic being talked about, and AMD loses.
    I have to ask - when did TFLOPS become the only predictor of graphics card performance? If you want to use it in a compute scenario, that's fine, but there are a lot of other hardware blocks that have an equal impact on graphics throughput.

    We have historically over-provisioned compute a bit relative to the other blocks because we believe that compute will continue to play an increasingly important role in game and workstation graphics, and IMO that has helped our hardware to stay relevant longer than that of our competitors, but I don't think anyone expects that extra compute throughput to translate immediately into graphics benchmark numbers.

    The ratio between graphics performance and peak compute throughput has changed significantly between Vega and Navi, but that is a consequence of HW provisioning changes, not driver changes.



    • #52
      Originally posted by abott View Post

      If money is the only thing you care about, sure. But performance versus actual performance is the topic being talked about, and AMD loses. I say that as a huge AMD fan, running an RX 580 and an R7 1700. The 5700 XT is a great value and a great card, but it doesn't beat a 2070 Super, which is closer to what it should compete with hardware-wise. AMD's driver is NOT AS GOOD, proven by the numbers. Here's two more for ya:

      RX 5700 XT: 9.754 TFLOPS
      2070 Super: 9.062 TFLOPS

      Nvidia did blatantly cheat in the past. They also paid people to make others look better, which isn't cheating, although it is definitely an abuse of capitalism and should have had them thrown in jail. They are a shitty company. I owned a 970 and a 1070 too. Great hardware, terrible ethics, which ruined my experience and made me switch.

      Can you reference that visual quality issue on Linux? I know it's a thing on Windows, but I don't care about Windows, nor do I run it enough to care. But in the end, saying they're cheating today when that is way behind us is pathetic and makes you look pathetic, which you are for dwelling on their past instead of today. They are a shit company, 100%, but they're not cheaters so much as they're using their market position to do as they please, which they can do. 100%.

      Cheating is not the same as optimizing their hardware, in ANY way. Yeah, if they optimized their video output and ruined clarity, that is their choice. It's not cheating, though. So stop regurgitating moronic emotional arguments that don't even please AMD fans with how crap they are in 2020.
      [Embedded video links: image-quality comparisons of the RTX 2070 Super vs the RX 5700 XT (4K Ultra runs of Shadow of the Tomb Raider, Resident Evil 2 Remake, Grand Theft Auto..., plus a follow-up to the earlier "3700X+5700XT or 9700K+2070 Super" test).]

      I can keep going; there are literally hundreds of examples.

      Like I said, Nvidia cheats, and that doesn't make them faster; it just makes them cheaters.



      • #53
        I mean, if a card can push more through it, what does the hardware provisioning matter? If anything, that is cheating. I mean, unless GPUs AREN'T just massively parallel DMA machines, then TFLOPS is exactly what they will forever be measured with because that is, in the end, the only measure that matters is how many pixels they can push at maximum capacity. Hardware is the actual limit, software is the artificial limit. And AMD's limit is LOWER for the hardware, no matter how you slice up their driver quality pieces. Nvidia has a superior software stack to enable the hardware, full stop.

        And their image output might not be the best, but find me where in the specs they break the spec they implement. Like I said, if they want to optimize in areas they want to before outputting a picture, they are free to do so.

        The FUD and false claims are you morons chirping about shit from two decades ago, today. Yeah, Nvidia sucks and I won't ever own their hardware again. Yet you have claimed a lot of shit with no proof yourselves. The performance wins you claim from the most recent article show the 2060S outpacing the 5700 XT in a lot of tests. Imagine what a 2070[S] would do to it.



        • #54
          Originally posted by abott View Post
          I mean, unless GPUs AREN'T just massively parallel DMA machines, then TFLOPS is exactly what they will forever be measured with because that is,
          Not sure what you mean by "massively parallel DMA machine" but GPUs aren't that simple. They are a set of serial pipelines (at least one short, simple compute pipeline plus at least one long, complex graphics pipeline) where most of the pipeline stages are a mix of fixed function hardware and programmable hardware (shaders), with the programmable part executed on a shared massively parallel floating point processor.

          Originally posted by abott View Post
          in the end, the only measure that matters is how many pixels they can push at maximum capacity.
          Yep, but that is only loosely related to TFLOPS. As long as the shared FP processor has enough TFLOPS to keep up with the work coming from the fixed function portions of the graphics pipeline, that's all you need. It's not quite that black and white; it's more of a 1/(1/a + 1/b + 1/c) thing, but the TFLOPS number only reflects the power of the shared floating point processor, not the power and throughput limits of all the fixed function blocks.

          (not to be confused with a fixed function pipeline, which *only* has fixed function blocks and no programmable functions)
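
          To make that 1/(1/a + 1/b + 1/c) point concrete, here is a minimal sketch (plain Python, with stage names and rates invented purely for illustration, not real hardware figures) of how the card with the bigger shader number can still end up with the lower combined throughput once the fixed function stages are counted:

          # Toy serial-bottleneck model of a graphics pipeline, following the
          # 1/(1/a + 1/b + 1/c) idea above. Stage names and rates are invented
          # for illustration only; they are not real hardware figures.

          def effective_rate(stage_rates):
              """Combined throughput when work must pass through every stage:
              the reciprocal of the summed reciprocals of the per-stage rates."""
              return 1.0 / sum(1.0 / r for r in stage_rates)

          # Hypothetical per-stage peak rates (work items per unit time).
          card_a = {"geometry": 4.0, "rasterizer": 6.0, "shader_alu": 12.0, "rop": 5.0}
          card_b = {"geometry": 6.0, "rasterizer": 6.0, "shader_alu": 9.0, "rop": 6.0}

          for name, stages in (("card_a", card_a), ("card_b", card_b)):
              print(name, round(effective_rate(stages.values()), 2))

          # card_a has the bigger shader (TFLOPS-like) number, yet card_b's
          # better-balanced fixed function stages give it the higher combined rate.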

          Originally posted by abott View Post
          Hardware is the actual limit, software is the artificial limit. And AMD's limit is LOWER for the hardware, no matter how you slice up their driver quality pieces. Nvidia has a superior software stack to enable the hardware, full stop.
          Yep, hardware is the actual limit; however, you are quoting performance numbers for one of the hardware blocks and treating that number as if it fully reflects overall hardware performance. It does not.

          The diagram at the link below is quite over-simplified (it's missing a bunch of red from the all-green blocks), but historically NVidia has been a bit faster at (devoted more silicon to) the red parts while we have been a bit faster at (devoted more silicon to) the green parts.



          TFLOPS doesn't even fully measure the green part, just the multiply-accumulate instructions, and it doesn't touch the red parts at all.
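
          As a side note on where the headline numbers quoted earlier in the thread come from: peak FP32 TFLOPS is just shader ALU count times two FLOPs per clock (one fused multiply-add) times boost clock, which is exactly why it says nothing about the fixed function blocks. A quick back-of-envelope sketch, using the public spec-sheet shader counts and boost clocks for the two cards quoted in #52:

          # ALUs * FLOPs/clock * clock (GHz) gives GFLOPS; divide by 1000 for TFLOPS.
          # 2560 ALUs at 1905 MHz (RX 5700 XT) and 2560 ALUs at 1770 MHz (RTX 2070 Super)
          # are the spec-sheet figures.

          def peak_tflops(alus, boost_ghz, flops_per_clock=2):
              return alus * flops_per_clock * boost_ghz / 1000.0

          print(round(peak_tflops(2560, 1.905), 3))  # RX 5700 XT     -> 9.754
          print(round(peak_tflops(2560, 1.770), 3))  # RTX 2070 Super -> 9.062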

          By the way, this is where the "synthetics" part of a review is important. By writing tests that essentially bypass driver impact and focus on loading one specific hardware function at a time, you can get a better idea of the performance of each block.

          Pick some charts off the following page where NVidia scores a lot higher than AMD - those are areas where NVidia hardware is faster than AMD. Tessellation (the first chart) is a good example of where NVidia HW is faster than AMD HW, but tessellation is almost completely independent of TFLOPS.

          https://www.anandtech.com/show/14618...5700-review/14
          Last edited by bridgman; 08 April 2020, 11:20 AM.



          • #55
            Originally posted by TemplarGR View Post
            Yeah, you are worried that RDNA2 is going to be very pricey so your next GPU is going to be a 3080ti. Bad trolling; are you even trying, bruh?

            As for "competitive RTX/DLSS features": DLSS is a joke. No one uses it. So is RTX: severely limited, and it hampers performance so much that no one plays with it enabled, even in the tiny minority of titles that support it for some effects...

            In any case, worrying about RTX on a Linux-focused site is an oxymoron. I mean, how many titles can you play on Linux with RTX?
            If NAVI2 desktop GPUs utterly fail at competing with the raytracing and compute features that RTX has, then it's poor value overall.

            DLSS 1.0 is a joke; DLSS 2.0 is quite the game changer and will see MUCH more adoption. If AMD has enough compute power they can do the same, but I doubt their NAVI2 cards will surpass even a 2070 when it comes to ray-tracing and general compute for stuff like deep learning super sampling.

            Who said RTX isn't coming to Linux or Vulkan? What makes you think that Linux will just sit out the next gen of gaming graphics, which is heading towards raytracing in all next gen games?

            Also, gaming has really not been all about compute performance; NVIDIA proved that, which is why NVIDIA has actually been the one introducing and driving new technology for the past decade (as far as gamers are concerned!). NO GAMER asked AMD to make their GPUs compute focused! It was the DOOM of GCN!
            Last edited by theriddick; 08 April 2020, 05:20 AM.



            • #56
              Originally posted by theriddick View Post

              If NAVI2 desktop GPUs utterly fail at competing with the raytracing and compute features that RTX has, then it's poor value overall.

              DLSS 1.0 is a joke; DLSS 2.0 is quite the game changer and will see MUCH more adoption. If AMD has enough compute power they can do the same, but I doubt their NAVI2 cards will surpass even a 2070 when it comes to ray-tracing and general compute for stuff like deep learning super sampling.

              Who said RTX isn't coming to Linux or Vulkan? What makes you think that Linux will just sit out the next gen of gaming graphics, which is heading towards raytracing in all next gen games?

              Also, gaming has really not been all about compute performance; NVIDIA proved that, which is why NVIDIA has actually been the one introducing and driving new technology for the past decade (as far as gamers are concerned!). NO GAMER asked AMD to make their GPUs compute focused! It was the DOOM of GCN!
              I think you fail to realize that most next-gen games are going to be developed for the PS5 and Xbox-next....



              • #57
                Originally posted by theriddick View Post
                NO GAMER asked AMD to make their GPUs compute focused!
                If you mean GAMER as in an end user buying and playing games, I think your statement is correct. The compute focus came more from discussions with game developers, not game players.

                Background here is pretty simple - within the game development community there were differing views about the future of game programming. One view was that there would be a big change towards replacing use of the (relatively fixed) graphics pipeline with more use of the (more general purpose) compute pipeline, while the opposing view was that things would stay pretty much as they were.

                We spent a bit more silicon area on compute (e.g. TFLOPS and async compute) while NVidia spent a bit more silicon area on the fixed-function portion of the graphics pipeline blocks (e.g. tessellation and ROPs).

                In the end, what happened was somewhere in between - the shift towards compute happened, but a bit slower than we expected and a bit faster than NVidia expected. The shift continues to happen, and IMO is an overlooked factor in the "FineWine" behaviour that people talk about.

                Originally posted by theriddick View Post
                It was the DOOM of GCN!
                Yes, DOOM ran well on GCN because the developers made good use of compute.
                Last edited by bridgman; 08 April 2020, 07:30 PM.



                • #58
                  Originally posted by bridgman View Post

                  Yes, DOOM ran well on GCN because the developers made good use of compute.
                  Yes, a few games did do quite well with strong compute GPUs. Not sure whether DOOM ran better on AMD or NVIDIA, however.

