RadeonSI Lands Another "Very Large" Optimization To Further Boost SPECViewPerf

  • #11
    Originally posted by smitty3268
    This is about some suit in an office looking at the open source drivers and comparing them to their binary drivers, and saying "why is this benchmark so slow? we need to fix that before we can use the new drivers".
    How do you know that?

    I don't buy your explanation that it was done purely for the benchmarks, because official SPEC benchmark results have to be run in controlled conditions.

    It seems quite likely to me that one or more developers of workstation apps requested this optimization. Or maybe it's a port of an optimization controlled by the same switch, which some existing workstation apps are currently using.

    • #12
      Originally posted by coder
      It's funny how easily you're triggered at AMD by an unverified comment from a pseudonymous forum poster.

      We don't actually know if there was a use case for this, other than those benchmarks.
      I'm happy that you were amused. Did you notice this part?

      Originally posted by Teggs
      Is this actually the way it is?
      Perhaps I've seen too much foolery in the hardware/driver segment and should just wait for more information, if it comes, before castigating possible additional foolery. Rumours that Nvidia is considering implementing their GPP program again and deliberately ceasing GPU production in November create a poor frame of reference, even if that's the other company. :/

      • #13
        Originally posted by coder
        How do you know that?

        I don't buy your explanation that it was done purely for the benchmarks, because official SPEC benchmark results have to be run in controlled conditions.

        It seems quite likely to me that one or more developers of workstation apps requested this optimization. Or maybe it's a port of an optimization controlled by the same switch, which some existing workstation apps are currently using.
        It's impossible to prove a negative, so it's clear to me that you'll never be convinced by anything anyone says. Ten years from now you could still be saying that someone might take advantage of it next year.

        All I will say is that if you read the discussions on gitlab (many of them over the past year, not just this single one), it's very clear what's going on. Feel free to draw your own conclusions; I'm not trying to convince anyone.
        Last edited by smitty3268; 20 October 2021, 01:18 AM.

        • #14
          Originally posted by blackshard
          Would you redo a looooot of work from scratch just to get the same output at the end, with possible new bugs or different behaviours?
          That is indeed a good argument for never doing anything. Why risk breaking things? Speaking of which, why are people implementing optimizations in the GPU driver? That risks breaking things too! While we're at it, let's fire all the programmers in the world, as any change they make risks causing a new bug or changing behavior.

          More seriously, the risk of regressions has to be balanced against the upsides. For instance, if you're interested in performance (which people evidently are; why else would they be spending effort on improving display list performance?), VBOs offer a much better and more flexible approach than display lists to managing vertex data in GPU memory, allowing the programmer to reduce unnecessary data transfers to the GPU. Modern OpenGL enables techniques that reduce the host-side overhead of communicating with the GPU (AZDO, "approaching zero driver overhead"), and allows preparing buffers in parallel. And so on.
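
          To make the contrast concrete, here's a minimal sketch in C against the classic OpenGL API (illustrative only; the helper names and geometry layout are mine, not taken from any particular workstation app):

          #define GL_GLEXT_PROTOTYPES
          #include <GL/gl.h>

          /* Legacy path: record geometry into a display list once, replay it later.
           * The driver alone decides how and where the vertex data is stored. */
          GLuint make_list(const float *verts, int n)
          {
              GLuint list = glGenLists(1);
              glNewList(list, GL_COMPILE);
              glBegin(GL_TRIANGLES);
              for (int i = 0; i < n; i++)
                  glVertex3fv(&verts[3 * i]);
              glEnd();
              glEndList();
              return list;  /* draw with glCallList(list) */
          }

          /* VBO path: the application explicitly owns a buffer in GPU memory. */
          GLuint make_vbo(const float *verts, int n)
          {
              GLuint vbo;
              glGenBuffers(1, &vbo);
              glBindBuffer(GL_ARRAY_BUFFER, vbo);
              glBufferData(GL_ARRAY_BUFFER, (GLsizeiptr)(n * 3 * sizeof(float)),
                           verts, GL_STATIC_DRAW);
              return vbo;   /* draw with glVertexPointer + glDrawArrays */
          }

          With the VBO version, a dynamic model can update just the changed sub-range with glBufferSubData instead of re-recording an entire display list.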

          It's not so easy to redo things that work and have been used by thousands of others to build their own programs or solid models or whatever, without risking breakage.
          Sure, never said it would be easy. Still, it's a bit surprising and disappointing that the workstation vendors have evidently made zero effort to take advantage of current technology.

          That's the same reason the AMD closed-source driver is still around for workstation software: AMD could kick it out the window right now and replace it with a properly written, fast open-source driver, but then how much existing workstation software would break?
          That is indeed a good argument for modern graphics APIs: there's much less impedance mismatch between the programming model and how the actual GPU hardware works, and thus much less room for driver quirks in emulating the old fixed-function pipeline.

          • #15
            Mesa already applies different hacks to various programs and games:
            https://gitlab.freedesktop.org/mesa/...-defaults.conf

            Such programs will never be fixed, but nonetheless need to run decently; otherwise, end users will blame the underlying operating system or drivers, since other solutions already run them just fine. Non-compliant behaviour is properly isolated and only catered to in those documented cases. So, no need to worry. Cheers.

            • #16
              Originally posted by jabl

              That is indeed a good argument for never doing anything. Why risk breaking things? Speaking of which, why are people implementing optimizations in the GPU driver? That risks breaking things too! While we're at it, let's fire all the programmers in the world, as any change they make risks causing a new bug or changing behavior.
              Optimizations in the driver are totally different from completely refactoring the rendering part of a complex piece of software like, dunno, a CAD package or whatever.
              It takes a lot of time and effort, and a very specialized, well-trained crew: the OpenGL programming model is not exactly the easiest thing around, GLSL requires people to understand how rendering happens, not to mention multi-threading issues here and there.

              You can be as ironic as you like; companies do their own assessments and move accordingly.

              • #17
                Originally posted by chocolate
                Mesa already applies different hacks to various programs and games:
                https://gitlab.freedesktop.org/mesa/...-defaults.conf

                Such programs will never be fixed, but nonetheless need to run decently; otherwise, end users will blame the underlying operating system or drivers, since other solutions already run them just fine. Non-compliant behaviour is properly isolated and only catered to in those documented cases. So, no need to worry. Cheers.
                AFAIK the existing app hacks all fall under the category of "This app is doing something non-conformant with the OpenGL spec so we have to hack around that to make sure it runs".

                This is (AFAIK) a new category of hack, where the app is perfectly compliant and runs, but the hack is put in place to do something unsafe in the driver in order to improve performance.

                I suppose you could argue the "mesa_glthread" option is similar, although I'd argue it's in a third category of its own, for a few reasons I won't bore everyone with here.
                Last edited by smitty3268; 20 October 2021, 04:56 PM.

                • #18
                  Originally posted by smitty3268
                  It's impossible to prove a negative, so it's clear to me that you'll never be convinced by anything anyone says.
                  You can't just make an allegation and then expect to defend it with a weak line like that!

                  Don't speak as if you know something, when all you have to support it is cynicism.

                  Originally posted by smitty3268
                  Ten years from now you could still be saying that someone might take advantage of it next year.
                  No, that's not what I'm thinking. I'm speculating that it came about either by way of a specific request, or from the simple knowledge that some workstation apps rely on this behavior of their proprietary driver, which they are duplicating in an attempt to provide performance parity.

                  • #19
                    Originally posted by coder
                    You can't just make an allegation and then expect to defend it with a weak line like that!

                    Don't speak as if you know something, when all you have to support it is cynicism.
                    You're the one accusing me of cynicism with no proof. I'm just relaying what is going on from what I've seen.

                    No, that's not what I'm thinking. I'm speculating that it came about either by way of a specific request, or from the simple knowledge that some workstation apps rely on this behavior of their proprietary driver, which they are duplicating in an attempt to provide performance parity.
                    And that's ridiculous. If that were the case, they would have enabled it for those apps; it's one line of XML to do so. They enabled it for one app, and no others, because that's exactly the app they were targeting all along. You can look at the gitlab issues if you don't believe me.
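
                    For anyone who hasn't seen the mechanism: per-app overrides live in the drirc XML file linked earlier in the thread. Here's a sketch of what such an entry looks like, with a made-up application name and the mesa_glthread option mentioned above standing in for whatever option was actually flipped:

                    <driconf>
                      <device>
                        <!-- hypothetical entry: the app name and option here are placeholders -->
                        <application name="Some Workstation App" executable="someworkstationapp">
                          <option name="mesa_glthread" value="true"/>
                        </application>
                      </device>
                    </driconf>

                    That's the whole cost of opting another application in, which is exactly why the absence of other entries is telling.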

                    • #20
                      Originally posted by smitty3268
                      You're the one accusing me of cynicism with no proof. I'm just relaying what is going on from what I've seen.
                      You made an incredibly specific claim that this was being done for no reason other than to juice their SPEC bench numbers. Don't make claims you can't back up with evidence. It's as simple as that.

                      If you're going to speculate, then be clear that it's nothing more than speculation.
