RadeonSI Lands Another "Very Large" Optimization To Further Boost SPECViewPerf


    Phoronix: RadeonSI Lands Another "Very Large" Optimization To Further Boost SPECViewPerf

    In recent months we have seen a lot of RadeonSI optimizations focused on SPECViewPerf with AMD seemingly trying to get this open-source OpenGL driver into very capable shape moving forward for workstation GL workloads. Hitting Mesa 22.0-devel today is yet another round of patches for tuning SPECViewPerf...


  • #2
    Important context:

    This is a hack for the SPECViewPerf benchmark only. You have to manually enable the option on a per-application basis, since it's non-compliant with the OpenGL spec.

    I'm not sure how I really feel about such things going into the driver, but I guess I understand why it's done.
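
    For what it's worth, per-application opt-ins in Mesa normally go through the drirc/driconf mechanism, so flipping a switch like this on presumably looks something like the sketch below. The option name and application match are placeholders of mine, not the actual ones from these patches:

    ```xml
    <!-- ~/.drirc, in Mesa's standard driconf format. "specviewperf_buffer_hack"
         is a placeholder option name, not the real one from the patches. -->
    <driconf>
      <device driver="radeonsi">
        <application name="SPECViewPerf" executable="viewperf">
          <option name="specviewperf_buffer_hack" value="true"/>
        </application>
      </device>
    </driconf>
    ```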

    • #3
      Originally posted by smitty3268 View Post
      This is a hack for the SPECViewPerf benchmark only.
      Are you certain there's no way it could benefit certain apps? IMO, that would make it legit, so long as it's always off by default.

      • #4
        Originally posted by coder View Post
        Are you certain there's no way it could benefit certain apps? IMO, that would make it legit, so long as it's always off by default.
        I mean sure, technically, if you want to spend the time carefully analyzing an app and determining that the way it does its buffer operations is safe, you could enable this. No idea how common that is.

        I don't anticipate that happening for anything but workstation benchmarks that they need to super-optimize for marketing purposes, though.
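
        To make "safe buffer operations" concrete: my assumption (the thread never spells it out) is that the hack relaxes the implicit synchronization the GL spec requires between a buffer update and earlier draws that still read from that buffer. A minimal C sketch of the two patterns such an audit would have to distinguish, assuming a current GL context and Linux/Mesa headers:

        ```c
        #define GL_GLEXT_PROTOTYPES
        #include <GL/gl.h>
        #include <GL/glext.h>

        /* Relies on the spec: glBufferSubData must appear to wait for any
         * in-flight draw still reading vbo. A driver that skips that sync
         * (my assumption about what this hack does) can corrupt that draw. */
        static void upload_and_draw(GLuint vbo, const void *frame_data,
                                    GLsizeiptr size, GLsizei vertex_count)
        {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferSubData(GL_ARRAY_BUFFER, 0, size, frame_data);
            glDrawArrays(GL_TRIANGLES, 0, vertex_count);
        }

        /* Safe even without that sync: orphaning hands the driver fresh
         * storage, so the old draw keeps reading the old allocation. */
        static void upload_and_draw_orphaned(GLuint vbo, const void *frame_data,
                                             GLsizeiptr size, GLsizei vertex_count)
        {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_STREAM_DRAW); /* orphan */
            glBufferSubData(GL_ARRAY_BUFFER, 0, size, frame_data);
            glDrawArrays(GL_TRIANGLES, 0, vertex_count);
        }
        ```

        An app that only ever uses the second pattern, or otherwise never overwrites data a pending draw still needs, would be the kind of candidate you could sign off on.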

        • #5
          Originally posted by smitty3268 View Post
          I mean sure, technically, if you want to spend the time carefully analyzing an app and determining that the way it does its buffer operations is safe, you could enable this. No idea how common that is.

          I don't anticipate that happening for anything but workstation benchmarks that they need to super-optimize for marketing purposes, though.
          Well, it seems to me that app developers could enable this option, themselves. I'm guessing that's the intention.

          • #6
            Originally posted by smitty3268 View Post
            Important context:

            This is a hack for the SPECViewPerf benchmark only. You have to manually enable the option on a per-application basis, since it's non-compliant with the OpenGL spec.

            I'm not sure how I really feel about such things going into the driver, but I guess I understand why it's done.
            Are they just hoping their customers won't notice? I can understand optimising for workloads used in benchmarks, since unless the benchmark is worthless those workloads are common in other 'real' programs. But creating specific, non-spec-compliant (the irony) hacks that don't benefit anything else just to up a benchmark score seems to defeat the purpose. It also indicates that the company is more interested in appearing to be good than actually being good, which seems counterproductive with a professional crowd. Is this actually the way it is?

            I'm not denying that Nvidia does things like this as well, but they are known for it and their reputation suffers accordingly.

            • #7
              Guess it's too much to ask those workstation apps and SPECViewPerf to refactor their rendering backends every few decades? VBOs have been in core OpenGL since 2003 and programmable shaders since 2004, in ARB extensions even earlier. Yet here we are in 2021 optimizing frickin' display lists, a mechanism designed for early-1990s Silicon Graphics fixed-function GPUs.
              Last edited by jabl; 19 October 2021, 04:35 PM.
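
              For readers who haven't touched this corner of OpenGL, the contrast jabl is describing is roughly the following, as a hedged C sketch (the triangle data is illustrative; assumes a current GL context and Linux/Mesa headers):

              ```c
              #define GL_GLEXT_PROTOTYPES
              #include <GL/gl.h>
              #include <GL/glext.h>

              /* The early-90s path: record fixed-function commands once and
               * replay them with glCallList(). Deprecated in GL 3.0, removed
               * from core profiles in 3.1. */
              static GLuint build_triangle_display_list(void)
              {
                  GLuint list = glGenLists(1);
                  glNewList(list, GL_COMPILE);
                  glBegin(GL_TRIANGLES);
                  glVertex3f(-1.0f, -1.0f, 0.0f);
                  glVertex3f( 1.0f, -1.0f, 0.0f);
                  glVertex3f( 0.0f,  1.0f, 0.0f);
                  glEnd();
                  glEndList();
                  return list;   /* per frame: glCallList(list); */
              }

              /* The core-since-1.5 (2003) path: vertex data lives in a GPU
               * buffer object and is drawn from there directly. */
              static GLuint build_triangle_vbo(void)
              {
                  static const GLfloat verts[] = {
                      -1.0f, -1.0f, 0.0f,
                       1.0f, -1.0f, 0.0f,
                       0.0f,  1.0f, 0.0f,
                  };
                  GLuint vbo;
                  glGenBuffers(1, &vbo);
                  glBindBuffer(GL_ARRAY_BUFFER, vbo);
                  glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
                  return vbo;    /* per frame: bind, set attrib pointers, glDrawArrays() */
              }
              ```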

              • #8
                Originally posted by jabl View Post
                Guess it's too much to ask those workstation apps and SPECViewPerf to refactor their rendering backends every few decades?
                Yes, it's too much.
                Would you redo a looooot of work from scratch just to get the same output at the end, with possible new bugs or different behaviours?
                It's not easy to redo things that work and have been used by thousands of others to build their own programs or solid models or whatever, without risking breakage.

                That's the same reason the AMD closed-source driver is still around for workstation software: AMD could kick it out the window right now and replace it with a properly written and fast open-source driver, but then how much existing workstation software would break?

                • #9
                  Originally posted by coder View Post
                  Well, it seems to me that app developers could enable this option, themselves. I'm guessing that's the intention.
                  It's absolutely not.

                  This is about some suit in an office looking at the open source drivers and comparing them to their binary drivers, and saying "why is this benchmark so slow? we need to fix that before we can use the new drivers".

                  It's dumb, but that's the way the world works.

                  These are the kind of dumb hacks I imagine the proprietary drivers are full of.

                  • #10
                    Originally posted by Teggs View Post
                    Are they just hoping their customers won't notice?
                    It's funny how easily you're triggered at AMD by an unverified comment from a pseudonymous forum poster.

                    We don't actually know if there was a use case for this, other than those benchmarks.
