RadeonSI Lands Another "Very Large" Optimization To Further Boost SPECViewPerf

  • Teggs
    replied
    Originally posted by coder View Post
    It's funny how easily you're triggered at AMD by an unverified comment from a pseudonymous forum poster.

    We don't actually know if there was a use case for this, other than those benchmarks.
    I'm happy that you were amused. Did you notice this part?

    Originally posted by Teggs View Post
    Is this actually the way it is?
    Perhaps I've seen too much foolery in the hardware/driver segment and should just wait for more information, if it comes, before castigating possible additional foolery. Rumours that Nvidia is considering implementing their GPP program again and deliberately ceasing GPU production in November create a poor frame of reference, even if that's the other company. :/



  • coder
    replied
    Originally posted by smitty3268 View Post
    This is about some suit in an office looking at the open source drivers and comparing them to their binary drivers, and saying "why is this benchmark so slow? we need to fix that before we can use the new drivers".
    How do you know that?

    I don't buy your explanation that it was done purely for the benchmarks, because official SPEC benchmark results have to be run in controlled conditions.

    It seems quite likely to me that one or more developers of workstation apps requested this optimization. Or maybe it's a port of an optimization controlled by the same switch, which some existing workstation apps are currently using.



  • coder
    replied
    Originally posted by Teggs View Post
    Are they just hoping their customers won't notice?
    It's funny how easily you're triggered at AMD by an unverified comment from a pseudonymous forum poster.

    We don't actually know if there was a use case for this, other than those benchmarks.



  • smitty3268
    replied
    Originally posted by coder View Post
    Well, it seems to me that app developers could enable this option, themselves. I'm guessing that's the intention.
    It's absolutely not.

    This is about some suit in an office looking at the open source drivers and comparing them to their binary drivers, and saying "why is this benchmark so slow? we need to fix that before we can use the new drivers".

    It's dumb, but that's the way the world works.

    These are the kind of dumb hacks I imagine the proprietary drivers are full of.



  • blackshard
    replied
    Originally posted by jabl View Post
    Guess it's too much to ask those workstation apps and specviewperf to refactor their rendering backends every few decades?
    Yes, it's too much.
    Would you redo a looooot of work from scratch just to end up with the same output, plus possible new bugs or different behaviours?
    It's not easy to redo things that work and that thousands of others have built on for their own programs or solid models or whatever, without risking breakage.

    That's the same reason the AMD closed-source driver is still around for workstation software: AMD could kick it out the window right now and replace it with a properly written, fast open-source driver, but then how much existing workstation software would break?



  • jabl
    replied
    Guess it's too much to ask those workstation apps and specviewperf to refactor their rendering backends every few decades? VBOs have been around since 2003 and programmable shaders since 2004 in core OpenGL, in ARB extensions even earlier. Yet here we are in 2021, optimizing frickin' display lists, an optimization designed for early-1990s Silicon Graphics fixed-function GPUs.
    Last edited by jabl; 19 October 2021, 04:35 PM.
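    For context on what jabl is contrasting: a display list records immediate-mode commands for the driver to replay later, while the modern path uploads vertex geometry once into a VBO and draws from it with shaders. A minimal sketch of the two paths, assuming a current OpenGL context has already been created elsewhere (GLFW, SDL, etc.); it's illustrative, not a complete program:

    ```c
    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>

    /* Legacy path (early-1990s style): record immediate-mode commands
     * into a display list, then replay it. The driver has to do all the
     * optimization at replay time. */
    static void draw_legacy_display_list(void)
    {
        GLuint list = glGenLists(1);
        glNewList(list, GL_COMPILE);
        glBegin(GL_TRIANGLES);
        glVertex3f( 0.0f,  1.0f, 0.0f);
        glVertex3f(-1.0f, -1.0f, 0.0f);
        glVertex3f( 1.0f, -1.0f, 0.0f);
        glEnd();
        glEndList();
        glCallList(list);
    }

    /* Modern path (GL 2.0+): upload the vertices once into a VBO and
     * draw from GPU memory; the app, not the driver, owns the layout. */
    static void draw_modern_vbo(void)
    {
        static const GLfloat verts[] = {
             0.0f,  1.0f, 0.0f,
            -1.0f, -1.0f, 0.0f,
             1.0f, -1.0f, 0.0f,
        };
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);
        glEnableVertexAttribArray(0);
        glDrawArrays(GL_TRIANGLES, 0, 3);
    }
    ```

    The optimization discussed in the article presumably amounts to making the first path behave more like the second one under the hood.
    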



  • Teggs
    replied
    Originally posted by smitty3268 View Post
    Important context:

    This is a hack for the specviewperf benchmark only. You have to manually enable the option on a per-application basis to use it since it's non-compliant with the OpenGL spec.

    I'm not sure how I really feel about such things going into the driver, but I guess I understand why it's done.
    Are they just hoping their customers won't notice? I can understand optimising for workloads used in benchmarks, since unless the benchmark is worthless those workloads are common in other 'real' programs. But creating specific, non-spec-compliant (the irony) hacks that don't benefit anything else just to up a benchmark score seems to defeat the purpose. It also indicates that the company is more interested in appearing to be good than actually being good, which seems counterproductive with a professional crowd. Is this actually the way it is?

    I'm not ignoring that Nvidia does things like this too, but they are known for it, and their reputation suffers accordingly.
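    For reference, per-application toggles like the one smitty3268 describes are normally flipped through Mesa's driconf/drirc mechanism. A minimal sketch of what that looks like in `~/.drirc`; the `<application>` matching and `<option>` element are real driconf syntax, but the option name below is made up for illustration, since the article's actual switch name isn't quoted here:

    ```xml
    <!-- Hypothetical ~/.drirc fragment: option name is illustrative only. -->
    <driconf>
      <device>
        <application name="SPECViewPerf" executable="viewperf">
          <option name="specviewperf_dlist_hack" value="true"/>
        </application>
      </device>
    </driconf>
    ```

    The point being that nothing is enabled globally: only processes matching the named executable get the non-compliant behaviour.
    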



  • coder
    replied
    Originally posted by smitty3268 View Post
    I mean sure, technically if you want to spend the time carefully analyzing an app and determining that the way it does its buffer operations is safe, you could enable this. No idea how common that is.

    I don't anticipate that happening for anything but workstation benchmarks they need to super-optimize for marketing purposes, though.
    Well, it seems to me that app developers could enable this option, themselves. I'm guessing that's the intention.



  • smitty3268
    replied
    Originally posted by coder View Post
    Are you certain there's no way it could benefit certain apps? IMO, that would make it legit, so long as it's always off by default.
    I mean sure, technically if you want to spend the time carefully analyzing an app and determining that the way it does its buffer operations is safe, you could enable this. No idea how common that is.

    I don't anticipate that happening for anything but workstation benchmarks they need to super-optimize for marketing purposes, though.



  • coder
    replied
    Originally posted by smitty3268 View Post
    This is a hack for the specviewperf benchmark only.
    Are you certain there's no way it could benefit certain apps? IMO, that would make it legit, so long as it's always off by default.

