RadeonSI Lands Another "Very Large" Optimization To Further Boost SPECViewPerf

  • coder
    replied
    Originally posted by smitty3268
    It's not speculation.
    You've provided no evidence to the contrary.

    Originally posted by smitty3268
    This conversation is boring, I'm done.
    Cool story, bro.

  • smitty3268
    replied
    Originally posted by coder
    You made an incredibly specific claim that this was being done for no reason other than to juice their SPEC bench numbers. Don't make claims you can't back up with evidence. It's as simple as that.

    If you're going to speculate, then be clear that it's nothing more than speculation.
    It's not speculation.

    This conversation is boring, I'm done.

  • coder
    replied
    Originally posted by smitty3268
    You're the one accusing me of cynicism with no proof. I'm just relaying what is going on from what I've seen.
    You made an incredibly specific claim that this was being done for no reason other than to juice their SPEC bench numbers. Don't make claims you can't back up with evidence. It's as simple as that.

    If you're going to speculate, then be clear that it's nothing more than speculation.

  • smitty3268
    replied
    Originally posted by coder
    You can't just make an allegation and then expect to defend it with a weak line like that!

    Don't speak as if you know something, when all you have to support it is cynicism.
    You're the one accusing me of cynicism with no proof. I'm just relaying what is going on from what I've seen.

    No, that's not what I'm thinking. I'm speculating that it came about either by way of a specific request, or from the simple knowledge that some workstation apps utilize this behavior of their proprietary driver and they are duplicating it, in an attempt to provide performance parity.
    And that's ridiculous. If that were the case, they would have enabled it for those apps; it's one line of XML to do so. They did so for one app, and no others, because that's exactly the app they were targeting all along. You can look at the gitlab issues if you don't believe me.
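    For reference, the "one line of XML" refers to Mesa's driconf application profiles, which key driver options to an executable name. A minimal sketch of such an entry follows; the application, executable, and option names here are hypothetical, not the ones from the actual merge request:

    ```xml
    <!-- Hypothetical driconf fragment: opts a single binary into an option -->
    <application name="Example Viewer" executable="exampleviewer">
        <option name="fast_displaylists" value="true"/>
    </application>
    ```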

  • coder
    replied
    Originally posted by smitty3268
    It's impossible to prove a negative, so it's clear to me that you'll never be convinced by anything anyone says.
    You can't just make an allegation and then expect to defend it with a weak line like that!

    Don't speak as if you know something, when all you have to support it is cynicism.

    Originally posted by smitty3268
    10 years from now you can still be saying someone might take advantage of it next year.
    No, that's not what I'm thinking. I'm speculating that it came about either by way of a specific request, or from the simple knowledge that some workstation apps utilize this behavior of their proprietary driver and they are duplicating it, in an attempt to provide performance parity.

  • smitty3268
    replied
    Originally posted by chocolate
    Mesa already applies different hacks to various programs and games:
    https://gitlab.freedesktop.org/mesa/...-defaults.conf

    Such programs will never be fixed, but nonetheless need to run decently; otherwise, end users will blame the underlying operating system or drivers, since other solutions already run them just fine. Non-compliant behaviour is properly isolated and only catered to in those documented cases. So, no need to worry. Cheers.
    AFAIK the existing app hacks all fall under the category of "This app is doing something non-conformant with the OpenGL spec so we have to hack around that to make sure it runs".

    This is (AFAIK) a new category of hack, where the app is perfectly compliant and runs, but the hack is put in place to do something unsafe in the driver in order to improve performance.

    I suppose you could argue the "mesa_glthread" option is similar, although I'd argue it's in a third category of its own, for a few reasons I won't bore everyone with here.
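    As context for readers unfamiliar with it, mesa_glthread is toggled through the same drirc mechanism the posts above describe. A minimal self-contained profile file might look like this; only the option name is real, the application entry is invented for illustration:

    ```xml
    <driconf>
        <device>
            <!-- mesa_glthread is a real Mesa option; this app entry is illustrative -->
            <application name="Example Game" executable="examplegame">
                <option name="mesa_glthread" value="true"/>
            </application>
        </device>
    </driconf>
    ```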
    Last edited by smitty3268; 20 October 2021, 04:56 PM.

  • blackshard
    replied
    Originally posted by jabl
    That is indeed a good argument for never doing anything. Why risk breaking things? Speaking of which, why are people implementing optimizations in the GPU driver? That risks breaking things too! While we're at it, let's fire all the programmers in the world, as any change they make risks causing a new bug or changing behavior.
    Optimizations in the driver are totally different from completely refactoring the rendering part of a complex piece of software like, dunno, a CAD package or whatever. It takes a lot of time and effort, and a very specialized and trained crew: the OpenGL programming model is not exactly the easiest thing around, and GLSL requires people to understand how rendering happens, not to mention the multi-threading issues here and there.

    You can be as ironic as you like; companies do their assessments and move accordingly.

  • chocolate
    replied
    Mesa already applies different hacks to various programs and games:
    https://gitlab.freedesktop.org/mesa/...-defaults.conf

    Such programs will never be fixed, but nonetheless need to run decently; otherwise, end users will blame the underlying operating system or drivers, since other solutions already run them just fine. Non-compliant behaviour is properly isolated and only catered to in those documented cases. So, no need to worry. Cheers.

  • jabl
    replied
    Originally posted by blackshard
    Would you redo a looooot of work from scratch just to end up with the same output, plus possible new bugs or different behaviours?
    That is indeed a good argument for never doing anything. Why risk breaking things? Speaking of which, why are people implementing optimizations in the GPU driver? That risks breaking things too! While we're at it, let's fire all the programmers in the world, as any change they make risks causing a new bug or changing behavior.

    More seriously, the risk of regressions has to be balanced against the upsides. For instance, if you're interested in performance (which people evidently are; why else would they be spending effort on improving display list performance?), VBOs offer a much better and more flexible approach to managing vertex data in GPU memory than display lists, allowing the programmer to reduce unnecessary data transfers to the GPU. Modern OpenGL enables techniques that reduce the host-side overhead of communicating with the GPU (AZDO), and allows preparing buffers in parallel. Etc. etc.

    It's not so easy to redo things that work and have been used by thousands of others to build their own programs or solid models or whatever, without risking breakage.
    Sure, never said it would be easy. Still, a bit surprising and disappointing that the workstation vendors have evidently made zero effort in taking advantage of current technology.

    That's the same reason the AMD closed-source driver is still around for workstation software: AMD could kick it out the window right now and replace it with a properly written and fast open-source driver, but then how much existing workstation software would break?
    That is indeed a good argument for modern graphics APIs, as there's much less impedance mismatch between the programming model and how the actual GPU hardware works, and thus much less room for driver quirks in emulating the old fixed-function pipeline.

  • smitty3268
    replied
    Originally posted by coder
    How do you know that?

    I don't buy your explanation that it was done purely for the benchmarks, because official SPEC benchmark results have to be run in controlled conditions.

    It seems quite likely to me that one or more developers of workstation apps requested this optimization. Or maybe it's a port of an optimization controlled by the same switch, which some existing workstation apps are currently using.
    It's impossible to prove a negative, so it's clear to me that you'll never be convinced by anything anyone says. 10 years from now you can still be saying someone might take advantage of it next year.

    All I will say is that if you read the discussions on gitlab (many of them over the past year, not just this single one) it's very clear what's going on. Feel free to draw your own conclusions, I'm not trying to convince anyone.
    Last edited by smitty3268; 20 October 2021, 01:18 AM.
