AMD, please give us EGL or decent direct rendering.


  • #31
    Originally posted by Dandel View Post
    Yes, I do agree that having OpenGL ES/EGL working properly on ATI drivers is really important. However, there are also other things at the "really, really want" level, such as end users wanting what runs on Nvidia cards with Wine to also work on AMD/ATI cards.
    According to both bug reports Wine seems to be passing NaN to the driver for point size and the driver is returning GL_INVALID_VALUE, which presumably is different from what happens on the NVidia implementation.

    Behaviour when passing NaN to an OpenGL function is undefined IIRC, as long as the implementation does not crash, so I would argue that returning GL_INVALID_VALUE is perfectly legal and is probably the "most correct" behaviour.

    The winehq bug ticket has some discussion about a patch to intercept NaN values before they get to the driver; it seems the patch works in some places but not others, which suggests that it is not blocking NaN under all conditions.
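
    For illustration only, a guard of the kind that patch is described as adding might look roughly like the sketch below. This is not the actual Wine code, and the helper name is made up:

        #include <math.h>      /* isfinite() */
        #include <GL/gl.h>

        /* Hypothetical guard: only forward the point size to the driver when it
         * is a finite, positive value; NaN fails both checks because it never
         * compares greater than zero and is not finite. */
        static void safe_point_size(GLfloat size)
        {
            if (!isfinite(size) || size <= 0.0f)
                size = 1.0f;   /* fall back to the GL default point size */
            glPointSize(size);
        }

    If the real patch only wraps some of the call paths, that would match the reports of it working in some places but not others.
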
    Last edited by bridgman; 09-08-2011, 07:05 PM.



    • #32
      Originally posted by bridgman View Post
      According to both bug reports Wine seems to be passing NaN to the driver for point size and the driver is returning GL_INVALID_VALUE, which presumably is different from what happens on the NVidia implementation.

      Behaviour when passing NaN to an OpenGL function is undefined IIRC, as long as the implementation does not crash, so I would argue that returning GL_INVALID_VALUE is perfectly legal and is probably the "most correct" behaviour.

      The winehq bug ticket has some discussion about a patch to intercept NaN values before they get to the driver; it seems the patch works in some places but not others, which suggests that it is not blocking NaN under all conditions.
      Yes, I do agree that the NaN behavior is probably correct for the most part. However, when you look at the functions on a per-function basis using the Khronos man pages, the definition of glPointSize clearly states that GL_INVALID_VALUE is to be generated when the value is less than or equal to 0 (on OpenGL 2.1, 3.3, 4.1), and GL_INVALID_OPERATION when the call is made between glBegin/glEnd (OpenGL 2.1 only).
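
      (As a sketch of those two spec-defined conditions, assuming a current legacy GL context with a clean error state; this is not taken from any of the bug reports:)

          #include <stdio.h>
          #include <GL/gl.h>

          /* Per the man page, a size <= 0 must generate GL_INVALID_VALUE
           * and leave the current point size unchanged. */
          void demo_point_size_errors(void)
          {
              glPointSize(0.0f);
              if (glGetError() == GL_INVALID_VALUE)
                  printf("size <= 0 generated GL_INVALID_VALUE, as specified\n");

              /* In OpenGL 2.1 a call between glBegin/glEnd must generate
               * GL_INVALID_OPERATION instead (checked here after glEnd). */
              glBegin(GL_POINTS);
              glPointSize(2.0f);
              glEnd();
              if (glGetError() == GL_INVALID_OPERATION)
                  printf("call inside glBegin/glEnd generated GL_INVALID_OPERATION\n");
          }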

      Anyway, the other bug with OpenGL has to do with occlusion queries... for some reason UT3 and other games based on UE3 do not run correctly, or have serious performance problems.



      • #33
        Originally posted by Dandel View Post
        Yes, I do agree that the NaN behavior is probably correct for the most part. However, when you look at the functions on a per-function basis using the Khronos man pages, the definition of glPointSize clearly states that GL_INVALID_VALUE is to be generated when the value is less than or equal to 0 (on OpenGL 2.1, 3.3, 4.1), and GL_INVALID_OPERATION when the call is made between glBegin/glEnd (OpenGL 2.1 only).
        Yep. My understanding is that only the error logic specific to an OpenGL function is included in the per-function spec, not all of the general error conditions such as passing NaN (which is undefined for all OpenGL calls AFAIK). I've never been 100% sure of that though... fortunately I'm not one of the OpenGL architects.

        Originally posted by Dandel View Post
        Anyway, the other bug with OpenGL has to do with occlusion queries... for some reason UT3 and other games based on UE3 do not run correctly, or have serious performance problems.
        I did a search for "occlusion", "unreal", "ut3", etc. on the bug tracker and didn't find anything related to occlusion queries on fglrx, just ticket 54 about NaN / GL_INVALID_VALUE. Have you seen a related bug ticket on the tracker anywhere?
        Last edited by bridgman; 09-08-2011, 08:06 PM.



        • #34
          Originally posted by bridgman View Post
          Yep. My understanding is that only the error logic specific to this OpenGL function is included here, not all of the general error conditions such as passing NaN (which is undefined for all OpenGL calls AFAIK). I've never been 100% sure of that though... fortunately I'm not one of the OpenGL architects.



          I did a search for "occlusion", "unreal", "ut3", etc. on the bug tracker and didn't find anything related to occlusion queries on fglrx, just ticket 54 about NaN / GL_INVALID_VALUE. Have you seen a related bug ticket on the tracker anywhere?
          No, actually the bug that progressed on winehq is the most up to date... it appears that UT3 has problems due to occlusion queries giving the same output over and over again...

          See Comment 13 on winehq's bug report: http://bugs.winehq.org/show_bug.cgi?id=23048#c13

          There is some bug in fglrx's handling of occlusion queries that causes Unreal Tournament 3 to have problems. I can also confirm with some tests that UT3 does not run at all with the current drivers. Currently, when I test on a clean install, the program simply does not render anything but a black screen when using Catalyst 11.8 (Sabayon 6, latest updates).
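
          For anyone not familiar with the mechanism, the occlusion-query pattern that UE3-style engines rely on looks roughly like the sketch below (plain GL 1.5-style usage, not code from the game or the driver; on Linux these entry points would normally be fetched via glXGetProcAddress). The symptom described in the ticket would be every such readback returning the same count:

              #define GL_GLEXT_PROTOTYPES
              #include <GL/gl.h>
              #include <GL/glext.h>

              /* Hypothetical renderer hook: draw a bounding volume inside a query
               * and read back how many samples passed the depth test.  An engine
               * typically skips drawing the real object when the count is zero. */
              GLuint count_visible_samples(void (*draw_bounding_volume)(void))
              {
                  GLuint query, samples = 0;

                  glGenQueries(1, &query);
                  glBeginQuery(GL_SAMPLES_PASSED, query);
                  draw_bounding_volume();          /* depth-tested; colour writes usually off */
                  glEndQuery(GL_SAMPLES_PASSED);

                  /* Blocking readback for simplicity; engines normally poll
                   * GL_QUERY_RESULT_AVAILABLE a frame later instead of stalling. */
                  glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
                  glDeleteQueries(1, &query);
                  return samples;
              }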



          • #35
            Originally posted by Dandel View Post
            No, actually the bug that progressed on winehq is the most up to date... it appears that UT3 has problems due to occlusion queries giving the same output over and over again...
            Thanks. I had looked at that log, but rightly or wrongly my takeaway was "it's making the same call over and over again with the same parameters and getting the same result back each time, which makes sense". Presumably it's looking for a *different* result, but I have no idea what result it's looking for or why it thinks the result it's getting is incorrect.



            • #36
              Originally posted by entropy View Post
              Hum, is that any different from using the "System Settings" facility?

              System Settings -> Configure desktop effects -> Advanced

              If I check "Disable functionality checks" and "Enable direct rendering", I cannot see any performance degradation with direct rendering vs. indirect rendering.
              Because the setting is completely ignored and has been removed in master.
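
              (For anyone wondering which path they actually got: whether a GLX context ended up direct can be checked at runtime, for example with a small helper like the sketch below, or simply by running "glxinfo | grep direct" in a terminal. The function name here is made up; it assumes an existing X display and GLX context.)

                  #include <stdio.h>
                  #include <GL/glx.h>

                  /* Report whether the given GLX context renders directly or is
                   * routed through the X server (indirect rendering). */
                  void report_rendering_path(Display *dpy, GLXContext ctx)
                  {
                      printf("direct rendering: %s\n",
                             glXIsDirect(dpy, ctx) ? "Yes" : "No");
                  }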



              • #37
                Good to see that my attempt to improve this stuff is moving a bit

                One question though: bridgman, what are the plans to improve this? Could this be better with the next Catalyst release? Please don't say years...



                • #38
                  Originally posted by mgraesslin View Post
                  Because the setting is completely ignored and has been removed in master.
                  Ok, thanks.

                  On the other hand - why is it still there?
                  Is it that there are some K* guidelines that forbid changing GUI layouts in a bug-fix release?
                  I mean, things like that might also give you woes in a bug report,
                  when it's not perfectly clear to the average Joe (like me) what's actually going on due to settings being misinterpreted.



                  • #39
                    On the other hand - why is it still there?
                    Because code had to be adjusted to ensure that everything works fine.

                    I mean, things like that might also give you woes in a bug report,
                    when it's not perfectly clear to the average Joe (like me) what's actually going on due to settings being misinterpreted.
                    That's why we fixed it. But we are very conservative about pushing changes into the branches. Only clear bug fixes go into the branch. Changes to behavior (this is one) go only into feature releases, and code which would cause a regression can only go into feature releases. Removing the checkbox is actually a regression for users of the proprietary NVIDIA driver, as that one allowed enabling/disabling direct rendering.



                    • #40
                      I see, this is a driver-specific behaviour.
                      My bad. Thanks for replying!



                      • #41
                        Why bother with AMD/ATI?

                        Frankly, I would not recommend an AMD GPU, full stop. Please dump or disable it and go get an Nvidia GPU.



                        • #42
                          Originally posted by sgprince View Post
                          Frankly, I would not recommend an AMD GPU, full stop. Please dump or disable it and go get an Nvidia GPU.
                          If you use Linux and buy Nvidia, you are shooting yourself in the foot. Nvidia is no friend of free software, whilst AMD time and again shows that they want to work with us who use FOSS.

                          When you buy AMD you are helping FOSS and Linux; when you buy Nvidia you are supporting all things proprietary.



                          • #43
                            Originally posted by Rallos Zek View Post
                            If you use Linux and buy Nvidia, you are shooting yourself in the foot. Nvidia is no friend of free software, whilst AMD time and again shows that they want to work with us who use FOSS.

                            When you buy AMD you are helping FOSS and Linux; when you buy Nvidia you are supporting all things proprietary.
                            Well, that's the ambiguity of the word 'support'.

                            There are people claiming that Nvidia has the best Linux support,
                            while others point out that they're not supporting Linux at all.
                            Strangely enough, it seems they both have a point.



                            • #44
                              Originally posted by Rallos Zek View Post
                              If you use Linux and buy Nvidia, you are shooting yourself in the foot. Nvidia is no friend of free software, whilst AMD time and again shows that they want to work with us who use FOSS.

                              When you buy AMD you are helping FOSS and Linux; when you buy Nvidia you are supporting all things proprietary.
                              The FOSS driver is not 100% optimized, as has been explained and discussed over and over. Performance-wise, I try to recall the figure, but it is something like 70% of the binary driver's performance. For some that is enough, but not necessarily for everyone.

                              I think if you are lacking features and performance, then one can be justified in considering options that allow for full performance and features.

                              The same features continue to be absent in the open-source driver (look at the radeon feature matrix) and the performance is not fully optimized, so obviously it's not 100% open and the resources invested are lacking. It is just a matter of opinion whether you accept this caveat and can live with it.

                              Even though the proprietary driver is vilified here, if one buys a brand-new card they expect full features and performance, whatever the card can do. Unfortunately, the benchmark or comparative scale is the latest Windows OS. But these hardware companies are catering their editions to MS and the MS OS. Unless the open drivers are open to the point where there is full openness to work from, there are always restrictions. So it's based on tradeoffs?

                              Also, buying the AMD card, one is expected to bug-track and troubleshoot if you want to contribute to the ongoing improvement? It's a good idea, but I am curious what time and expertise are needed... there is not as much investment from the company itself, so you are expected to do a significant share.

                              Also, the improvement or progress of the FOSS driver is not motivated by money as much as the proprietary one is, so there is a positive and a negative side to that.
                              Last edited by Panix; 09-10-2011, 12:47 PM.



                              • #45
                                I find the discussion of which company (AMD or Nvidia) supports Linux better, and how they do it, as interesting as the next person, but let's stay on topic here. We're discussing fglrx and what needs to be done to improve it.

