R500 Mesa Is Still No Match To An Old Catalyst Driver

  • #46
    Originally posted by RealNC View Post
    And the result is a half-assed desktop:

    With the OSS drivers, you can get Xv and GL video with VSync without the need to disable compositing. But the 3D speed of the drivers cripples the cards. Half-assed #1.

    With fglrx you can use the available 3D power of your card, but the workarounds and compromises needed for fglrx (watch only with compositing disabled, don't use Xv, enable VSync in CCC, bla bla) are crippling the 2D and video experience. Half-assed #2.
    I didn't say the solution was optimal. Just that it works.
    You don't have to disable compositing to fix the tearing.
    The "only" thing you have to do, is to enable vsync and use mplayer with opengl as output.

    • #47
      Originally posted by marek View Post
      I bet ColorTiling isn't enabled on Lucid. This one should improve performance A LOT with r300c and even more with r300g since only the latter has full tiling support.

      -Marek
      Yeah! Thanks for the hint. Finally I am able to play OpenArena under KMS: http://www.rojtberg.net/390/gallium3d-is-taking-over-2/
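      In case anyone else wants to try it, a minimal xorg.conf sketch for switching it on explicitly with the xf86-video-ati "radeon" driver (whether it is already the default depends on the driver version):
      Code:
      Section "Device"
          Identifier "Radeon"
          Driver     "radeon"
          Option     "ColorTiling" "on"
      EndSection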

      • #48
        Originally posted by tball View Post
        I didn't say the solution was optimal. Just that it works.
        You don't have to disable compositing to fix the tearing.
        The "only" thing you have to do, is to enable vsync and use mplayer with opengl as output.
        No, that won't do. It's still tearing. It only works with composite disabled (KDE will disable it when you watch in fullscreen, but you're assuming that we want to watch in fullscreen).

        • #49
          OpenGL + Vsync removes tearing for me with fglrx under smplayer, composition or no composition.

          • #50
            Originally posted by RealNC View Post
            No, that won't do. It's still tearing. It only works with composite disabled (KDE will disable it when you watch in fullscreen, but you're assuming that we want to watch in fullscreen).
            Yes of course I assume you would watch the movie in fullscreen :-)

            • #51
              Originally posted by Kano View Post
              Well, I also like XBMC, and it has now gained VAAPI support too, which is a good move. XBMC only uses OpenGL to render, but fglrx fails to use the auto mode, which is usually the same as the GLSL renderer. NVIDIA cards have no problems using the GLSL renderer.
              9 times out of 10 this is caused by stupid developers using Cg syntax in GLSL shaders. I can't be bothered to go through XBMC's shaders right now but I'm willing to bet on this.

              AMD's drivers may have tons of bugs but their shader compiler is reliable and strict. Nvidia's compiler, on the other hand, is a weird half-breed that accepts illegal HLSL/Cg syntax unless explicitly told not to. Add the fact that most developers are clueless and the result is a compatibility shitstorm on the users' backs - or ugly vendor lock-in, depending on how you look at it.

              Mesa will also pay the price once the next round of distros picks up GLSL support.

              • #52
                Well, I think ATI cards are not really common among XBMC devs; even VAAPI support is developed on NVIDIA...

                • #53
                  Although if what BlackStar says is true, the biggest problem is not the developers not having ATi cards but instead being completely oblivious to language standards and implementing things wrong.

                  • #54
                    Originally posted by nanonyme View Post
                    Although if what BlackStar says is true, the biggest problem is not the developers not having ATi cards but instead being completely oblivious to language standards and implementing things wrong.
                    In their defence, some mistakes are nigh impossible to catch without rigorous testing:
                    Code:
                    gl_FragColor = vec4(color, 1);
                    That's incorrect by GLSL standards (no implicit conversions from int to float), yet nvidia will accept it. Fglrx/mesa, on the other hand, will raise an error. (The correct code is, of course, "vec4(color, 1.0)").
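                    To make the vec4 example above concrete, here is a minimal, purely illustrative fragment shader ("color" stands in for whatever varying the real shader uses):
                    Code:
                    varying vec3 color;

                    void main()
                    {
                        /* gl_FragColor = vec4(color, 1);     the int-literal form discussed above */
                        gl_FragColor = vec4(color, 1.0);      /* the explicit-float form that works everywhere */
                    }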

                    Head over to the blender forums or gamedev.net and you'll see bugs like these are very, very common. Most developers won't think twice before signing off on a shader (hey, it works fine here!), but it's the users that pay the price in the end.

                    In my experience, at least 50% of all GLSL shaders contain such defects. Maybe XBMC has actually hit a fglrx bug (does it work on mesa?) but I think a genuine programmer error is at least as likely.

                    • #55
                      Hmm, I'll make a note of that if I end up writing OpenGL code. It kinda puzzles me though that you can't have the compiler cry out for stuff like that. I should probably read up on vec4 implementations.

                      • #56
                        Originally posted by nanonyme View Post
                        It kinda puzzles me though that you can't have the compiler cry out for stuff like that.
                        Which compiler? GLSL is compiled by the gfx driver when your program is running.
                        The driver will complain, but your IDE doesn't (and cannot).
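                        A rough C sketch of where that complaint surfaces, assuming an already-current GL 2.0 context (the shader string is just an example):
                        Code:
                        #include <stdio.h>
                        #include <GL/glew.h>   /* or any other GL 2.0 loader; a current context is assumed */

                        static const GLchar *source =
                            "void main() { gl_FragColor = vec4(1.0); }";

                        static void check_shader(void)
                        {
                            GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
                            glShaderSource(shader, 1, &source, NULL);
                            glCompileShader(shader);            /* the driver compiles the GLSL here, at runtime */

                            GLint status;
                            glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
                            if (status != GL_TRUE) {
                                char log[4096];
                                glGetShaderInfoLog(shader, sizeof(log), NULL, log);
                                fprintf(stderr, "driver says:\n%s\n", log);   /* the only "compiler output" you ever see */
                            }
                        }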

                        • #57
                          Oh, ick. Couldn't you write a GLSL validator for use with IDEs though?

                          • #58
                            (Namely a parser, not a compiler; a parser should be able to spot things that violate the standard, right?)

                            • #59
                              Even on nvidia, you can enable strict mode using a #version directive (which will turn most portability warnings into errors and stop the code from running). Most developers don't bother to add version directives either.
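                              For reference, that's just the first non-comment line of the shader, e.g.:
                              Code:
                              #version 110
                              /* requesting a specific language version up front; per the above, most drivers then report violations as errors */
                              varying vec3 color;
                              void main() { gl_FragColor = vec4(color, 1.0); }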

                              An offline validator wouldn't really help. No two GLSL implementations are 100% the same, so even if your program passes the validator it might not run on real hardware (intel being the biggest offender, but fglrx and nvidia have their share of bugs, too; just try playing with arrays of structures/uniforms/varyings to see what I mean).

                              So far, I've found the least painful approach is to develop on Ati, port to Nvidia (about a 95% chance of working out of the box) and try to port to Intel if absolutely necessary (about a 0% chance of running without modifications). If you go with Nvidia first and then port to Ati, you'll have about an 80% chance of running without issues, so it's not as efficient. If you go with Intel first, you'll simply waste your time: their OpenGL drivers don't follow the specs to any reasonable extent (admittedly, it's better on the Linux side, but their Windows drivers are awful).

                              In any case, if you value portability you will need at least two GPUs to test on. Yes, we have it easy nowadays: back in 2004, only nvidia produced working drivers for OpenGL; everything else was utter garbage!

                              • #60
                                Mesa does have a stand-alone GLSL compiler:
                                http://mesa3d.org/shading.html

                                (And of course you can always run a software implementation of Mesa regardless of the 3D driver used. It's painfully slow, but can be good enough to check for problems.)
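                                For instance, assuming your libGL is Mesa's, something like this runs an application on the software rasterizer no matter which hardware driver would normally be used (the program name is a placeholder):
                                Code:
                                # one-off run on Mesa's software renderer, handy as a portability sanity check
                                LIBGL_ALWAYS_SOFTWARE=1 ./my-gl-app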
