Open-Source GPU Drivers Causing Headaches In KDE 4.5


  • Originally posted by Hephasteus View Post
    Sorry R300 gpu is DX7. It's not a opengl 2.0 gpu and can't be. I think R500 is where you have to go for that. NV30 is nvidia 5 series. That's a dx8 gpu. Not a dx9 gpu and not a opengl 2.0 gpu.
    That's incorrect. R300 supports DX9 (SM2.0), as does NV30 (SM2.0a). R400 is SM2.0b and R500 is SM3.0.

    OpenGL 2.1 demands some features that are not supported by those GPUs, namely texture repeat/mirror modes for NPOT textures (implemented in R600), separate stencil (R500?) and separate blend equations (GeForce 7 series, IIRC).

    The closed-source drivers for those cards advertise OpenGL 2.1 even if they don't fully support it. They get away with this because few games/apps use those unsupported features (mainly because they are not all that useful and because they are not well supported on those older GPUs).

    DirectX 9 is slightly more in tune with what the hardware actually supports, mainly because the spec was developed alongside the hardware. AFAIK, the only required feature that's *not* supported is vertex texture fetch on R500 and earlier, where the requirement was worked around on a technicality (the card claimed to support VTF but didn't expose any texture formats, which was allowed by the letter of the spec, if not its spirit).
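
    To make the "advertised vs. actually supported" point concrete, here is a minimal C sketch of my own (not anything kwin or the drivers actually do) that ignores the version string and probes the specific extensions mentioned above. It assumes a current GL context, uses naive substring matching for brevity, and the helper names (has_ext, report_gl21_gaps) are purely illustrative:

    #include <GL/gl.h>
    #include <stdio.h>
    #include <string.h>

    /* Naive check: real code should match whole extension names, not substrings. */
    static int has_ext(const char *all, const char *name)
    {
        return all && strstr(all, name) != NULL;
    }

    void report_gl21_gaps(void)   /* assumes a current GL context */
    {
        const char *version = (const char *)glGetString(GL_VERSION);
        const char *exts    = (const char *)glGetString(GL_EXTENSIONS);

        printf("Driver claims GL %s\n", version ? version : "(none)");
        printf("Full NPOT (repeat/mirror wrap): %d\n",
               has_ext(exts, "GL_ARB_texture_non_power_of_two"));
        printf("Separate blend equations:       %d\n",
               has_ext(exts, "GL_EXT_blend_equation_separate"));
        printf("Separate (two-sided) stencil:   %d\n",
               has_ext(exts, "GL_ATI_separate_stencil"));
    }

    A driver can report "2.1" in the version string while any of these probes comes back false, which is exactly the situation described above.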

    Comment


    • Originally posted by marek View Post
      Progress? Bullshit. There is no reason to demand GL3, there is no feature in GL3 you need for desktop effects.
      Theoretically, we don't even /need/ compositing for desktop effects. Shit, we don't /need/ desktop effects -at all-, since the poor guys using the VESA driver can't use them. I guess we don't really /need/ 3D support either. What got those things implemented? User demand and tons of hacking followed up by hard work. You're telling desktop developers to not raise the bar? That's irresponsible. Complacency is the worst enemy of progress.

      Originally posted by marek View Post
      Game consoles are stuck in the D3D9 level of features and there are lots of games with very impressive graphics.
      How are game consoles relevant to this, exactly?

      Originally posted by marek View Post
      I don't get why people keep repeating the same GL3 bullshit over and over again, yet still they have probably never read the GL3 spec. Even today, GL3 is still niche in the game industry, and that won't change anytime soon unless Intel and Apple implement GL3 in their drivers/systems, that's more than 50% of the market, plus there are tons of legacy ATI and NVIDIA hardware out there.
      Game consoles don't get hardware upgrades, and PCs generally use DX for gaming anyway. Where are most games now? DX9 compliance. But the work being done on DX9 is stagnant compared to the DX10 and DX11 APIs, because there -are- companies implementing those features. You're complaining that KDE is implementing GL3? Complain that Crysis uses DX10/DX11 while you're at it!

      Perhaps I'm reading this wrong: you're justifying the limited feature set of the open drivers because game consoles and Apple haven't implemented these technologies? I don't really see those as reasons for a desktop environment not to use features that were standardized years ago and are already implemented in the blobs (which are the only drivers officially supported by the vendors of these cards anyway).

      You also seem to be telling me that when a lot of people file and complain about a graphics bug, that doesn't push the development teams to get it fixed? lol! Look at compositing support - that was a feature that could be classified as completely unnecessary, and it was extremely buggy out of the gate. Now, if no one had used it, would it have been triaged and prioritized the way it was? Not a chance!

      Originally posted by marek View Post
      Driver developers already respond to much higher expectations than KDE could ever make. There are even games, you know.
      Games aren't desktop environments. A game won't make every graphics driver developer turn and say "hey, we need to fix these." A desktop environment? Maybe, since there are probably a lot more people running these desktop environments on Linux than there are people playing some game with said features implemented. And if you think you're earning cred by belittling KDE's work, then you're kidding yourself.

      Comment


      • Originally posted by kazetsukai View Post
        Theoretically, we don't even /need/ compositing for desktop effects. Shit, we don't /need/ desktop effects -at all-, since the poor guys using the VESA driver can't use them. I guess we don't really /need/ 3D support either. What got those things implemented? User demand and tons of hacking followed up by hard work. You're telling desktop developers to not raise the bar? That's irresponsible. Complacency is the worst enemy of progress.
        Give me a list of graphics techniques (requiring GL3) we should have in desktop effects. Then we can talk.

        Originally posted by kazetsukai View Post
        Complain that Crysis uses DX10/DX11 while you're at it!
        I can't complain about Crysis because I know the graphics algorithms it uses; I have already implemented some of them on R500 and could implement the rest if I had enough time to spare. Crysis mainly uses DX9, with the DX10 option being a result of their marketing strategy rather than a necessity.

        Originally posted by kazetsukai View Post
        Perhaps I'm reading this wrong: You're justifying the limited featureset of Open Drivers because game consoles and Apple haven't implemented these technologies? I don't really see those as reasons for not implementing features standardized years ago, in a desktop environment and already implemented in the blobs (which are the only officially supported drivers of the cards by their respective vendors anyway).
        This has nothing to do with open drivers. I am talking from the position of a graphics developer who has experience with graphics algorithms and has actually been paid for it, and I am analyzing what path needs to be taken to get the best desktop experience and the broadest hardware coverage with the most effects possible. GL3 is not part of that, no matter what you might think. The age of a standard doesn't matter; the market share does. That is the practical point of view. I have already described the uselessness of GL3 for desktop effects from the technical point of view, and illustrated with games that you can have very impressive graphics even with the oldest and crappiest APIs available.

        Comment


        • @kazetsukai: the point is that OpenGL 3.x does not offer any features over 2.1 that are relevant to desktop composition. Floating-point textures, R2VB, TBOs and UBOs, fences, cubemap filtering, OpenCL interop - none of those matters.

          The only features that are relevant are geometry instancing (a tiny performance improvement if you have thousands of windows open at the same time), multisampled textures (to implement antialiasing, although these will likely have bad side effects on font rendering, so they are not useful) and maybe geometry shaders (I can't imagine why, but you never know).
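
          As a rough illustration of the instancing point (my own sketch, not kwin code; it assumes a GL 3.1 context, quad geometry already bound, per-window transforms already uploaded, and a hypothetical set_window_uniforms helper), the difference boils down to one draw call versus one call per window:

          #define GL_GLEXT_PROTOTYPES 1   /* in practice you'd resolve this through your GL loader */
          #include <GL/gl.h>
          #include <GL/glext.h>

          extern void set_window_uniforms(int i);   /* hypothetical helper */

          void draw_windows_gl21(int window_count)
          {
              /* GL 2.1 path: one draw call per window quad */
              for (int i = 0; i < window_count; ++i) {
                  set_window_uniforms(i);
                  glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
              }
          }

          void draw_windows_gl31(int window_count)
          {
              /* GL 3.x path: a single call; the vertex shader selects the
                 per-window transform with gl_InstanceID */
              glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, window_count);
          }

          With a few dozen windows the saving is negligible, which is why I called it a tiny improvement.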

          In short, OpenGL 3.x does not enable kwin to do anything that cannot be done in OpenGL 2.1. It won't make code simpler (since they'd have to keep the 2.1 and 1.5 fallbacks intact) and it would bring little new to the table.

          Finally, games *do* matter because they are the driving force behind graphics hardware and drivers. Games would benefit *massively* from widespread OpenGL 3.x support, but we are not there yet. And if they can make do with OpenGL 2.1, then there's little reason why kwin cannot.

          My prediction: the next major kwin version will be able to instantiate OpenGL 3.x contexts but will not actually use them for anything just yet (same capabilities as 2.1 with a slightly cleaned up codepath - maybe). It will just lay down the foundation for future work.
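
          For what it's worth, "instantiating a 3.x context" is mostly a matter of asking GLX for one. A minimal sketch (my own, assuming an X11 Display and an already-chosen GLXFBConfig, and falling back to the 2.1 path when the extension is missing) would look something like this:

          #include <GL/glx.h>   /* the ARB_create_context tokens come via glxext.h */

          typedef GLXContext (*CreateCtxFn)(Display *, GLXFBConfig, GLXContext,
                                            Bool, const int *);

          GLXContext create_gl3_context(Display *dpy, GLXFBConfig fbc)
          {
              CreateCtxFn create = (CreateCtxFn)glXGetProcAddress(
                  (const GLubyte *)"glXCreateContextAttribsARB");
              if (!create)
                  return NULL;   /* no GLX_ARB_create_context: stick with the 2.1 path */

              const int attribs[] = {
                  GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
                  GLX_CONTEXT_MINOR_VERSION_ARB, 1,
                  None
              };
              return create(dpy, fbc, NULL, True, attribs);
          }

          Getting the context is the easy part; actually using 3.x-only features is the work I don't expect to land immediately.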

          Comment


          • What's the difference between GLSL 1.2 and 1.3? I'm simply guessing that's why they are going OpenGL 3?

            Comment


            • Originally posted by V!NCENT View Post
              What's the difference between GLSL 1.2 and 1.3? I'm simply guessing that's why they are going OpenGL 3?
              GLSL 1.3 renames the "attribute" and "varying" keywords to "in" and "varying", respectively, in order to make room for more shader stages (geometry, etc). It also introduces derivatives (ddx, ddy), a few new texture functions (texture*lod that allows you to control the mipmap it samples from), deprecates built-in uniforms, modifies the syntax for fragment output for easier MRT and adds a couple of new keywords (meant for OpenGL ES compatibility).

              Again, none of those features is necessary - or even useful - to desktop composition. I can understand kwin wanting a GL3-compatible codebase for future work but I will be very surprised if it offers any new functionality (over GL2) in the immediate future. I certainly cannot see them dropping 50% of their user-base for no tangible benefit.

              Comment


              • Well, with radeon (R600c/g) and Mesa git I didn't even notice they raised the OpenGL requirements. It just worked.
                With the fucking Intel driver it still doesn't work properly (and I'm using the latest graphics stack from git). With the latest stable Intel driver I can't even start KDE to disable compositing!
                ## VGA ##
                AMD: X1950XTX, HD3870, HD5870
                Intel: GMA45, HD3000 (Core i5 2500K)

                Comment


                • Originally posted by BlackStar View Post
                  GLSL 1.3 renames the "attribute" and "varying" keywords to "in" and "varying", respectively, in order to make room for more shader stages (geometry, etc). It also introduces derivatives (ddx, ddy), a few new texture functions (texture*lod that allows you to control the mipmap it samples from), deprecates built-in uniforms, modifies the syntax for fragment output for easier MRT and adds a couple of new keywords (meant for OpenGL ES compatibility).
                  DDX and DDY are already in GLSL 1.1.

                  Comment


                  • Originally posted by marek View Post
                    DDX and DDY are already in GLSL 1.1.
                    Oops, just checked the specs and you are right. Although I kinda question their usefulness, since 1.1 doesn't offer texture*Grad functions.

                    Fake edit to my previous post (the corrected syntax is sketched below):
                    - "in" and "varying" should obviously be "in" and "out"
                    - texture*Lod should be texture*Grad. The former exists since GLSL 1.1 (maybe earlier) and allows you to select the mipmap level, but it's the latter that allows you to specify the gradients explicitly (necessary for artifact-free parallax mapping, for instance).
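
                    To make the keyword changes concrete, here is a minimal side-by-side sketch of the same texturing shaders in GLSL 1.20 and 1.30, written as the C string literals they would normally live in (names like vs_120 and uv are just illustrative):

                    /* GLSL 1.20: attribute/varying, built-in gl_FragColor, texture2D() */
                    static const char *vs_120 =
                        "#version 120\n"
                        "attribute vec4 position;\n"
                        "attribute vec2 texcoord;\n"
                        "varying vec2 uv;\n"
                        "void main() { uv = texcoord; gl_Position = position; }\n";

                    static const char *fs_120 =
                        "#version 120\n"
                        "uniform sampler2D tex;\n"
                        "varying vec2 uv;\n"
                        "void main() { gl_FragColor = texture2D(tex, uv); }\n";

                    /* GLSL 1.30: 'in'/'out' replace 'attribute'/'varying', the fragment
                       output is a user-declared 'out' variable, and the texture functions
                       become overloads (texture(), textureGrad(), ...) */
                    static const char *vs_130 =
                        "#version 130\n"
                        "in vec4 position;\n"
                        "in vec2 texcoord;\n"
                        "out vec2 uv;\n"
                        "void main() { uv = texcoord; gl_Position = position; }\n";

                    static const char *fs_130 =
                        "#version 130\n"
                        "uniform sampler2D tex;\n"
                        "in vec2 uv;\n"
                        "out vec4 frag_color;\n"
                        "void main() { frag_color = texture(tex, uv); }\n";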

                    Comment


                    • And a slightly off-topic question: does Mesa implement the noise*() functions for GLSL?

                      Comment
