Radeon Driver Picks Up VBOs, OQ Support

  • #16
    Just to put things into perspective: Nexuiz makes heavy use of GLSL, but there isn't a single loop in its GLSL code.



    • #17
      Originally posted by BlackStar View Post
      If a GPU doesn't support GLSL I force it down the fixed-function pipeline and forget about it (which includes every Intel GPU currently in existence.)
      Intel GPUs don't support GLSL?
      I have an X4500HD, and according to glxinfo it has OpenGL 2.1. I don't know a whole lot about OpenGL, but I do know that I am getting hardware acceleration on my Intel chip because it can run Urban Terror.
      Is it software based GLSL or something?



      • #18
        And another quick question.
        When they refer to r300, does that include r400 and r500 like r600 can refer to rv770?



        • #19
          Originally posted by pvtcupcakes View Post
          And another quick question.
          When they refer to r300, does that include r400 and r500 like r600 can refer to rv770?
          The r300 3D driver supports r3xx, r4xx, and r5xx chips. However, the r5xx hardware supports more advanced shader instructions like loops. So when GLSL support is added, r5xx chips will be able to do more things in hardware than r3xx/r4xx chips.



          • #20
            Originally posted by osiris View Post
            R300 card owners won't actually benefit (performance-wise) from GLSL support, because R300 chips don't support loops and branches. While branches can be rewritten, if an app tries to use loops in GLSL programs the driver will just fall back to software. The best the R300 can do is ARB_vertex/fragment_program.
            Not true. Loops with a constant number of iterations can be unrolled, and so can branches. Consider the branch if (c) { x = a; } else { x = b; }. This can be rewritten in GLSL as x = a*c + b*!c, or simply x = mix(b, a, c). In other words, when branching is not supported, both code paths are evaluated and the irrelevant result is discarded. This is still much faster than a software fallback. It's how the proprietary drivers work, and it's a must.
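            A minimal sketch of the branch-flattening trick described above (in Python, purely illustrative -- in reality the driver's shader compiler performs this rewrite on GPU instructions):

```python
def flatten_branch(c, a, b):
    """Branch-free select: computes if (c) { x = a; } else { x = b; }
    the way a compiler might lower it for hardware without branching.
    Mirrors GLSL's mix(b, a, c) with a boolean mask."""
    mask = 1.0 if c else 0.0       # the condition as a float mask
    # Both "arms" of the branch are evaluated; the mask discards one.
    return a * mask + b * (1.0 - mask)

print(flatten_branch(True, 3.0, 7.0))   # 3.0
print(flatten_branch(False, 3.0, 7.0))  # 7.0
```

            Both arms are always computed, so this costs extra ALU work, but it avoids the software fallback entirely.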



            • #21
              Originally posted by Eosie View Post
              Not true. Loops with a constant number of iterations can be unrolled, and so can branches.
              Wait, what? -funroll-loops all over again, except this time with GPUs? Better tell all the Gentoo users.
              Last edited by nanonyme; 08-17-2009, 03:55 PM.
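              For what it's worth, the unrolling in question is the same classic transformation; a toy sketch (Python, hypothetical example -- real shader compilers do this on GPU instruction streams):

```python
def dot4_looped(a, b):
    """Four-element dot product written with a loop whose trip count
    is known at compile time."""
    s = 0.0
    for i in range(4):             # constant trip count: unrollable
        s += a[i] * b[i]
    return s

def dot4_unrolled(a, b):
    """The same computation with the loop unrolled by hand, as a
    compiler would emit for hardware without loop instructions."""
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3]

v, w = [1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]
print(dot4_looped(v, w) == dot4_unrolled(v, w))  # True
```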



              • #22
                Originally posted by pvtcupcakes View Post
                Intel GPUs don't support GLSL?
                I have an X4500HD, and according to glxinfo it has OpenGL 2.1. I don't know a whole lot about OpenGL, but I do know that I am getting hardware acceleration on my Intel chip because it can run Urban Terror.
                Is it software based GLSL or something?
                Let's put it this way: I've yet to encounter a single Intel driver that manages to render anything more complex than "hello world" 3d graphics. Results range from buggy rendering to outright kernel crashes.

                In an interesting twist of fate, Intel's Linux drivers are ahead of their Windows counterparts in OpenGL support. On Windows you cannot even use features like FBOs or (shudder) PBuffers! It's not that the hardware isn't capable, but that the drivers are severely lacking. Imagine ATI's OpenGL drivers six years ago - in a single word, "bad".

                Take a look at Google Earth's config files; they offer an interesting glimpse into the world of 3D programming and how much 3D drivers suck outside of AMD and Nvidia. Last time I looked, they pretty much forced all Intel GPUs to use D3D rendering unconditionally (through some rather colorful comments).

                Regarding Urban Terror, chances are the vertex shaders are evaluated on your CPU. Most Intel hardware lacks hardware vertex shaders (only the last two generations have them, and until the most recent one it was often faster to run them on the CPU than on the GPU).



                • #23
                  Originally posted by BlackStar View Post
                  Let's put it this way: I've yet to encounter a single Intel driver that manages to render anything more complex than "hello world" 3d graphics. Results range from buggy rendering to outright kernel crashes.
                  I call BS. Well, maybe you've not encountered it, but with the 2.6.31 kernel, Mesa 7.5, and the 2.8.0 Intel xorg driver, I can run games through Wine as well as Linux-native games (e.g. NWN, UT2004) without any problem. No rendering errors. No random crashes.



                  • #24
                    Well, I have kernel 2.6.31, Mesa 7.6, and the 2.8.0 Intel xorg driver, and I get the same crashes in Wine (Trackmania, GTA:VC, the "odd world domination" graphics demo, etc.) as with 2.6.30. On top of that, with 2.6.31 the screen is corrupted at 640x480 when using OpenGL. All of these games and demos run fine with the 2.6.0 driver, Mesa 7.4, and kernel 2.6.28 :\



                    • #25
                      That's too bad, but have you filed bugs?

                      The Intel devs are in general very responsive and regressions are usually not that hard to track down (git bisect).



                      • #26
                        Originally posted by crumja View Post
                    I call BS. Well, maybe you've not encountered it, but with the 2.6.31 kernel, Mesa 7.5, and the 2.8.0 Intel xorg driver, I can run games through Wine as well as Linux-native games (e.g. NWN, UT2004) without any problem. No rendering errors. No random crashes.
                        Yeah, tell that to my users: "change your operating system, install a beta kernel and my program will work. Promise!"

                        No, wait, it won't. Every single Intel release manages to break *something*, as CME kindly points out. The same holds true on the windows side, too.

                        My solution: until Intel creates an OpenGL implementation that actually *works*, down the OpenGL 1.1 path they go (along with S3, SiS, and every other shitty implementation out there). Even software rendering tends to work better than those drivers.

                        Yes, I sound bitter because I *am* bitter. I've spent more time debugging and working around Intel driver issues than every other vendor *combined*. It doesn't help that their Windows drivers are even worse than their Linux counterparts, either.



                        • #27
                          As whizse said, if you really do see bugs, file a bug report. Which extensions are you using that aren't working properly? Are you absolutely sure it's a regression in the drivers and not a problem in your configuration or application?

                          As for beta code, the only beta piece of software I am using is the kernel. By the way, switching back to 2.6.30 works as well. The only time the Intel driver broke something for me was the 2.6 series, which was released in order to get features out to certain customers within a deadline. Since then, the devs have worked hard to stabilize the driver.



                          • #28
                            Originally posted by crumja View Post
                            As whizse said, if you really do see bugs, file a bug report. Which extensions are you using that aren't working properly? Are you absolutely sure it's a regression in the drivers and not a problem in your configuration or application?
                            Easier said than done. I do not have access to Intel hardware, so I have to rely on my users for data. Creating test cases is tricky or downright impossible (the user never answers back, or you have a hard deadline and all you can do is patch the code and pray for the best).

                            Still, I do report bugs to all major IHVs regularly.

                            As for application vs. driver errors: if something works on AMD and Nvidia but fails on Intel while glGetError() reports no error, I am inclined to blame the Intel drivers. Especially if the failure involves a blue screen or kernel panic.

                            The ARB is also to blame here - a comprehensive, mandatory OpenGL conformance test should have been a priority alongside OpenGL 3.0. However, they are still the same old inefficient design committee that botched OpenGL 2.0, 3.0, and every major version in between. (VBOs, GLSL, FBOs, geometry shaders - every major addition to OpenGL has been mismanaged and delayed for ridiculous amounts of time. It's 2009 and EXT_anisotropy still hasn't made it to core. Good job!)



                            • #29
                              Okay, I guess I (and others who have replied?) come from another perspective. After all, I'm just one guy; fortunately, I don't have users to worry about.

                              Anyway, for me, the Intel drivers have steadily improved with the latest releases, git master of Mesa very much so.

                              It's still frightfully easy to cause the GPU to hang, requiring a reboot. But tools such as intel_gpu_dump have made it much easier to create good bug reports, and the developers are very friendly and responsive.



                              • #30
                                Originally posted by nanonyme View Post
                                Wait, what? -funroll-loops all over again except this time with GPU's? Better tell all Gentoo users.
                                Hey man, I resemble that remark! ;D

