LLVMpipe With Intel's GLSL2 Compiler


  • LLVMpipe With Intel's GLSL2 Compiler

    Phoronix: LLVMpipe With Intel's GLSL2 Compiler

    Last month we tested out Intel's new GLSL compiler for Mesa with the ATI Radeon classic Mesa and Gallium3D drivers to see how this GL Shading Language compiler, designed by Intel employees for their own hardware and open-source driver, works for the other open-source drivers, since all of the Mesa drivers will be affected once this "GLSL2" compiler is merged into the Mesa code-base by month's end. Intel's new shader compiler worked fine with the ATI Radeon graphics driver except in Warsow, where serious regressions were visible; in the other games capable of running off Mesa, the experience was fine. What we have been curious to test since then with this new OpenGL shader compiler is the LLVMpipe driver -- a Gallium3D driver we have been very excited about, as it finally provides a better software rasterizer for Linux by leveraging Gallium3D and the Low-Level Virtual Machine (LLVM) compiler infrastructure to accelerate the Mesa state tracker atop a modern multi-core CPU that supports SSE4 instructions. We have now finished running tests of Intel's GLSL2 branch with the most recent LLVMpipe driver code.

    http://www.phoronix.com/vr.php?view=15197
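
    For anyone who wants to reproduce this kind of run, the quickest sanity check is confirming that Mesa really selected llvmpipe rather than a hardware driver. Below is a minimal sketch in C, assuming freeglut and the Mesa development headers are installed; LIBGL_ALWAYS_SOFTWARE and GALLIUM_DRIVER are Mesa environment variables (the latter may depend on how your Mesa was built):

    /* check_renderer.c -- print which GL renderer Mesa picked.
     * Build:  gcc check_renderer.c -o check_renderer -lglut -lGL
     * Run:    LIBGL_ALWAYS_SOFTWARE=1 GALLIUM_DRIVER=llvmpipe ./check_renderer
     */
    #include <stdio.h>
    #include <GL/glut.h>

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
        glutCreateWindow("renderer check");   /* a current GL context now exists */

        printf("GL_VENDOR:   %s\n", glGetString(GL_VENDOR));
        printf("GL_RENDERER: %s\n", glGetString(GL_RENDERER));
        printf("GL_VERSION:  %s\n", glGetString(GL_VERSION));
        /* With the software path forced, GL_RENDERER should mention llvmpipe. */
        return 0;
    }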

  • #2
    Actually, regarding the texture-from-pixmap glx extension, that is NOT NECESSARILY REQUIRED for compiz to operate! The 0.9 branch of compiz uses the "copytex" plugin in its place.

    http://forum.compiz.org/viewtopic.ph...t=10402#p75616

    In other words, you can build the 0.9 branch and test again... you might get compiz-on-cpu.
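
    For completeness, whether a compositor can rely on texture-from-pixmap at all comes down to whether the GLX implementation advertises it. Here is a minimal sketch in C, assuming the Xlib and GLX development headers are installed, that simply checks the extension string:

    /* tfp_check.c -- does the GLX implementation advertise
     * GLX_EXT_texture_from_pixmap?  A sketch only.
     * Build:  gcc tfp_check.c -o tfp_check -lGL -lX11
     */
    #include <stdio.h>
    #include <string.h>
    #include <X11/Xlib.h>
    #include <GL/glx.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) {
            fprintf(stderr, "cannot open display\n");
            return 1;
        }

        /* strstr is a crude substring test, but good enough here since no
         * other GLX extension uses this name as a prefix. */
        const char *exts = glXQueryExtensionsString(dpy, DefaultScreen(dpy));
        int has_tfp = exts && strstr(exts, "GLX_EXT_texture_from_pixmap") != NULL;

        printf("GLX_EXT_texture_from_pixmap: %s\n",
               has_tfp ? "advertised" : "not advertised");

        XCloseDisplay(dpy);
        return 0;
    }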

    Comment


    • #3
      Originally posted by droidhacker View Post
      Actually, regarding the texture-from-pixmap glx extension, that is NOT NECESSARILY REQUIRED for compiz to operate! The 0.9 branch of compiz uses the "copytex" plugin in its place.

      http://forum.compiz.org/viewtopic.ph...t=10402#p75616

      In other words, you can build the 0.9 branch and test again... you might get compiz-on-cpu.
      Oh yeah, didn't think about Compiz 0.9 yet. Will give that a shot.
      Michael Larabel
      http://www.michaellarabel.com/

      Comment


      • #4
        So it's basically up to a 30% performance loss but only 1-5% less energy used? Eureka!
        Is it normalized for the same number of frames? So effectively frames per joule, or joules per frame?
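
        For what it's worth, that normalization is straightforward once average power and frame rate come from the same run. A small illustrative sketch in C follows; the numbers are made-up placeholders, not the article's results:

        /* frames_per_joule.c -- sketch of the normalization asked about above.
         * All figures are hypothetical placeholders.
         * Build:  gcc frames_per_joule.c -o frames_per_joule
         */
        #include <stdio.h>

        int main(void)
        {
            double fps_baseline = 20.0, watts_baseline = 100.0;  /* hypothetical */
            double fps_glsl2    = 14.0, watts_glsl2    = 95.0;   /* hypothetical */

            /* frames per joule = (frames/second) / (joules/second) */
            double fpj_baseline = fps_baseline / watts_baseline;
            double fpj_glsl2    = fps_glsl2    / watts_glsl2;

            printf("baseline: %.3f frames/joule (%.2f J/frame)\n",
                   fpj_baseline, 1.0 / fpj_baseline);
            printf("glsl2:    %.3f frames/joule (%.2f J/frame)\n",
                   fpj_glsl2, 1.0 / fpj_glsl2);
            return 0;
        }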

        Comment


        • #5
          You realize OpenArena doesn't even use GLSL, right? Or I assume it doesn't, as it uses a fairly stock ioquake3 engine, and ioquake3 doesn't use GLSL without being patched, at least as of a few minutes ago when I last checked. It also doesn't *look* like it uses GLSL.

          In fact, do any of those games use GLSL? I'm not sure that any of them do. (Anyone should feel free to correct me on this.)

          Maybe this compiler is also involved in the other types of shaders, or it has some kind of "idle overhead"; that's the only way I can explain the performance differences measured in the other games.

          Comment


          • #6
            Originally posted by MaxToTheMax View Post
            You realize OpenArena doesn't even use GLSL, right? Or I assume it doesn't, as it uses a fairly stock ioquake3 engine, and ioquake3 doesn't use GLSL without being patched, at least as of a few minutes ago when I last checked. It also doesn't *look* like it uses GLSL.
            I grabbed the timedemo and config PTS uses and ran it with MESA_GLSL=log, which dumps all shaders used to files, and ended up with nothing, so OpenArena doesn't seem to use shaders at all (see the sketch below for what that check actually catches)...

            Regarding the breakage in Warsow, I still can't find a bug filed about it, which is kind of a disappointment.
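
            To make that check concrete: a program only shows up in the MESA_GLSL=log output if it actually hands GLSL source to glCompileShader, so even a trivial test like the sketch below produces output, while an engine that never touches GLSL produces none. This assumes freeglut and a GLSL-capable Mesa build:

            /* glsl_touch.c -- compile one trivial fragment shader so that a
             * MESA_GLSL=log run has something to record.  A sketch only.
             * Build:  gcc glsl_touch.c -o glsl_touch -lglut -lGL
             * Run:    MESA_GLSL=log ./glsl_touch
             */
            #define GL_GLEXT_PROTOTYPES
            #include <stdio.h>
            #include <GL/glut.h>
            #include <GL/glext.h>

            static const char *frag_src =
                "void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }\n";

            int main(int argc, char **argv)
            {
                glutInit(&argc, argv);
                glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
                glutCreateWindow("glsl touch");   /* GL context is now current */

                GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
                glShaderSource(fs, 1, &frag_src, NULL);
                glCompileShader(fs);              /* this is what the GLSL compiler sees */

                GLint ok = GL_FALSE;
                glGetShaderiv(fs, GL_COMPILE_STATUS, &ok);
                printf("fragment shader compile: %s\n", ok ? "ok" : "FAILED");
                return 0;
            }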

            Comment


            • #7
              I can't seem to find any GLSL usage in Tremulous either?

              Comment


              • #8
                This bug will break shaders in gallium - probably a lot of them.

                https://bugs.freedesktop.org/show_bug.cgi?id=29490

                While the article is interesting, the code tested is from 10 days ago and might already be out of date. Intel has been doing tons of work fixing bugs and optimizing GLSL2 in order to get it ready for merging this Friday.

                Comment


                • #9
                  low cpu usage

                  Why wouldn't you want the CPU to be pegged at 100%? Something's wrong if it isn't, especially since rendering is supposed to be easily parallelizable. Every game I've ever played uses as much CPU as it can. In this case of software rendering, it seems like something is going wrong - maybe data isn't moving around as fast as it needs to be and is clogging the pipes.

                  Comment


                  • #10
                    I'm pessimistic about a GLSL compiler from Intel, because it's from Intel, who for years couldn't even make their Linux drivers perform decently compared to their window$ counterparts. I wish the Nvidia devs were doing this work instead; no flames intended, I really think so.

                    Comment


                    • #11
                      Originally posted by garytr24 View Post
                      Why wouldn't you want the cpu to be pegged at 100%? Something's wrong if it isn't, especially since rendering is supposed to be easily parallelizable. Every game I've ever played uses as much cpu as it can. In this case of software rendering, it seems like something's going wrong, and maybe data isn't moving around as fast as it needs to be and clogging the pipes.
                      Because these games are built on low-end, ancient technology. If they are pegging out 100% of a modern CPU, the game is horrendously poorly written, and/or you have vsync turned off and you're just burning power to render frames you will never see.

                      People also forget about little things like laptops, where extraneous power usage is more than just a slightly higher electric bill and a bit of environmental hostility. It actually decreases how long you can play the game on a plane or train or automobile. (Or boat, zeppelin, gondola, etc.)

                      You can easily see the difference between an experienced, intelligent game developer and one who thinks like you when you play games on an iPhone or the like. I have played gorgeous 3D games that let the battery last a good 6 hours, and I have also played silly little 2D games with no crazy effects at all that drain the battery in 2.5 hours.

                      A game's simulation has a maximum rate at which it needs to run. Twitch-heavy shooters and even most other action games usually want to peg out at your monitor's refresh rate, but many other games have no need to run any faster than 30 FPS, and then only for the sake of maintaining some UI/character animations while waiting for player input.

                      Even for twitch-heavy games like the Quake-based titles, there's no need to run any faster than the monitor's refresh rate. I keep hearing "pro gamers" (read: losers whose knowledge of game internals and hardware comes from gamer forums filled with other clueless losers, all parroting each others' old wives' tales about performance and latency) talk about how the latency of vsync hurts their game, but that's just a load of crap. The maximum 1/60th of a second of latency you might get is dwarfed by the 1/20th of a second (or more, in many cases) of latency that exists between user input and visible output on even the best gaming PC. Usually, the people complaining that vsync causes latency are people who don't always win, refuse to accept that they're not the God of Gaming, and scapegoat everything they can: they'll end up turning off every feature and spending thousands of dollars on high-end computer equipment, and then just continue to yell and scream every time they get fragged that their Internet connection lag-spiked, or they got distracted, or the other guy is hax0ring. They're pretty much only happy when they're fighting other gamers who are really good but not quite as good as they are, making the game appear challenging even though there's only a small chance of losing. You see similar people in non-computer games as well, like amateur sports teams or the majority of US martial arts dojos. Not people you should listen to. Ever.

                      Comment


                      • #12
                        I don't consider myself a pro gamer or anything, and the vsync in the games I have played was probably a sucky implementation, but I seriously experience vsync lag in the same way I notice wireless Logitech G7 mouse lag after playing with a mouse-o-phile Razer mouse for a long time. Just like with rockets in Quake 3, the mind adapts in a few minutes, but still.

                        Call me whatever you want, but before I even knew what vsync was I immediately turned it back off due to some feeling of lag =x

                        Comment


                        • #13
                          Originally posted by elanthis View Post
                          Because these games are low-end ancient technology bases. If they are pegging out 100% of a modern CPU, the game is horrendously poorly written, and/or you have vsync turned off and you're just burning power to render frames you will never see.
                          That's only true if you have hardware acceleration. When everything is being software rendered on the CPU and the frame rates aren't that high (did you even read the article?), you really do want the CPU to be maxed out to try to get more frame rate.

                          The compiler bits that Intel is writing here should try to stay as light on the CPU as possible, of course, but the renderer running the code it generates doesn't seem able to take full advantage of all the resources available to it.

                          Comment


                          • #14
                            Well, you need to rethink that. The CPU usage going down would have been great if the FPS had remained the same or gone up... in this case both went down in the games most likely to be hitting the new code, indicating that something is simply being done wrong from a performance standpoint. But that is to be expected; it's a brand-new addition, right, one which will receive maintenance from all the driver developers if merged?

                            Comparing FPS/CPU% against FPS/CPU%, along with the raw FPS numbers, would be far more interesting than comparing just CPU percentages, which tell you nothing about the amount of work actually done... sure, your battery lasted 10 minutes longer, but you were rendering at 13 FPS instead of 20 FPS or some such.
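
                            Something like the sketch below, in C, is what that kind of efficiency comparison would look like; all numbers are made up for illustration and are not the article's results:

                            /* fps_per_cpu.c -- sketch of the FPS-per-CPU%% comparison suggested above.
                             * Figures are hypothetical placeholders.
                             * Build:  gcc fps_per_cpu.c -o fps_per_cpu
                             */
                            #include <stdio.h>

                            struct run {
                                const char *label;
                                double fps;       /* average frames per second */
                                double cpu_pct;   /* average CPU utilisation, 0-100 */
                            };

                            int main(void)
                            {
                                struct run runs[] = {
                                    { "master",       20.0, 95.0 },   /* hypothetical */
                                    { "glsl2 branch", 13.0, 80.0 },   /* hypothetical */
                                };

                                for (int i = 0; i < 2; i++) {
                                    /* efficiency = frames rendered per percent of CPU consumed */
                                    double eff = runs[i].fps / runs[i].cpu_pct;
                                    printf("%-13s %5.1f FPS  %5.1f%% CPU  -> %.3f FPS per CPU%%\n",
                                           runs[i].label, runs[i].fps, runs[i].cpu_pct, eff);
                                }
                                return 0;
                            }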

                            Comment


                            • #15
                              TFP support for llvmpipe, softpipe, swrastg:
                              http://lists.freedesktop.org/archive...st/001949.html

                              Comment
