LLVMpipe With Intel's GLSL2 Compiler

  • #11
    Originally posted by garytr24 View Post
    Why wouldn't you want the cpu to be pegged at 100%? Something's wrong if it isn't, especially since rendering is supposed to be easily parallelizable. Every game I've ever played uses as much cpu as it can. In this case of software rendering, it seems like something's going wrong, and maybe data isn't moving around as fast as it needs to be and clogging the pipes.
    Because these games are low-end ancient technology bases. If they are pegging out 100% of a modern CPU, the game is horrendously poorly written, and/or you have vsync turned off and you're just burning power to render frames you will never see.

    People also forget about little things like laptops and such, where extraneous power usage means more than just a slightly higher electric bill and some environmental hostility. It actually shortens how long you can play the game on a plane or train or automobile. (Or boat, zeppelin, gondola, etc.)

    You can easily see the difference between experienced, intelligent game developers and ones who think like you when you play games on an iPhone or the like. I have played gorgeous 3D games that let the battery last a good 6 hours, and I have also played silly little 2D games with no crazy effects or anything that drain the battery in 2.5 hours.

    A game's simulation has a maximum rate at which it needs to run. Twitch-heavy shooters and even most other action games usually want to peg out at your monitor's refresh rate, but many other games have no need to run any faster than 30 FPS, and then only for the sake of maintaining some UI/character animations while waiting for player input.
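    To make the frame-cap point concrete, here is a minimal C sketch of a render loop throttled to 30 FPS. It assumes POSIX clock_gettime()/nanosleep(), and poll_input()/render_frame() are hypothetical placeholder names, not any real engine's API:

    /* Minimal frame-cap sketch: never run the loop faster than TARGET_FPS.
       poll_input() and render_frame() are placeholders, not a real API. */
    #include <time.h>

    #define TARGET_FPS 30
    #define FRAME_NSEC (1000000000L / TARGET_FPS)

    void poll_input(void);    /* hypothetical: read player input         */
    void render_frame(void);  /* hypothetical: simulate and draw a frame */

    void game_loop(void)
    {
        struct timespec start, end;

        for (;;) {
            clock_gettime(CLOCK_MONOTONIC, &start);

            poll_input();
            render_frame();

            clock_gettime(CLOCK_MONOTONIC, &end);
            long spent = (end.tv_sec - start.tv_sec) * 1000000000L
                       + (end.tv_nsec - start.tv_nsec);
            if (spent < FRAME_NSEC) {
                /* Sleep off the remainder instead of burning CPU
                   rendering frames the display will never show. */
                struct timespec rest = { 0, FRAME_NSEC - spent };
                nanosleep(&rest, NULL);
            }
        }
    }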

    Even for twitch-heavy games like the Quake-based games, there's no need to run any faster than the monitor's refresh rate. I keep hearing "pro gamers" (read: losers whose knowledge of game internals and hardware comes from gamer forums filled with other clueless losers all parroting each others' old wives' tales about performance and latency) talk about how the latency of vsync hurts their games, but that's just a load of crap. The maximum 1/60th of a second (about 17 ms) of latency you might get is dwarfed by the 1/20th of a second (50 ms, or higher in many cases) of latency between user input and visible output that even the best gaming PC still has. Usually, people complaining that vsync causes latency are people who don't always win, refuse to accept that they're not the God of Gaming, and scapegoat everything they can: they'll end up turning off every feature and spending $1,000's on high-end computer equipment, and then just continue to yell and scream every time they get fragged that their Internet connection lag-spiked or they got distracted or the other guy is hax0ring. They're pretty much only happy when they're fighting other gamers who are really good but not quite as good as they are, thus making the game appear challenging even though there's little chance of losing. You see similar people in non-computer games as well, like amateur sports teams or the majority of US martial arts dojos. Not people you should listen to. Ever.



  • #12
    I don't consider myself a pro gamer or anything, and the vsync implementations in the games I have played were probably sucky, but I seriously experience vsync lag in the same way I notice wireless Logitech G7 mouse lag after playing with a mouse-o-phile Razer mouse for a long time. Just like with rockets in Quake 3, the mind adapts in a few minutes, but still.

    Call me whatever you want, but before I even knew what vsync was I immediately turned it back off due to some feeling of lag =x



  • #13
    Originally posted by elanthis View Post
    Because these games are low-end ancient technology bases. If they are pegging out 100% of a modern CPU, the game is horrendously poorly written, and/or you have vsync turned off and you're just burning power to render frames you will never see.
    That's only true if you have hardware acceleration. When everything is being software rendered on the CPU, and the framerates aren't that high (did you even read the article?), you really do want the CPU to be maxed out to try to get a higher frame rate.

    The compiler bits that Intel is writing here should try to remain as low-CPU as possible, of course, but the renderer running the code it generates doesn't seem to be able to take full advantage of all the resources available to it.
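    On the "taking advantage of resources" point: llvmpipe's rasterizer is threaded, and as far as I know the LP_NUM_THREADS environment variable controls how many rasterizer threads it spawns. A rough C sketch of pinning it from inside a program, assuming the variable is read when the software driver first initializes:

    /* Rough sketch: ask llvmpipe for 4 rasterizer threads. Assumes
       LP_NUM_THREADS is read when the driver initializes, so it must
       be set before any GL context is created. The choice of 4 is
       arbitrary here. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        setenv("LP_NUM_THREADS", "4", 1);
        printf("LP_NUM_THREADS=%s\n", getenv("LP_NUM_THREADS"));

        /* ... create the GLX/EGL context and run the app as usual ... */
        return 0;
    }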



  • #14
    Well, you need to rethink that. The CPU usage going down would have been great if the FPS had remained the same or gone up... in this case both went down in the games that were more likely to be hitting the new code, indicating that something is being done just wrong from a performance standpoint. But that is to be expected; it's a brand new addition, right, which will receive maintenance from all driver developers if merged?

    Comparing FPS/CPU% vs FPS/CPU% and raw FPS numbers would be far more interesting than comparing just CPU percentages, which tell you nothing about the amount of work actually done... sure, your battery lasted 10 minutes longer, but you were rendering at 13 FPS instead of 20 FPS or some such.
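    A quick way to put a number on that is frames per CPU-second rather than raw CPU%; here is a small C sketch of the calculation, with made-up before/after figures (not numbers from the article's benchmarks):

    /* Sketch: frames rendered per second of CPU time actually spent.
       The sample figures below are invented for illustration only. */
    #include <stdio.h>

    /* fps: average frames per second; cpu_pct: average CPU usage, 0-100. */
    static double frames_per_cpu_second(double fps, double cpu_pct)
    {
        return fps / (cpu_pct / 100.0);
    }

    int main(void)
    {
        double old_eff = frames_per_cpu_second(20.0, 100.0); /* hypothetical */
        double new_eff = frames_per_cpu_second(13.0, 70.0);  /* hypothetical */

        printf("old: %.1f frames per CPU-second\n", old_eff); /* about 20.0 */
        printf("new: %.1f frames per CPU-second\n", new_eff); /* about 18.6 */
        return 0;
    }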



  • #15
    TFP (texture from pixmap) support for llvmpipe, softpipe, swrastg:
