LLVMpipe rocks :)

  • #21
    Originally posted by whizse View Post
    We're talking about _llvmpipe_ - a software rasterizer. Think of it as an emulated graphics card; the GPU does not enter into it.
    Yes, and no you were not:
    Originally posted by whizse View Post
    Maybe you can, but don't you have more interesting uses for your CPU? Any application stressing the CPU would be instantly noticeable in the GUI.
    Please don't make me cry

    Originally posted by whizse View Post
    And I must say, being called a troll when all I'm doing is trying to figure out what your point was... isn't nice.
    My apologies. I was just.... feeling like this: http://www.userfriendly.org/cartoons...p/uf014320.gif



    • #22
      Using llvmpipe to run a composited window manager = slow GUI.

      Have you actually tried using llvmpipe? Any additional stress on the CPU hits performance hard, and it's easy to try out for yourself. Run your favourite game in llvmpipe, then launch a couple of CPU-intensive tasks - boom, performance drops through the floor. Do the same test on a normal setup and the performance impact is negligible. Now imagine the game is instead kwin or compiz....
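      The experiment above is easy to sketch, assuming Mesa (LIBGL_ALWAYS_SOFTWARE is Mesa's standard switch for forcing the software rasterizer; the game and the load generator are up to you):

      ```shell
      # Force Mesa's software rasterizer (llvmpipe, if Mesa was built with it)
      # for every GL program started from this shell. Mesa-specific env var.
      export LIBGL_ALWAYS_SOFTWARE=1

      # Sanity-check which renderer is active (glxinfo is in mesa-utils):
      #   glxinfo | grep "OpenGL renderer"
      # Start your game, then pin every core from another terminal:
      #   for i in $(seq "$(nproc)"); do yes > /dev/null & done
      # On llvmpipe the frame rate collapses; on a real GPU driver the extra
      # load is barely noticeable in the GUI.
      echo "LIBGL_ALWAYS_SOFTWARE=$LIBGL_ALWAYS_SOFTWARE"
      ```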


      Keep in mind that sarcasm, irony and snarky remarks don't translate well to text. There might be a language barrier here too; English isn't my first language. It would be a lot easier if you just said what you wanted to say instead of replying with comics.



      • #23
        I already said it

        OK, one more time:
        You said: if an application stresses the CPU (right? a random app demanding a lot of CPU time, OK?), then it will show in the GUI performance, right?

        OK, so you implied that was only the case with llvmpipe.

        Now my point: before offloading to the GPU (letting it do the heavy lifting), the CPU must pre-process/arrange the data to be sent to the GPU.

        Now what I was saying: even if you didn't use llvmpipe but were using a GPU driver, an app stressing the CPU would also instantly show as degraded GUI performance.

        Do you understand what I'm getting at?



        • #24
          Heh, at this point it becomes quite obvious that you haven't tried this out yourself.

          The performance penalty is several orders of magnitude greater when you're doing the rendering on the CPU rather than offloading it to the GPU.

          I'll say it again just to be sure, the time spent processing data (on the CPU) before it gets sent off to the GPU is negligible compared to the time spent rendering it on the CPU.

          Don't take my word for it; run a few tests yourself. I don't think kwin/compiz/g-s are usable with llvmpipe yet, but any game or demo should suffice to prove my point.

          I guess this was a misunderstanding of theory versus practice. Yes - in theory you get a slowdown in both cases - but there's a clear difference between a microsecond delay and "oh my god why is my mouse cursor taking three seconds to move from one side of the screen to the other".



          • #25
            By the way, it's quite easy to build and run llvmpipe, and you run no risk of screwing up your system; just follow the README in the llvmpipe directory.
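            For the record, a rough sketch of the out-of-tree build, from memory; the configure flags and paths vary between Mesa versions, so the llvmpipe README is the authoritative source:

            ```shell
            # Hypothetical build sketch -- check each flag against the README.
            #   git clone git://anongit.freedesktop.org/mesa/mesa && cd mesa
            #   ./configure --enable-gallium-llvm --with-gallium-drivers=swrast
            #   make -j4
            # Run a GL app against the freshly built libGL without installing
            # anything, so the system libraries stay untouched:
            #   LD_LIBRARY_PATH="$PWD/lib" glxgears
            flags="--enable-gallium-llvm --with-gallium-drivers=swrast"
            echo "$flags"
            ```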



            • #26
              Originally posted by kbios View Post
              I tested Extreme Tux Racer (same specs as above) and both were nearly playable at 10-15 fps depending on resolution. That's strange, but it seems llvmpipe's speed depends only on resolution, not on the number of objects/detail/filtering. Can anyone explain this?
              I get 13 fps on Extreme Tux Racer with highest settings at 800x600 and 7 fps at 1280x800.
              ## VGA ##
              AMD: X1950XTX, HD3870, HD5870
              Intel: GMA45, HD3000 (Core i5 2500K)



              • #27
                Originally posted by darkbasic View Post
                I get 13 fps on Extreme Tux Racer with highest settings at 800x600 and 7 fps at 1280x800.
                Precisely. But if you put everything at the lowest detail (except resolution), I think you'll get exactly the same fps. That's what I find strange.



                • #28
                  Originally posted by darkbasic View Post
                  I use KDE 4.5. Its compositor only works with proprietary drivers because it makes use of OpenGL 2 extensions not yet supported by the open drivers, so I simply don't use it at all, and I don't know if it works with LLVMpipe. Anyway, KWin 4.4 should run quite well even with llvmpipe.
                  KDE 4.5 works fine with r600c from Mesa. Including blur.

                  It did take some twiddling, though, and I had to remove some blacklist entries by hand. It was probably some effect I was using before that barfed up. Starting with a clean KWin setup (deleting kwinrc) and configuring from there fixed it.



                  • #29
                    Originally posted by pingufunkybeat View Post
                    KDE 4.5 works fine with r600c from Mesa. Including blur.
                    I have a working R600c setup too, but with Intel there is no chance, even after disabling blur and Lanczos -.-



                    • #30
                      Higher resolution means more pixels to process, which a GPU handles in parallel.

                      On a CPU that work is pretty much a serial bottleneck, which causes the FPS drop.
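                      The Extreme Tux Racer numbers from post #26 fit that picture quite well: if llvmpipe is fill-rate bound, fps times pixel count should stay roughly constant. A quick back-of-the-envelope check (the fps figures are from the thread; the fill-rate model is my assumption):

                      ```shell
                      # 13 fps at 800x600 implies roughly 13 * 800 * 600 = 6,240,000 pixels/s.
                      # If that fill rate is the bottleneck, 1280x800 (1,024,000 px/frame)
                      # should run at about 6 fps -- close to the observed 7 fps.
                      predicted_fps=$(( 13 * 800 * 600 / (1280 * 800) ))
                      echo "$predicted_fps"
                      ```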

