
LLVMpipe Still Is Slow At Running OpenGL On The CPU


  • #31
    Originally posted by monkeynut View Post
    I wonder if it is possible to combine this renderer with the ATi/Nouveau renderers in a sort of SLI setup for a performance boost?
    You mean use LLVM for the FLOSS drivers? Isn't this already done?

    Or do you mean using LLVMpipe instead of Mesa's softpipe to render unsupported features on older GPUs?

    Comment


    • #32
      I think monkeynut means dividing the rendering work between the CPU (with LLVMpipe) and the GPU (using the regular drivers), like CrossFire or SLI but with one real and one "fake" GPU.

      Quick answer is "yes, in principle", but because of the overhead of splitting the rendering work and recombining the results, it's usually only worth doing when the two renderers are fairly close in performance. In most cases the GPU would be so much faster than the CPU renderer that the cost of coordinating the two would match or outweigh the extra performance; the rough model below illustrates this.

      That doesn't make LLVMpipe any less cool, though.
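
      For illustration, here's a back-of-envelope model in C of an optimal static split between the two renderers. All the numbers (frame rates, merge cost) are invented for the sake of the example, not measurements:

      Code:
      #include <stdio.h>

      int main(void)
      {
          double gpu_fps  = 300.0; /* GPU alone, frames per second (invented) */
          double cpu_fps  = 15.0;  /* LLVMpipe alone, frames per second (invented) */
          double merge_ms = 2.0;   /* fixed cost of recombining the two halves */

          double t_gpu = 1000.0 / gpu_fps; /* ms per frame with the GPU alone */

          /* The best static split gives each renderer work proportional to
           * its speed, so both halves finish at the same moment. */
          double cpu_share = cpu_fps / (gpu_fps + cpu_fps);
          double t_split   = t_gpu * (1.0 - cpu_share) + merge_ms;

          printf("GPU alone: %.2f ms/frame\n", t_gpu);
          printf("CPU share: %.1f%% of the frame\n", 100.0 * cpu_share);
          printf("Split:     %.2f ms/frame\n", t_split);

          /* With these numbers the split is slower (5.17 vs 3.33 ms/frame):
           * the ~4.8% of the frame the CPU takes over saves ~0.16 ms, far
           * less than the 2 ms merge overhead. */
          return 0;
      }

      With a 20:1 speed difference the CPU can only ever shave a few percent off the frame time, so any fixed recombination cost eats the gain.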

      Comment


      • #33
        Originally posted by Qaridarium
        Only a 48-core Opteron 6000 system (~155 GB/s RAM bandwidth) can beat a GPU like the HD 5870 (160 GB/s).

        A normal PC only has 5-15 GB/s of memory bandwidth; next to the HD 5870's 160 GB/s that's very slow.

        This benchmark only shows us that divergence.
        That's an oversimplification. CPUs have larger caches than GPUs, so they won't typically need as much memory bandwidth. OTOH, GPUs do have more number crunching power...

        Personally, I would be fine with a software rasterizer if it could drive my normal desktop use. It should also be much easier to get the bugs out, since there isn't a multitude of incompatible hardware models to test against.

        Comment


        • #34
          Oh, and regarding dynamic load balancing... Couldn't that in principle be used to power down most of the GPU when not needed? Something like Optimus, but CPU+GPU instead of IGP+GPU.

          Comment


          • #35
            Originally posted by Otus View Post
            That's an oversimplification. CPUs have larger caches than GPUs, so they won't typically need as much memory bandwidth.
            But 3D rendering memory access is typically horribly non-localised, which is one reason why GPUs don't bother with large caches: adding more processing capacity benefits them more than adding megabytes of cache.

            Comment


            • #36
              But what if the CPU did just a fraction of the work in an 'SLI' configuration and synced afterwards? Each sync could compare the difference in time spent rendering and adjust the load dynamically, something like the sketch below.
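
              A minimal sketch of that feedback loop in C; everything here (the variable, the function, the 0.25 damping factor) is invented for illustration, not any real Mesa or driver API:

              Code:
              /* Fraction of the scanlines handed to the CPU rasterizer. */
              static double cpu_share = 0.05;

              /* Call once per frame with the measured time each slice took. */
              static void rebalance(double cpu_ms, double gpu_ms)
              {
                  /* Per-unit-of-work cost of each renderer, from last frame's split. */
                  double cpu_cost = cpu_ms / cpu_share;
                  double gpu_cost = gpu_ms / (1.0 - cpu_share);

                  /* The share at which both slices would finish at the same time. */
                  double target = gpu_cost / (cpu_cost + gpu_cost);

                  /* Move only part way toward it so noisy timings don't oscillate. */
                  cpu_share += 0.25 * (target - cpu_share);

                  /* Keep both renderers doing some work so their costs stay measurable. */
                  if (cpu_share < 0.01) cpu_share = 0.01;
                  if (cpu_share > 0.99) cpu_share = 0.99;
              }

              The catch is the sync itself: reading the CPU's scanlines back into the GPU framebuffer every frame costs bandwidth, which is exactly the overhead mentioned in #32.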

              Comment
