
LLVMpipe Still Is Slow At Running OpenGL On The CPU


  • V!NCENT
    replied
    But what if the CPU did just a fraction of the work in an 'SLI'-style configuration and synced afterwards? Each time, the sync function would compare the difference in time spent rendering and adjust the load dynamically?
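
    Something like this, maybe (a toy C sketch -- invented numbers and names, nothing that exists in Mesa -- just to show the feedback idea):

        /* Hypothetical per-frame load balancer for a CPU+GPU split:
         * the CPU renders the top cpu_ratio fraction of the frame, the
         * GPU the rest; after the sync, nudge the split toward equal
         * frame times. */
        #include <stdio.h>

        static double cpu_ratio = 0.10;  /* start the CPU at 10% of the frame */

        /* Called after each sync with the measured times in milliseconds. */
        static void rebalance(double cpu_ms, double gpu_ms)
        {
            /* Positive error: the GPU was slower, so shift work to the CPU.
             * The small gain (0.05) keeps the split from oscillating. */
            double error = (gpu_ms - cpu_ms) / (gpu_ms + cpu_ms);
            cpu_ratio += 0.05 * error;
            if (cpu_ratio < 0.0) cpu_ratio = 0.0;
            if (cpu_ratio > 1.0) cpu_ratio = 1.0;
        }

        int main(void)
        {
            /* Simulate a GPU ~8x faster per pixel than the CPU renderer. */
            for (int frame = 0; frame < 20; frame++) {
                double cpu_ms = cpu_ratio * 80.0;
                double gpu_ms = (1.0 - cpu_ratio) * 10.0;
                rebalance(cpu_ms, gpu_ms);
                printf("frame %2d: cpu_ratio = %.3f\n", frame, cpu_ratio);
            }
            return 0;
        }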



  • movieman
    replied
    Originally posted by Otus View Post
    That's an oversimplification. CPUs have larger caches than GPUs, so they won't typically need as much memory bandwidth.
    But 3D rendering memory access is typically horribly non-localised, which is one reason why GPUs don't bother with large caches: adding more processing capacity benefits them more than adding megabytes of cache.
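
    A contrived illustration (made-up sizes, not real driver code): fetching texels down one screen column of a quad rotated 45 degrees steps u and v together, so each fetch lands a full texture row away from the last, far past a 64-byte cache line:

        /* Each fetch below touches a different cache line; GPUs tile
         * textures into 2D blocks so that neighbours stay together,
         * rather than relying on a big linear cache. */
        #include <stdint.h>
        #include <stdio.h>

        #define TEX_W 1024

        int main(void)
        {
            const size_t line = 64;            /* typical cache line, bytes */
            for (int i = 0; i < 8; i++) {
                int u = i, v = i;              /* du/dy = dv/dy = 1 */
                size_t offset = ((size_t)v * TEX_W + u) * sizeof(uint32_t);
                printf("fetch %d: byte %7zu (cache line %zu)\n",
                       i, offset, offset / line);
            }
            return 0;
        }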



  • Otus
    replied
    Oh, and regarding dynamic load balancing... Couldn't that in principle be used to power down most of the GPU when not needed? Something like Optimus, but CPU+GPU instead of IGP+GPU.



  • Otus
    replied
    Originally posted by Qaridarium
    Only a 48-core Opteron 6000 (155 GB/s RAM speed) can beat a GPU (HD 5870: 160 GB/s).

    A normal PC has 5-15 GB/s; compared to an HD 5870's 160 GB/s, that's very slow.

    This benchmark only shows us that divergence.
    That's an oversimplification. CPUs have larger caches than GPUs, so they won't typically need as much memory bandwidth. OTOH, GPUs do have more number crunching power...

    Personally, I would be fine with a software rasterizer if it could drive my normal desktop use. It should also be much easier to get the bugs out of it, since there isn't a multitude of incompatible hardware models to test it against.
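
    For a sense of scale, some back-of-the-envelope arithmetic (assuming a plain 1920x1080, 32-bit desktop at 60 Hz with no overdraw or texturing -- a deliberately optimistic floor):

        /* Minimum framebuffer traffic for a software rasterizer:
         * one read plus one write per pixel per frame. */
        #include <stdio.h>

        int main(void)
        {
            double pixels = 1920.0 * 1080.0;
            double bytes  = pixels * 4 /* 32bpp */ * 60 /* Hz */ * 2 /* r+w */;
            printf("%.2f GB/s\n", bytes / 1e9);   /* ~1 GB/s */
            /* Tiny next to 160 GB/s -- so for bare desktop use the limit
             * is shading and texturing cost, not raw memory bandwidth. */
            return 0;
        }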



  • bridgman
    replied
    I think Monkeynut means dividing the rendering work between CPU (with LLVMPipe) and GPU (using regular drivers), like Crossfire or SLI but with one real and one "fake" GPU.

    Quick answer is "yes in principle" but because of the overhead associated with splitting and recombining the rendering work it's usually only worth doing if the two renderers are fairly close in performance. In most cases the GPU would be a lot faster than the CPU renderer so the overhead of supporting multiple GPUs would probably match or outweigh the benefit from the additional performance.
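
    A toy cost model shows why (illustrative numbers only, nothing measured):

        /* Split-frame rendering: give the slow renderer the fraction `a`
         * of the frame that makes both finish together, then pay a fixed
         * split/recombine overhead. */
        #include <stdio.h>

        int main(void)
        {
            double t_gpu = 5.0;      /* ms, GPU drawing the whole frame      */
            double t_cpu = 50.0;     /* ms, LLVMpipe drawing the whole frame */
            double overhead = 2.0;   /* ms, splitting + recombining          */

            double a = t_gpu / (t_gpu + t_cpu);   /* a*t_cpu == (1-a)*t_gpu */
            double t_split = a * t_cpu + overhead;

            printf("CPU share %.1f%%: GPU alone %.1f ms, split %.1f ms\n",
                   100.0 * a, t_gpu, t_split);
            /* With a 10x gap the CPU saves ~0.45 ms but the overhead costs
             * 2 ms -- splitting only pays when the renderers are close. */
            return 0;
        }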

    That doesn't make LLVMpipe any less cool, though.



  • V!NCENT
    replied
    Originally posted by monkeynut View Post
    I wonder if it is possible to combine this renderer with the ATi/Nouveau renderers in a sort of SLI setup for a performance boost?
    You mean using LLVM in the FLOSS drivers? Isn't that already done?

    Or do you mean using LLVMpipe instead of Mesa's softpipe to draw unsupported functions on older GPUs?



  • monkeynut
    replied
    I wonder if it is possible to combine this renderer with the ATi/Nouveau renderers in a sort of SLI setup for a performance boost?



  • Michael
    replied
    Yes, the Intel IGP is already done as part of the next comparison. The Mesa software rasterizer was thrown out, though, since it can't break 1 FPS.



  • eescar
    replied
    What about including an Intel IGP?

    Originally posted by jrch2k8 View Post
    ... in any case, compare it against a crappy Intel IGP.

    BTW, pushing 28 fps CPU-only is quite an achievement.
    I think it could be interesting to compare this software driver with an Intel IGP, as that may be the #1 client for this kind of driver.
    I personally own a 9800 GT and would never think about using this software driver (the same goes for any Nvidia/ATI solution); but on motherboards with Intel integrated video, it makes sense to compare it against the Intel drivers (open or closed source) or other software solutions like Mesa.

    Hope it will come with the next Mesa software rasterizer vs. LLVMpipe benchmark.



  • Svartalf
    replied
    Originally posted by FunkyRider View Post
    Dynamic load balancing, we already have that, it's called SLI
    No, this is a smidge different and hasn't even really been done yet. Larrabee MIGHT have brought the start of that sort of thing to the picture, but sadly it's not going to be hitting the market as a discrete GPU, now is it?

