LLVMpipe's Geometry Processing Pipeline Kicks

  • garytr24
    replied
    Yes, LLVM was used this way by Apple in their OpenGL stack.



  • not.sure
    replied
    Wasn't LLVM also planned to be used to compile/optimize/generate shader code for specific GPUs? Or is that something entirely different?
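
    That is essentially what llvmpipe itself does: shader programs are translated into LLVM IR at run time and JIT-compiled for the host CPU, while using LLVM as a backend for GPU-specific shader compilers is a related but separate idea. Below is a purely illustrative sketch of building such IR with the LLVM-C API; it is not taken from llvmpipe, and the trivial "scale a colour by 0.5" function is invented for the example.

    Code:
    /* Purely illustrative: building a trivial "shader-like" function as
     * LLVM IR at run time with the LLVM-C API -- the same general idea
     * llvmpipe uses, though the real code is far more involved.
     * Link against LLVM's core library (e.g. via llvm-config). */
    #include <llvm-c/Core.h>

    int main(void)
    {
        LLVMModuleRef mod = LLVMModuleCreateWithName("shader_demo");

        /* declare: <4 x float> @shade(<4 x float> %color) */
        LLVMTypeRef f32  = LLVMFloatType();
        LLVMTypeRef vec4 = LLVMVectorType(f32, 4);
        LLVMTypeRef fnty = LLVMFunctionType(vec4, &vec4, 1, 0);
        LLVMValueRef fn  = LLVMAddFunction(mod, "shade", fnty);

        LLVMBasicBlockRef entry = LLVMAppendBasicBlock(fn, "entry");
        LLVMBuilderRef bld      = LLVMCreateBuilder();
        LLVMPositionBuilderAtEnd(bld, entry);

        /* body: return color * <0.5, 0.5, 0.5, 0.5> */
        LLVMValueRef half = LLVMConstReal(f32, 0.5);
        LLVMValueRef halves[4] = { half, half, half, half };
        LLVMValueRef scaled = LLVMBuildFMul(bld, LLVMGetParam(fn, 0),
                                            LLVMConstVector(halves, 4),
                                            "scaled");
        LLVMBuildRet(bld, scaled);

        /* Dump the generated IR; a real pipeline would hand the module
         * to an execution engine and call the JIT-compiled function. */
        LLVMDumpModule(mod);

        LLVMDisposeBuilder(bld);
        LLVMDisposeModule(mod);
        return 0;
    }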



  • rohcQaH
    replied
    Bandwidth isn't the problem; latency and synchronisation are. While the buffer is being copied around, neither the GPU nor the CPU can work on it, and everything stalls.

    That's no problem when it's done a few times per frame, but it can get really bothersome if you do it in the later stages of the pipeline, where you may need a buffer copy for every object or even every triangle you're drawing.


    edit: oh, I shouldn't go afk for an hour between writing and submitting a post
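
    To make the stall concrete, here is a purely illustrative OpenGL fragment -- not from any driver under discussion. It assumes a current GL context, an initialised loader such as GLEW, and an existing buffer the GPU may still be reading; mapping that buffer forces the driver to synchronise (or shuffle a copy around) before the CPU gets a pointer.

    Code:
    /* Illustrative only: a pattern that forces CPU/GPU synchronisation.
     * Assumes a current OpenGL context, an initialised loader (e.g. GLEW)
     * and a buffer object `vbo` that an earlier draw call may still be
     * sourcing vertices from.  Error handling omitted. */
    #include <GL/glew.h>

    void nudge_vertices_on_cpu(GLuint vbo, GLsizei vertex_count)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);

        /* The GPU may still be reading this buffer, so the driver has to
         * wait (or make a hidden copy) before it can safely hand the CPU
         * a pointer -- this is where the stall happens. */
        float *verts = (float *) glMapBuffer(GL_ARRAY_BUFFER, GL_READ_WRITE);

        for (GLsizei i = 0; i < vertex_count; i++)
            verts[i * 3 + 1] += 0.1f;        /* tweak each Y on the CPU */

        glUnmapBuffer(GL_ARRAY_BUFFER);

        /* The next draw needs the data back on the GPU side; doing this
         * per object or per triangle multiplies the round trips. */
        glDrawArrays(GL_TRIANGLES, 0, vertex_count);
    }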



  • smitty3268
    replied
    I'm pretty sure the CPU has to copy everything into main memory before it can operate on it. And the problem isn't bandwidth, which there is plenty of, but latency. Just like seek times kill performance on HDDs even when you're getting small files that don't saturate the bandwidth.



  • cb88
    replied
    Originally posted by rohcQaH View Post
    Yes, but you have to be very careful to avoid buffer ping-pong. When the CPU draws something, the data has to be moved to main memory; when the GPU draws something, it has to be moved back to video memory. Those moves are slooooow.

    The rendering pipeline looks something like this (oversimplified):

    openGL-call -> geometry shaders -> vertex shaders -> pixel shaders -> final image

    On a modern GPU, the first stage (the OpenGL call) is done on the CPU and everything after that on the GPU. You can shift the early phases to the CPU, but alternating between CPU and GPU calculations kills performance - it may end up slower than doing full software rendering.
    That would be very true for AGP; I don't know how bad it would be on PCI-E, though... there's a lot more bandwidth to work with. Also, can the CPU directly map PCI-E memory, or does it have to copy things from buffers as you say?



  • rohcQaH
    replied
    Originally posted by wswartzendruber View Post
    Can a state tracker use both hardware and softpipe?
    Yes, but you have to be very careful to avoid buffer ping-pong. When the CPU draws something, the data has to be moved to main memory; when the GPU draws something, it has to be moved back to video memory. Those moves are slooooow.

    The rendering pipeline looks something like this (oversimplified):

    openGL-call -> geometry shaders -> vertex shaders -> pixel shaders -> final image

    On a modern GPU, the first stage (the OpenGL call) is done on the CPU and everything after that on the GPU. You can shift the early phases to the CPU, but alternating between CPU and GPU calculations kills performance - it may end up slower than doing full software rendering.
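
    A caricature of that ping-pong pattern, with invented helper and structure names (transform_on_cpu(), struct object) and the usual GL context/loader assumed: each object alternates a CPU stage, a bus transfer and a GPU stage, so the latency of every hand-off is paid once per object instead of being hidden.

    Code:
    /* Caricature of CPU/GPU ping-pong -- structure and helper names are
     * invented for illustration; assumes a current GL context and an
     * initialised loader (e.g. GLEW). */
    #include <GL/glew.h>

    struct object {
        const float *vertices;      /* source data in system RAM       */
        float       *transformed;   /* CPU-side scratch buffer         */
        GLsizei      vertex_count;
        GLuint       vbo;           /* GPU-side buffer for the results */
    };

    /* Stand-in for a CPU fallback stage such as software TCL;
     * here it just copies the data unchanged. */
    static void transform_on_cpu(const float *in, float *out, GLsizei count)
    {
        for (GLsizei i = 0; i < count * 3; i++)
            out[i] = in[i];
    }

    void draw_scene(struct object *objects, int n)
    {
        for (int i = 0; i < n; i++) {
            struct object *o = &objects[i];

            /* 1. CPU stage: system RAM in, system RAM out. */
            transform_on_cpu(o->vertices, o->transformed, o->vertex_count);

            /* 2. Push the results across the bus to the GPU's buffer. */
            glBindBuffer(GL_ARRAY_BUFFER, o->vbo);
            glBufferSubData(GL_ARRAY_BUFFER, 0,
                            o->vertex_count * 3 * sizeof(float),
                            o->transformed);

            /* 3. GPU stage.  Alternating like this per object means the
             * cost of every CPU/GPU hand-off is paid again and again
             * instead of being hidden by keeping both sides busy. */
            glDrawArrays(GL_TRIANGLES, 0, o->vertex_count);
        }
    }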



  • smitty3268
    replied
    I believe the core functions of this code were added to a shared module that both the softpipe and hardware drivers can access. So it's not so much that the hardware driver and softpipe are being used at the same time, but rather just a shared piece of code that any driver, including softpipe, can access.
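
    The "shared module" arrangement can be pictured roughly like this; every name below is invented for illustration and has nothing to do with Mesa's actual source layout. One geometry-processing helper is built once, and both the software rasteriser and a hardware driver missing vertex hardware call into it.

    Code:
    /* Hypothetical illustration only -- names invented, not Mesa's API.
     * One shared geometry-processing helper, used both by a software
     * rasteriser and by a hardware driver that lacks TCL hardware. */
    #include <stdio.h>

    /* --- the shared geometry module -------------------------------- */
    static void shared_process_vertices(const float *in, float *out, int n)
    {
        for (int i = 0; i < n; i++)
            out[i] = in[i] * 2.0f;        /* stand-in for real TCL work */
    }

    /* --- two different "drivers" calling into it -------------------- */
    static void softpipe_like_draw(const float *verts, int n)
    {
        float tmp[16];
        shared_process_vertices(verts, tmp, n);
        printf("software driver: rasterising %d values on the CPU\n", n);
    }

    static void hw_driver_draw_without_tcl(const float *verts, int n)
    {
        float tmp[16];
        shared_process_vertices(verts, tmp, n);   /* CPU fallback stage */
        printf("hardware driver: submitting %d pre-transformed values\n", n);
    }

    int main(void)
    {
        const float verts[4] = { 1, 2, 3, 4 };
        softpipe_like_draw(verts, 4);
        hw_driver_draw_without_tcl(verts, 4);
        return 0;
    }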



  • wswartzendruber
    replied
    Can a state tracker use both hardware and softpipe?



  • smitty3268
    replied
    Originally posted by bridgman View Post
    AFAIK the main place this can affect hardware drivers is when running on integrated GPUs which don't have vertex processing (TCL) hardware, e.g. rs4xx and rs6xx.
    And Intel chips, if they ever get Gallium drivers running.



  • bridgman
    replied
    AFAIK the main place this can affect hardware drivers is when running on integrated GPUs which don't have vertex processing (TCL) hardware, e.g. rs4xx and rs6xx. The previous software TCL code was *much* slower than the corresponding code in fglrx; this new code should bring that aspect of open-source driver performance up to roughly fglrx levels.

    There were some problems running 300g on rs4xx/rs6xx parts in the past (i.e. it didn't work); I'm not sure of the current status, so some additional work may be needed before this becomes usable on those IGP parts.

    Re: OpenGL 3 and higher on 3xx-5xx parts, I'm not sure what the current thinking is. My expectation had been that only GL 2.x would be exposed (so the app would use code paths which were fully hardware implemented), but I don't know if anyone has looked hard at the implications of trying to run higher levels of GL on those parts.

