The Pipe-Dream Persists About Pairing LLVMpipe With GPU Hardware/Drivers

  • The Pipe-Dream Persists About Pairing LLVMpipe With GPU Hardware/Drivers

    Phoronix: The Pipe-Dream Persists About Pairing LLVMpipe With GPU Hardware/Drivers

    More than a few times over the years various (generally new) Linux users have come forward to profess their "new" idea for improving open-source Linux GPU driver performance: the CPU-based LLVMpipe should work in tandem with a graphics card's hardware driver to deliver better performance...

  • #2
    I won't call it a pipe dream, but it is certainly challenging. The implementation is actually the easy part compared to the real problem: avoiding a circular load. Any 3D work you hand to the CPU will crush it, performance-wise, which in turn stalls command submission to the GPU and actually makes things slower.

    My advice, if someone has the free time for a cool project, is not to try to balance render operations between GPU and CPU, but to offload the highly serial work that really is faster on the CPU (especially in HSA setups, since the buffers are accessible by both): texture conversion/compression, preprocessing, or postprocessing cheap enough to be worth it, such as a fast edge-antialiasing pass (a bit like the ENB injector does for DX9 games, yes, Skyrim). In short, hand LLVMpipe only cheap operations that can improve quality or speed, and never thrash the CPU enough to interfere with command submission and hurt the GPU.

    As a side note, Linux has a syscall to pin threads to specific cores (in case you didn't know), which could help avoid trashing core 0 and messing with the GPU; a minimal sketch follows below.
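
    A minimal sketch of that side note, assuming pthread-based worker threads: pthread_setaffinity_np() is the glibc wrapper around the sched_setaffinity syscall; the worker function and the core index are illustrative only.

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical cheap CPU-side helper, e.g. texture conversion. */
    static void *worker(void *arg)
    {
        (void)arg;
        /* ... do the CPU-side work here ... */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        cpu_set_t set;
        int err;

        pthread_create(&t, NULL, worker, NULL);

        /* Keep the helper away from core 0, where the thread doing GPU
         * command submission often runs; core 2 is just an example. */
        CPU_ZERO(&set);
        CPU_SET(2, &set);
        err = pthread_setaffinity_np(t, sizeof(set), &set);
        if (err != 0)
            fprintf(stderr, "pthread_setaffinity_np: %s\n", strerror(err));

        pthread_join(t, NULL);
        return 0;
    }

    (Build with gcc -pthread; plain sched_setaffinity(2) works too if you are not using pthreads.)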

    Comment


    • #3
      citing his glxgears results
      LMAO. That should have automatically closed the report/request.

      Comment


      • #4
        We need the ability for Mesa to share work between GPUs in the system before you start trying to offload to the CPU.

        Hopefully that will be easier with Vulkan, since it doesn't have global state like OpenGL.
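
        For what it's worth, Vulkan already makes the "multiple devices" part explicit: there is no global context, and every physical device, including a CPU rasterizer (which reports VK_PHYSICAL_DEVICE_TYPE_CPU), shows up in the same enumeration. A minimal sketch, assuming the Vulkan loader and headers are installed:

        #include <stdio.h>
        #include <vulkan/vulkan.h>

        int main(void)
        {
            /* Everything hangs off an explicitly created instance,
             * not an implicit global context as in OpenGL. */
            VkInstanceCreateInfo ici = {
                .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
            };
            VkInstance instance;
            if (vkCreateInstance(&ici, NULL, &instance) != VK_SUCCESS)
                return 1;

            uint32_t count = 0;
            vkEnumeratePhysicalDevices(instance, &count, NULL);

            VkPhysicalDevice devices[8];
            if (count > 8)
                count = 8;
            vkEnumeratePhysicalDevices(instance, &count, devices);

            for (uint32_t i = 0; i < count; i++) {
                VkPhysicalDeviceProperties props;
                vkGetPhysicalDeviceProperties(devices[i], &props);
                printf("device %u: %s (type %d)\n",
                       i, props.deviceName, props.deviceType);
            }

            vkDestroyInstance(instance, NULL);
            return 0;
        }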

        Comment


        • #5
          What could somewhat work, I think, is processing different shader stages on different devices: running vertex shaders on llvmpipe and fragment shaders on the GPU. Vertex shaders are usually not very demanding, though, so the speedup would probably be marginal.

          Comment


          • #6
            CPUs are good at serial, non-vector based workloads.
            GPUs are good at parallel, vector based workloads.

            Knowing this, why would you want to offload parallel, vector based workloads from the GPU to the CPU?

            Comment


            • #7
              Originally posted by gamerk2 View Post
              CPUs are good at serial, non-vector based workloads.
              GPUs are good at parallel, vector based workloads.

              Knowing this, why would you want to offload parallel, vector based workloads from the GPU to the CPU?
              Just because CPUs are good at serial workloads doesn't mean they are bad at parallel workloads.

              This is more about utilizing the available processing power, like having a crappy GPU combined with a powerful CPU for whatever reason. I know, a very unlikely scenario nowadays.

              Comment


              • #8
                Could that enable OpenGL extensions on hardware that doesn't support them? If so, that could be really, really good, since only the unsupported operations would run on the CPU instead of the whole program.
                Last edited by gufide; 15 January 2016, 03:52 PM.
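
                For context, this is how an application checks today whether the driver advertises a given extension; support is advertised per context, all or nothing, which is why a driver-side software fallback for just the missing operations sounds attractive. A sketch assuming a GL 3.0+ context is current and Mesa's libGL exports the core entry points:

                #define GL_GLEXT_PROTOTYPES
                #include <stdbool.h>
                #include <string.h>
                #include <GL/gl.h>
                #include <GL/glext.h>

                /* Returns true if the current context advertises the named
                 * extension, e.g. "GL_ARB_tessellation_shader". */
                static bool has_extension(const char *name)
                {
                    GLint n = 0;

                    glGetIntegerv(GL_NUM_EXTENSIONS, &n);
                    for (GLint i = 0; i < n; i++) {
                        const char *ext =
                            (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
                        if (ext && strcmp(ext, name) == 0)
                            return true;
                    }
                    return false;
                }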

                Comment


                • #9
                  LOL at Marek's response:

                  Originally posted by marek
                  I'll just tell you my opinion.

                  This is a bad idea.

                  It can't work.

                  It will never work.

                  It's doomed to fail.

                  Don't waste your time.

                  You need to study how most games draw things. You'll see a lot of rendering to textures, not so much to the main framebuffer. A shader can read any texel from a texture that was rendered to. This means that any pixel from any render target must be available to every client that can read it as a texture. This is why split-screen rendering fails. Alternate frame rendering has the same issue if there are inter-frame dependencies, e.g. a render target is only updated every other frame (reflections/mirrors), or there is motion blur, or rain hitting the camera/screen, which is an incremental process, or any other incremental screen-space effect. This is why all hybrid solutions fall short of expectations.

                  Get over it. Move on.
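
                  To make the dependency Marek describes concrete, here is a minimal sketch of the render-to-texture pattern, assuming a current GL 3.x context (error handling and shaders omitted): pass 1 renders into a texture through an FBO, pass 2 samples it, and any fragment in pass 2 may fetch any texel, so the whole texture would have to be synchronized across devices before pass 2 could start.

                  #define GL_GLEXT_PROTOTYPES
                  #include <GL/gl.h>
                  #include <GL/glext.h>

                  void render_frame(void)
                  {
                      /* Pass 1: render (reflections, shadow maps, ...) into a texture. */
                      GLuint tex, fbo;
                      glGenTextures(1, &tex);
                      glBindTexture(GL_TEXTURE_2D, tex);
                      glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1024, 1024, 0,
                                   GL_RGBA, GL_UNSIGNED_BYTE, NULL);
                      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

                      glGenFramebuffers(1, &fbo);
                      glBindFramebuffer(GL_FRAMEBUFFER, fbo);
                      glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                             GL_TEXTURE_2D, tex, 0);
                      /* ... draw the intermediate pass into the texture ... */

                      /* Pass 2: render to the main framebuffer, sampling the texture.
                       * The texture must be complete and resident on the device
                       * running this pass before any fragment can read it. */
                      glBindFramebuffer(GL_FRAMEBUFFER, 0);
                      glBindTexture(GL_TEXTURE_2D, tex);
                      /* ... draw the final scene with a shader that samples tex ... */
                  }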

                  Comment


                  • #10
                    Originally posted by smitty3268 View Post
                    LOL at Marek's response:
                    LOL and this guy is still going at it with his comments.... Grab some popcorn and beer this weekend and read it.
                    Michael Larabel
                    https://www.michaellarabel.com/

                    Comment
