LLVMpipe Gains Support For On-Disk Shader Cache

  • LLVMpipe Gains Support For On-Disk Shader Cache

    Phoronix: LLVMpipe Gains Support For On-Disk Shader Cache

    The LLVMpipe software OpenGL implementation, which has recently seen work on MSAA, tessellation shader support, and other improvements, now has a working on-disk shader cache implementation...

    http://www.phoronix.com/scan.php?pag...k-Shader-Cache
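
    As a concrete illustration of what an on-disk shader cache changes for users, here is a sketch of forcing llvmpipe with a cache directory. LIBGL_ALWAYS_SOFTWARE is a documented Mesa environment variable; the cache-related variable names below (MESA_GLSL_CACHE_DIR, MESA_GLSL_CACHE_MAX_SIZE) are the Mesa-2020-era names and have changed across releases, so treat them as assumptions and check your Mesa version's documentation.

    ```shell
    # Select a software renderer (llvmpipe) instead of the hardware driver.
    export LIBGL_ALWAYS_SOFTWARE=1

    # Point Mesa's shader cache at a known directory and cap its size.
    # (Variable names assumed from Mesa ~2020; later releases renamed them.)
    export MESA_GLSL_CACHE_DIR="$HOME/.cache/mesa_shader_cache"
    export MESA_GLSL_CACHE_MAX_SIZE=1G

    # Run any GL application; on the second launch, compiled shaders
    # should be loaded from disk instead of being recompiled.
    glxgears
    ```

    The practical benefit is shorter startup and level-load times on repeat runs, since llvmpipe no longer has to re-run LLVM compilation for shaders it has already seen.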

  • #2
    Will there come a time when dedicated GPUs are phased out because CPUs will be powerful enough to render most OpenGL and Vulkan games at 4K resolution?

    For sure, it cannot happen with dual-channel DDR4/DDR5 memory, or alternatively, with less than 1 gigabyte of L3 CPU cache.



    • #3
      Originally posted by atomsymbol View Post
      Will there come a time when dedicated GPUs are phased out because CPUs will be powerful enough to render most OpenGL and Vulkan games at 4K resolution?

      For sure, it cannot happen with dual-channel DDR4/DDR5 memory, or alternatively, with less than 1 gigabyte of L3 CPU cache.
      It's funny to think about. Computers and GUIs started out without any real graphics acceleration; GPUs were an invention of necessity. But some day we might come full circle and go back to processing everything on a single processor.

      I don't predict this happening with x86, but RISC-V has the potential to have something like a 5000 core "all-in-one" CPU.



      • #4
        Originally posted by schmidtbag View Post
        I don't predict this happening with x86, but RISC-V has the potential to have something like a 5000 core "all-in-one" CPU.
        ... considering that almost all x86 CPUs, except maybe the ultra-low-power ones, are in the midst of transitioning from fetching x86-encoded instructions to fetching from a µop cache which has an internal instruction encoding scheme ...



        • #5
          Originally posted by atomsymbol View Post
          Will there come a time when dedicated GPUs are phased out because CPUs will be powerful enough to render most OpenGL and Vulkan games at 4K resolution?

          For sure, it cannot happen with dual-channel DDR4/DDR5 memory, or alternatively, with less than 1 gigabyte of L3 CPU cache.
          It's economics. Today's CPUs can surely match the dedicated GPU performance of n years ago, and even way back then you could have bought enough CPU power to do the graphics, but it would have been far too expensive compared to the price of the GPU solution. That's still true. We have GPUs because, for a very specific type of data processing, you get more bang for your buck with special-purpose hardware. Doing it on a general-purpose CPU wastes CPU and cache capability, and possibly energy too. To take another specialty processor as an example: hardware video decoding uses less power than having a general-purpose CPU do the same task.



          • #6
            Originally posted by timrichardson View Post
            It's economics. Today's CPUs can surely match the dedicated GPU performance of n years ago, and even way back then you could have bought enough CPU power to do the graphics, but it would have been far too expensive compared to the price of the GPU solution. That's still true. We have GPUs because, for a very specific type of data processing, you get more bang for your buck with special-purpose hardware. Doing it on a general-purpose CPU wastes CPU and cache capability, and possibly energy too. To take another specialty processor as an example: hardware video decoding uses less power than having a general-purpose CPU do the same task.
            From a technical viewpoint (that is: not taking economics into account for a moment), the main difference between GTX 560 (released in 2011) and Ryzen 3700X (released in 2019) is the memory bandwidth:

            GTX 560: …
            Ryzen 3700X: …

            The working set of a typical year-2011 OpenGL game is most likely much larger than 32 MB, so the size of the Ryzen's L3 cache is unimportant and the performance will be limited by memory bandwidth.

            Once llvmpipe achieves OpenGL 4.x support, if the measured IPC (instructions per clock) is about 0.1 we will know for sure that it is limited by memory bandwidth. On the other hand, if the measured IPC is about 1.0 and llvmpipe still isn't able to match the GTX 560 in, say, Tomb Raider (released in 2013), then llvmpipe is limited by something else, for example the absence of texture-filtering instructions on the CPU.
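
            The bandwidth-limit argument can be put into a back-of-envelope sketch. All numbers below (51.2 GB/s for dual-channel DDR4-3200, 4 bytes per pixel, 4x overdraw) are illustrative assumptions, not measurements:

            ```python
            # Back-of-envelope sketch of the memory-bandwidth ceiling on a
            # software rasterizer. Assumed, illustrative numbers throughout.

            def bandwidth_limited_fps(mem_bw_gbs: float, width: int, height: int,
                                      bytes_per_pixel: int = 4, overdraw: int = 4) -> float:
                """Upper bound on frames/second if every frame moves
                width * height * bytes_per_pixel * overdraw bytes through RAM."""
                bytes_per_frame = width * height * bytes_per_pixel * overdraw
                return mem_bw_gbs * 1e9 / bytes_per_frame

            # Assumed: ~51.2 GB/s for dual-channel DDR4-3200 (a Ryzen 3700X desktop).
            print(f"4K ceiling:    {bandwidth_limited_fps(51.2, 3840, 2160):.0f} FPS")
            print(f"1080p ceiling: {bandwidth_limited_fps(51.2, 1920, 1080):.0f} FPS")
            ```

            Real frames also stream textures, geometry, and intermediate buffers, so the true ceiling is far lower; the point is only that the bound scales with memory bandwidth rather than core count, which is why an IPC near 0.1 would indicate a bandwidth-bound workload.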
            Last edited by atomsymbol; 06-11-2020, 02:20 AM. Reason: Add Ryzen L3 cache bandwidth



            • #7
              Originally posted by atomsymbol View Post

              ... considering that almost all x86 CPUs, except maybe the ultra-low-power ones, are in the midst of transitioning from fetching x86-encoded instructions to fetching from a µop cache which has an internal instruction encoding scheme ...
              Not sure if this is corception or ISAception. xD



              • #8
                Originally posted by atomsymbol View Post
                GPUs will be phased out
                Will not happen. Specialization exists for a reason, and computational requirements are constantly growing.



                • #9
                  Originally posted by atomsymbol View Post
                  Will there come a time when dedicated GPUs are phased out because CPUs will be powerful enough to render most OpenGL and Vulkan games at 4K resolution?
                  Never. You ridiculously expect people to both stop advancing games and stop caring about power usage/price.



                  • #10
                    Originally posted by atomsymbol View Post
                    From a technical viewpoint (that is: not taking economics into account for a moment), the main difference between GTX 560 (released in 2011) and Ryzen 3700X (released in 2019)
                    The main difference is that nobody is going to buy a GTX 560 in 2019, i.e. it's technically incapable. And your comparison is missing a bunch of dedicated hardware that is completely absent in CPUs.
                    Last edited by pal666; 06-11-2020, 08:01 PM.

