LLVMpipe Gains Support For On-Disk Shader Cache

  • LLVMpipe Gains Support For On-Disk Shader Cache

    Phoronix: LLVMpipe Gains Support For On-Disk Shader Cache

    The LLVMpipe software OpenGL implementation, which has recently seen work on MSAA, tessellation shader support, and other improvements, now has a working on-disk shader cache implementation...

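    Roughly speaking, a driver's on-disk shader cache hashes the shader (together with anything that affects code generation, such as the compiler version) into a key and stores the compiled binary under that key, so later runs can skip compilation entirely. The toy Python sketch below illustrates only that general idea; it is not LLVMpipe's actual implementation, and every name in it (CACHE_DIR, COMPILER_VERSION, compile_shader) is made up for illustration.

    Code:
import hashlib
import os
import pathlib

# Hypothetical cache location; Mesa's real cache defaults to ~/.cache/mesa_shader_cache.
CACHE_DIR = pathlib.Path(os.path.expanduser("~/.cache/toy-shader-cache"))
COMPILER_VERSION = "toy-compiler-1.0"   # real caches mix the compiler build into the key

def cache_key(shader_source: str) -> str:
    # Key = hash of the shader plus anything that changes the generated code.
    h = hashlib.sha1()
    h.update(COMPILER_VERSION.encode())
    h.update(shader_source.encode())
    return h.hexdigest()

def compile_shader(shader_source: str) -> bytes:
    # Stand-in for the expensive compile step (LLVMpipe would run its LLVM-based codegen here).
    return ("MACHINE-CODE-FOR:" + shader_source).encode()

def get_shader(shader_source: str) -> bytes:
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    path = CACHE_DIR / cache_key(shader_source)
    if path.exists():                        # cache hit: no compilation needed
        return path.read_bytes()
    blob = compile_shader(shader_source)     # cache miss: compile, then persist
    path.write_bytes(blob)
    return blob

src = "void main() { gl_FragColor = vec4(1.0); }"
print(len(get_shader(src)), "bytes, compiled")
print(len(get_shader(src)), "bytes, served from disk on the second call")

    For the real thing, Mesa keeps its cache under ~/.cache/mesa_shader_cache by default and, if memory serves, exposes environment variables to relocate or disable it (MESA_GLSL_CACHE_DIR and MESA_GLSL_CACHE_DISABLE in Mesa releases of that era).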

  • #2
    Originally posted by atomsymbol
    Will there come a time when dedicated GPUs will be phased out because CPUs will be powerful enough to render most OpenGL and Vulkan games at 4K resolution?

    For sure, it is impossible for that to happen with dual-channel DDR4/DDR5 memory, or alternatively, with less than 1 gigabyte of L3 CPU cache.
    It is funny to think about if that ever happens. Computers and GUIs started out without any real graphical acceleration; GPUs were an invention of necessity. But some day we might come full circle and go back to processing everything on a single processor.

    I don't predict this happening with x86, but RISC-V has the potential for something like a 5,000-core "all-in-one" CPU. (A rough bandwidth comparison is sketched below.)
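
    To make the bandwidth point in the quote concrete, here is a rough back-of-envelope comparison. All numbers are my own assumptions (an RGBA8 framebuffer, an assumed 50x traffic multiplier for overdraw, depth, textures and post-processing, and DDR4-3200 versus 14 Gbps GDDR6 as the memory examples), not anything from the article:

    Code:
# Rough estimate: memory traffic for 4K60 rendering vs. available memory bandwidth.
# Every figure here is an illustrative assumption, not a measurement.

width, height, fps = 3840, 2160, 60
bytes_per_pixel    = 4     # RGBA8 color target
traffic_multiplier = 50    # assumed overdraw + depth + texture + post-processing traffic

needed_gbs = width * height * bytes_per_pixel * fps * traffic_multiplier / 1e9

ddr4_3200_dual_channel_gbs = 2 * 64 / 8 * 3200e6 / 1e9   # ~51.2 GB/s theoretical peak
gddr6_256bit_gbs           = 256 / 8 * 14e9 / 1e9        # ~448 GB/s, a typical discrete GPU

print(f"estimated traffic : ~{needed_gbs:.0f} GB/s")
print(f"dual-channel DDR4 : ~{ddr4_3200_dual_channel_gbs:.1f} GB/s")
print(f"256-bit GDDR6     : ~{gddr6_256bit_gbs:.0f} GB/s")

    Even with fairly generous assumptions, the CPU's memory system comes up well short before the shading work itself is even counted, which is the quoted post's point about dual-channel memory.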

    • #3
      Originally posted by atomsymbol
      Will there come a time when dedicated GPUs will be phased out because CPUs will be powerful enough to render most OpenGL and Vulkan games at 4K resolution?

      For sure, it is impossible for that to happen with dual-channel DDR4/DDR5 memory, or alternatively, with less than 1 gigabyte of L3 CPU cache.
      It's economics. Today's CPUs can surely match the dedicated-GPU performance of n years ago, and even way back then you could have bought enough CPU power to do the graphics, but it would have been far too expensive compared to the price of the GPU solution. That's still true. We have GPUs because, for a very specific type of data processing, you get more bang for your buck with special-purpose solutions. Doing it on a general-purpose CPU wastes CPU and cache capability, and possibly energy too. To take another specialty processor as an example: hardware video decoding uses less power than having a general-purpose CPU do the same task.

      • #4
        Originally posted by atomsymbol

        ... considering that almost all x86 CPUs, except maybe the ultra-low-power ones, are in the midst of transitioning from fetching x86-encoded instructions to fetching from a µop cache which has an internal instruction encoding scheme ...
        Not sure if this is corception or ISAception. xD

        • #5
          Originally posted by atomsymbol
          GPUs will be phased out
          Will not happen. Specialization exists for a reason, and computational requirements are constantly growing.

          • #6
            Originally posted by atomsymbol
            Will there come a time when dedicated GPUs will be phased out because CPUs will be powerful enough to render most OpenGL and Vulkan games at 4K resolution?
            Never. You're ridiculously expecting people to both stop advancing games and stop caring about power usage and price.

            • #7
              Originally posted by atomsymbol
              From a technical viewpoint (that is: not taking economics into account for a moment), the main difference between GTX 560 (released in 2011) and Ryzen 3700X (released in 2019)
              The main difference is that nobody is going to buy a GTX 560 in 2019, i.e. it is technically inadequate by now. And your comparison is missing a bunch of dedicated hardware that is completely absent in CPUs.
              Last edited by pal666; 11 June 2020, 08:01 PM.

              • #8
                Originally posted by atomsymbol

                New special-purpose instructions are constantly being introduced to x86 CPUs. It might happen that the distance between a future x86 instruction set and the instruction set of a GPU will be so small that the GPU will lose its comparative advantage.
                Anything that can be added to an x86 CPU can also be done in a separate discrete card, with lots of additional supporting hardware around it that ensures it handles the special case of graphics faster.

                So your question is essentially whether people will eventually stop caring about whether their GPU is fast and settle for something "good enough".

                I'd argue the answer to that question is self-evident.
