LLVMpipe Now Exposes OpenGL 4.2 For GL On CPUs


  • #21
    Originally posted by dragon321 View Post
    It's funny how llvmpipe is conformant with OpenGL 4.4 but not 4.3 because one extension is missing. Well, I suppose it won't be very long until llvmpipe catches OpenGL 4.5 or even 4.6.

    Zink also achieves new milestones pretty quickly. It's two extensions away from OpenGL 3.1. A lot of "modern OpenGL" applications target at least 3.3, so it will be nice when Zink reaches that milestone.
    a) It's not conformant with anything; you can only be conformant once you pass the conformance tests and are listed on the official OpenGL site.

    b) It has implemented all the individual features that we wrapped up into GL 4.4, but that doesn't mean it can advertise GL 4.4 until it has all the prior features done as well.

    It's likely GL 4.3 will be all it advertises, even once I've completed all the GL 4.5 features, until it passes conformance.

    Dave.
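
    To see that distinction in practice, here is a minimal probe sketch. It assumes Linux with Mesa and GLFW 3 installed (not anything from this thread), and GL_ARB_compute_shader is just an example extension; run it with LIBGL_ALWAYS_SOFTWARE=1 to see llvmpipe's own answers.

    Code:
    /* Print the advertised GL version/renderer, then check one extension.
     * Default GLFW hints give a legacy context, so GL_EXTENSIONS is still
     * queryable via glGetString. Build with: cc probe.c -lglfw -lGL */
    #include <stdio.h>
    #include <string.h>
    #include <GLFW/glfw3.h>

    int main(void)
    {
        if (!glfwInit())
            return 1;
        glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);   /* no on-screen window needed */
        GLFWwindow *win = glfwCreateWindow(64, 64, "probe", NULL, NULL);
        if (!win) {
            glfwTerminate();
            return 1;
        }
        glfwMakeContextCurrent(win);

        /* The advertised version is the headline number... */
        printf("GL_VERSION : %s\n", (const char *)glGetString(GL_VERSION));
        printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));

        /* ...but an individual feature can be exposed as an extension
         * before the version that contains it is advertised (point b). */
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        printf("GL_ARB_compute_shader: %s\n",
               ext && strstr(ext, "GL_ARB_compute_shader") ? "yes" : "no");

        glfwDestroyWindow(win);
        glfwTerminate();
        return 0;
    }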



    • #22
      Originally posted by LinAGKar View Post
      Except there are 11 items required for OpenGL ES 3.1.
      Sorry, never mind, I was looking at the wrong column.



      • #23
        Originally posted by starshipeleven View Post
        Since the "muh singlethred puhfomance" argument never applied to GPUs, even when CPUs hit a brick wall and Moore's Law died (somewhere in the Sandy Bridge/Ivy Bridge era), GPUs kept increasing their performance each generation at more or less the same pace.
        I didn't check, but I expect llvmpipe to be multithreaded, just like GPUs are.
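
        For what it's worth, llvmpipe is indeed multithreaded: Mesa documents an LP_NUM_THREADS environment variable that sets its rasterizer thread count (0 disables threading). A small sketch, assuming Mesa and GLFW 3 on Linux, that forces the software path and confirms which renderer is in use:

        Code:
        /* Force Mesa's software rasterizer and confirm llvmpipe is in use.
         * LP_NUM_THREADS is Mesa's documented knob for llvmpipe's
         * rasterizer threads. Build with: cc swprobe.c -lglfw -lGL */
        #include <stdio.h>
        #include <stdlib.h>
        #include <GLFW/glfw3.h>

        int main(void)
        {
            /* Must be set before the GL driver is loaded. */
            setenv("LIBGL_ALWAYS_SOFTWARE", "1", 1);
            setenv("LP_NUM_THREADS", "4", 1);   /* try 0, 1, 4, ... and compare */

            if (!glfwInit())
                return 1;
            glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
            GLFWwindow *win = glfwCreateWindow(64, 64, "probe", NULL, NULL);
            if (!win) {
                glfwTerminate();
                return 1;
            }
            glfwMakeContextCurrent(win);

            /* Expect something like "llvmpipe (LLVM 10.0.0, 256 bits)". */
            printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));

            glfwDestroyWindow(win);
            glfwTerminate();
            return 0;
        }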



        • #24
          Originally posted by pal666 View Post
          I didn't check, but I expect llvmpipe to be multithreaded, just like GPUs are.
          That doesn't change what I said; I was talking about CPU hardware development.

          CPU performance stopped jumping by 40-60% each generation and started going more toward 15% in optimistic slides, while GPUs just added MOAR CORES as they always did (plus the usual architectural development) and kept increasing their power at the same pace.

          As exemplified by most gaming rigs, A LOT of people are still fine with Sandy/Ivy and slightly newer CPUs, but you still need to change your GPU every two years at most if you want to stay on top of the graphics game.

          And this should translate into GPU performance increasing MUCH faster than the performance of LLVMpipe running on a CPU.



          • #25
            I'd love to see Linux Tech Tips (pun intended) do benchmarks of LLVMpipe software rendering vs. SwiftShader, and go even more "gotta go fast" with Gentoo build-time optimizations.
            Last edited by commodore256; 07-06-2020, 07:01 PM.
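
            Pending a proper comparison, a rough microbenchmark is easy to sketch. This assumes GLFW 3 on Linux; the frame count and the clear-only workload are arbitrary, so treat it as a smoke test rather than a real benchmark. Run it as e.g. LIBGL_ALWAYS_SOFTWARE=1 LP_NUM_THREADS=8 ./bench and compare renderers:

            Code:
            /* Time a fixed number of cleared-and-swapped frames.
             * Build with: cc bench.c -lglfw -lGL */
            #include <stdio.h>
            #include <GLFW/glfw3.h>

            #define FRAMES 1000

            int main(void)
            {
                if (!glfwInit())
                    return 1;
                GLFWwindow *win = glfwCreateWindow(1280, 720, "bench", NULL, NULL);
                if (!win) {
                    glfwTerminate();
                    return 1;
                }
                glfwMakeContextCurrent(win);
                glfwSwapInterval(0);          /* disable vsync so we measure rendering */

                printf("Renderer: %s\n", (const char *)glGetString(GL_RENDERER));

                double start = glfwGetTime(); /* GLFW's high-resolution timer */
                for (int i = 0; i < FRAMES; i++) {
                    glClearColor(i / (float)FRAMES, 0.2f, 0.4f, 1.0f);
                    glClear(GL_COLOR_BUFFER_BIT);
                    glfwSwapBuffers(win);
                    glfwPollEvents();
                }
                double elapsed = glfwGetTime() - start;
                printf("%d frames in %.2fs -> %.1f fps\n",
                       FRAMES, elapsed, FRAMES / elapsed);

                glfwDestroyWindow(win);
                glfwTerminate();
                return 0;
            }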



            • #26
              Said it before and will say it again: what llvmpipe exposes is not my concern. The real issue here is performance.

              I was setting up two Windows 10 installations a couple of weeks ago, one on a Skylake laptop with the standard Intel iGPU, the other on an Athlon 3000G with embedded Vega graphics. Even without the GPU drivers, the Windows 10 GUI was able to run at full speed with all those transparency and fade effects.

              On the other hand, Gnome 3 and Plasma Wayland were practically unusable on my dual-Xeon monster with 48 processor cores under llvmpipe.

              As for the fellow who offered "just disable compositing" as a solution: go and actually use Plasma Wayland before commenting further.

              There really needs to be some form of performant CPU-backed compositing in Wayland compositors as a last resort, especially for computers that use Nvidia hardware with the Nouveau driver. QSG_RENDER_LOOP=basic is no guarantee that a Plasma Wayland session won't lock up under Nouveau, while Gnome doesn't even have the option to disable threaded GL rendering.
              Last edited by Sonadow; 07-07-2020, 05:30 AM.



              • #27
                Originally posted by starshipeleven View Post
                CPU performance stopped jumping by 40-60% each generation and started going more toward 15% in optimistic slides, while GPUs just added MOAR CORES as they always did (plus the usual architectural development) and kept increasing their power at the same pace.
                CPUs add more cores just like GPUs do. There's no difference between CPU and GPU progress with comparable workloads.
                Originally posted by starshipeleven View Post
                As exemplified by most gaming rigs, A LOT of people are still fine with Sandy/Ivy and slightly newer CPUs, but you still need to change your GPU every two years at most if you want to stay on top of the graphics game.
                Because those people run single-threaded workloads. With multithreaded workloads, a new CPU is just as important as a new GPU.
                Originally posted by starshipeleven View Post
                And this should translate into GPU performance increasing MUCH faster than the performance of LLVMpipe running on a CPU.
                You are wrong.



                • #28
                  Originally posted by pal666 View Post
                  There's no difference between CPU and GPU progress with comparable workloads.
                  Nonsense. You are not using a CPU and a GPU with comparable workloads.

                  Because those people run single-threaded workloads. With multithreaded workloads, a new CPU is just as important as a new GPU.
                  That's also what I said: CPUs have to keep single-threaded performance high because that's still a thing; GPUs don't, so they can just add cores freely.

                  You can add all the cores you want to a CPU; only workstation users will notice past the 8-core mark.

                  You are wrong.
                  I am right.



                  • #29
                    Originally posted by starshipeleven View Post
                    Nonsense. You are not using a CPU and a GPU with comparable workloads.
                    How is llvmpipe not a comparable workload for a GPU?
                    Originally posted by starshipeleven View Post
                    I am right.
                    Think again.



                    • #30
                      Originally posted by pal666 View Post
                      How is llvmpipe not a comparable workload for a GPU?
                      It is GPU work, done on a CPU. That's why CPUs suck at it.

