LLVMpipe Now Exposes OpenGL 4.2 For GL On CPUs

  • dragon321
    replied
    Originally posted by airlied View Post

    a) it's not anything conformant, you can only be conformant once you pass the conformance tests and are listed on the official OpenGL site.

    b) it has implemented all the individual features that we wrapped up into GL 4.4, it doesn't mean it can advertise GL 4.4 until it has all the prior features done as well.

    It's likely GL 4.3 will be all it advertises once I've completed all the GL 4.5 features, at least until it passes conformance.

    Dave.
    a) My bad, I used the word "conformant" as a replacement for "meets the requirements for". Sorry.

    b) I'm aware of that. I was just pointing it out jokingly.

    Good to know, thank you for the answer.

  • starshipeleven
    replied
    Originally posted by pal666 View Post
    but it's the same massively parallel workload, which is amenable to just throwing more cores at it. cpus will suck at it, but it will scale nicely with the number of cores, just like a videocard
    Yes, and since GPUs have added hundreds of cores per generation while CPUs have not, expected performance on GPUs for this parallel workload has increased much faster than expected performance on CPUs.

    That's what I said in the original post you quoted: as time goes on, the results of a CPU running LLVMpipe and a same-age GPU rendering will drift further and further apart on a performance graph.

    And I'm still right.
    Last edited by starshipeleven; 07-07-2020, 06:27 AM.

  • pal666
    replied
    Originally posted by starshipeleven View Post
    it is GPU work, done on CPU.
    That's why CPUs suck at it.
    but it's the same massively parallel workload, which is amenable to just throwing more cores at it. cpus will suck at it, but it will scale nicely with the number of cores, just like a videocard
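    The scaling claim here is essentially Amdahl's law: an almost fully parallel rasterization workload speeds up with core count until its serial fraction dominates. A minimal sketch; the 5% serial fraction is an illustrative assumption, not a measured llvmpipe figure:

    ```python
    def amdahl_speedup(cores: int, serial_fraction: float = 0.05) -> float:
        """Ideal speedup on `cores` cores for a workload whose
        non-parallelizable share is `serial_fraction` (Amdahl's law)."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    # How the speedup grows as cores are added:
    for cores in (1, 4, 16, 48):
        print(f"{cores:2d} cores -> {amdahl_speedup(cores):4.1f}x")
    ```

    Even with 95% of the work parallel, 48 cores give only about a 14x speedup over one core, so the scaling is real but sub-linear.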

  • starshipeleven
    replied
    Originally posted by pal666 View Post
    how is llvmpipe not a comparable workload for a gpu?
    it is GPU work, done on CPU.
    That's why CPUs suck at it.

  • pal666
    replied
    Originally posted by starshipeleven View Post
    Nonsense. You are not using CPUs and GPUs with comparable workloads.
    how is llvmpipe not a comparable workload for a gpu?
    Originally posted by starshipeleven View Post
    I am right
    think again

  • starshipeleven
    replied
    Originally posted by pal666 View Post
    there's no difference between cpu and gpu progress with comparable workloads
    Nonsense. You are not using CPUs and GPUs with comparable workloads.

    because those people run single-threaded workloads. with multithreaded workloads a new cpu is just as important as a new gpu
    That's what I also said: CPUs have to keep single-threaded performance high because that's still a thing, while GPUs don't, so they can just add cores freely.

    You can add all the cores you want to a CPU; only workstation users will notice past the 8-core mark.

    you are wrong
    I am right

  • pal666
    replied
    Originally posted by starshipeleven View Post
    CPU performance stopped jumping by 40-60% each generation and started going more towards the 15% in optimistic slides, while GPUs just added MOAR CORES as they always did (plus the usual architectural development) and kept increasing their power at the same pace.
    cpus add more cores just as gpus do. there's no difference between cpu and gpu progress with comparable workloads
    Originally posted by starshipeleven View Post
    As exemplified by most gaming rigs, A LOT of people are still fine with Sandy/Ivy and slightly newer CPUs, but you still need to change your GPU every two years at most if you want to stay on top of the graphics game.
    because those people run single-threaded workloads. with multithreaded workloads a new cpu is just as important as a new gpu
    Originally posted by starshipeleven View Post
    And this should translate into GPU performance increasing MUCH faster than CPU-running-LLVMpipe performance.
    you are wrong

  • Sonadow
    replied
    Said it before and will say it again: what llvmpipe exposes is not my concern. The real issue here is performance.

    I was setting up two Windows 10 installations a couple of weeks ago, one on a Skylake laptop with the standard Intel iGPU, the other on an Athlon 3000G with embedded Vega graphics. Even without the GPU drivers, the Windows 10 GUI was able to run at full speed with all the transparency and fade effects.

    On the other hand, Gnome 3 and Plasma Wayland were practically unusable on my dual-Xeon monster with 48 processor cores under llvmpipe.

    As for the fellow who offered "Just disable compositing" as a solution: go and actually use Plasma Wayland before commenting further.

    There really needs to be some form of performant CPU-backed compositing in Wayland compositors as a last resort, especially for computers that use Nvidia hardware with the Nouveau driver. QSG_RENDER_LOOP=basic is no guarantee that a Plasma Wayland session won't lock up under Nouveau, while Gnome doesn't even have the option to disable threaded GL rendering.
    Last edited by Sonadow; 07-07-2020, 05:30 AM.
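    For anyone trying to reproduce or mitigate this, the relevant knobs can be set before launching the session. A sketch, assuming a Mesa-based stack: LIBGL_ALWAYS_SOFTWARE and LP_NUM_THREADS are Mesa environment variables, QSG_RENDER_LOOP is Qt's, and the 48 is just the core count of the Xeon box above.

    ```shell
    # Force Mesa's software rasterizer (llvmpipe) instead of the GPU driver.
    export LIBGL_ALWAYS_SOFTWARE=1
    # Cap (or raise) the number of rasterizer threads llvmpipe spawns.
    export LP_NUM_THREADS=48
    # Qt Quick: use the single-threaded "basic" render loop; as noted above,
    # this does not guarantee a Plasma Wayland session won't lock up.
    export QSG_RENDER_LOOP=basic
    ```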

  • commodore256
    replied
    I'd love to see Linux Tech Tips (pun intended) do benchmarks of LLVMpipe software rendering vs. SwiftShader, and get even more "gotta go fast" with Gentoo build-time optimizations.
    Last edited by commodore256; 07-06-2020, 07:01 PM.

  • starshipeleven
    replied
    Originally posted by pal666 View Post
    i didn't check it, but i expect llvmpipe to be multithreaded just like gpus
    That does not change what I said; I was talking about CPU hardware development.

    CPU performance stopped jumping by 40-60% each generation and started going more towards the 15% in optimistic slides, while GPUs just added MOAR CORES as they always did (plus the usual architectural development) and kept increasing their power at the same pace.

    As exemplified by most gaming rigs, A LOT of people are still fine with Sandy/Ivy and slightly newer CPUs, but you still need to change your GPU every two years at most if you want to stay on top of the graphics game.

    And this should translate into GPU performance increasing MUCH faster than CPU-running-LLVMpipe performance.
