LLVMpipe Still Doesn't Work For Linux Gaming


  • phoronix
    started a topic LLVMpipe Still Doesn't Work For Linux Gaming

    Phoronix: LLVMpipe Still Doesn't Work For Linux Gaming

    For those curious what OpenGL gaming frame-rates are like if trying to run LLVMpipe on the latest Intel Ivy Bridge processors, here are some numbers...

    http://www.phoronix.com/vr.php?view=MTEwODM

  • aceman
    replied
Originally posted by curaga
    glxinfo will say which driver you have, and you can force softpipe with the GALLIUM_DRIVER=softpipe var IIRC.
    OK, so I have this:
    OpenGL vendor string: VMware, Inc.
    OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 0x300)
    OpenGL version string: 2.1 Mesa 8.1-devel
    OpenGL shading language version string: 1.20

But I have just tested it on celestia, and it now has 5 threads (on a 4-core CPU), which I hadn't noticed before. They are not using the cores at 100%, only about 25% each, and the program is slow. Still, since I hadn't seen this threading before, there may be progress. Maybe it is due to my recent upgrade to X.org server 1.12 and LLVM 3.0.
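As an aside, here is a sketch of how the per-thread behaviour can be confirmed from a terminal (celestia is just the example process name; pgrep and the usual Linux /proc layout are assumed):

```shell
# Per-thread view of a running GL program; llvmpipe's rasterizer
# threads appear as separate entries under the same PID.
pid=$(pgrep -x celestia | head -n1)
grep Threads "/proc/$pid/status"   # total thread count
top -H -p "$pid"                   # live per-thread CPU usage
```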

  • curaga
    replied
    glxinfo will say which driver you have, and you can force softpipe with the GALLIUM_DRIVER=softpipe var IIRC.

    I seem to recall there was a var to limit llvmpipe's threads too, but not sure on that.
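A sketch of the above from a terminal (glxinfo and glxgears assumed installed; LP_NUM_THREADS is my best recollection of the thread-limit variable, so treat it as an assumption):

```shell
# Which driver is actually rendering?
glxinfo | grep "OpenGL renderer"

# Force plain softpipe instead of llvmpipe for a single run:
GALLIUM_DRIVER=softpipe glxinfo | grep "OpenGL renderer"

# Cap llvmpipe's rasterizer threads (LP_NUM_THREADS is an assumption here):
LP_NUM_THREADS=2 glxgears
```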

  • aceman
    replied
Originally posted by sandy8925
    Are multiple threads being used or only a single thread?
This would interest me too. I always read here how LLVMpipe is better than the old softpipe at using all cores and new CPU instructions, but I never see that in real usage. If I kill direct rendering (e.g. by chmod -w /dev/dri/card*), there should be software rendering. I only compile LLVMpipe and R600 (both Gallium3D), so I think the software rendering should be using LLVMpipe. But no 3D program shows up in top as using multiple threads across all cores (and the programs render slowly).

What am I doing wrong? Some bad arguments to the Mesa build? Or does this only start with some specific LLVM version?
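For what it's worth, a less invasive way to force software rendering than chmod on the DRI nodes, assuming a reasonably recent Mesa, is the LIBGL_ALWAYS_SOFTWARE variable (a sketch, not a confirmed answer to the threading question):

```shell
# Ask Mesa for software rendering without touching /dev/dri:
LIBGL_ALWAYS_SOFTWARE=1 glxinfo | grep "OpenGL renderer"
# If Mesa was built with llvmpipe, the renderer string should mention it.
```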

  • airlied
    replied
Originally posted by smitty3268
    Not really, no.

    Well, the next major milestone is GL 3 support. It seems like it's pretty close, so hopefully it makes it as part of the Mesa 8.1 release, but no one has actually committed to making sure that happens.


    I think the main thing is just adding new features, like GL3. I'm not sure anyone has thought about the best way to bring OpenCL support to it yet.

    There was that one project to add a kernel side to the driver, which would let it avoid making a bunch of memory copies that it currently has to do. I'm not sure what the status of that was, if it's in with some of the DMA-BUF work or what. Beyond that, I don't think anyone is particularly focused on the performance of the driver. Just adding new features seems to be what most people are looking at.
There isn't really an llvmpipe roadmap; nobody is really pushing features on it at the moment. VMware seem to be using it, but no major new speedups have shown up.

I was doing GL3 support as a spare-time project, but my spare time decided it would rather do something else, so I might get back to it eventually.

    The kernel stuff was only for making texture-from-pixmap faster so gnome-shell can go faster, it doesn't make llvmpipe itself go faster at all.

    Dave.

  • c0d1f1ed
    replied
    I think these results are actually quite impressive. Sure, it isn't matching the performance of dedicated hardware, but did anyone seriously expect that at this point? The gap between the CPU and GPU is remarkably small considering that the CPU is a fully generic processor.

    I am particularly interested in what this might hold for the future. The AVX2 instruction set extension has four times the floating-point vector throughput, two times the integer vector throughput, and gather instructions which replace 18 legacy instructions!

    So what are the LLVMpipe developers' thoughts on the future of real-time CPU rendering?

  • dnebdal
    replied
Originally posted by allquixotic
    LLVMpipe just uses LLVM's optimizing compiler (although how much optimization it does is debatable, considering the poor performance of its generated code at least for x86 binaries)
Uhm? The x86 output is perfectly decent as long as it's not using OpenMP: slightly slower than the best GCC results, but very seldom anything you'd actually notice in a real-life situation.

  • madbiologist
    replied
    And now for a repeat of these LLVMpipe benchmarks with LLVM 3.1?

  • FireBurn
    replied
Originally posted by allquixotic
    Video acceleration would be pointless, since you could write an equally-fast (or faster) software decoder for the formats you want, or just use ffmpeg's, which are probably going to be the fastest software decoders available.

    Again, the whole point of llvmpipe is that there's no GPU hardware being used, so any features you can think of that are obviated by existing software solutions (such as video decoding) are probably not going to be worked on.
I'd say it's quite the opposite: support gets added to llvmpipe to compare against and debug the hardware drivers. That's llvmpipe's real purpose, not games.

  • allquixotic
    replied
Originally posted by uid313
    Does LLVMpipe use GEM? TTM? KMS? video acceleration?
KMS is something that would be used by your DDX, not really by LLVMpipe. LLVMpipe just uses LLVM's optimizing compiler (although how much optimization it does is debatable, considering the poor performance of its generated code at least for x86 binaries) to back OpenGL calls and produce "efficient" native code to run them on the CPU. It could probably support other state trackers, but right now there's no point.

    It doesn't use/support GEM or TTM because that would require an LLVMpipe kernel module which doesn't exist (yet?).

    Video acceleration would be pointless, since you could write an equally-fast (or faster) software decoder for the formats you want, or just use ffmpeg's, which are probably going to be the fastest software decoders available.

    Again, the whole point of llvmpipe is that there's no GPU hardware being used, so any features you can think of that are obviated by existing software solutions (such as video decoding) are probably not going to be worked on.
