
Thread: Reasons Mesa 9.1 Is Still Disappointing For End-Users

  1. #1
    Join Date
    Jan 2007
    Posts
    14,371

    Default Reasons Mesa 9.1 Is Still Disappointing For End-Users

    Phoronix: Reasons Mesa 9.1 Is Still Disappointing For End-Users

    While Mesa 9.1 brings a number of improvements to this open-source graphics stack made over the past six months, as far as end-users are concerned there are still a number of shortcomings...

    http://www.phoronix.com/vr.php?view=MTMwMTg

  2. #2
    Join Date
    Aug 2007
    Posts
    6,607

    Default

    I don't get what is so problematic with floating point textures. Even Debian has enabled them by default since 2011-09-24.

    http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=635651
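
    For reference, whether a given build exposes floating-point textures can be checked at runtime by looking for the relevant extension string. A minimal sketch in C; the helper name is just for illustration, and it assumes a current compatibility-profile GL context (e.g. from SDL or GLUT):

        /* Report whether the running driver exposes floating-point textures.
         * Assumes a current compatibility-profile GL context so that
         * glGetString(GL_EXTENSIONS) returns the full extension list. */
        #include <string.h>
        #include <GL/gl.h>

        int has_texture_float(void)
        {
            const char *ext = (const char *)glGetString(GL_EXTENSIONS);
            return ext && strstr(ext, "GL_ARB_texture_float") != NULL;
        }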

    s3tc is only a tiny problem - first of all, when you use precompressed textures it does not matter at all if only s2tc is installed. s2tc can look a bit worse than s3tc, but even Ubuntu ships it by default now - and current Kanotix test images do as well.
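
    To illustrate the precompressed case: the application hands the driver ready-made S3TC blocks, so no compressor ever runs on the Mesa side and the GPU decodes the blocks in hardware. A minimal sketch in C, assuming a current GL context that advertises GL_EXT_texture_compression_s3tc (e.g. via s2tc or libtxc_dxtn) and block data already loaded by the caller, for example from a .dds file:

        /* Upload one mip level of a precompressed DXT1 texture. The data is
         * already in S3TC block form, so the driver only passes it through. */
        #include <GL/gl.h>
        #include <GL/glext.h>

        void upload_dxt1(GLuint tex, GLsizei w, GLsizei h, const void *blocks)
        {
            /* DXT1 stores 4x4 texel blocks of 8 bytes each. */
            GLsizei nbytes = ((w + 3) / 4) * ((h + 3) / 4) * 8;

            glBindTexture(GL_TEXTURE_2D, tex);
            glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                                   GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                                   w, h, 0, nbytes, blocks);
        }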

    Ok, there is OpenGL 4, but if you forget the Unigine engine, which Linux game uses it? Most likely no major game at all. I just want more complete OpenGL 3 support so that at least Doom 3 BFG (via Wine or RBDoom3) or Rage can run.

    Speed is a much more interesting thing - mainly Intel is too slow compared to the Windows drivers when you play games (if they run at all). If you forget the extra-simple OSS games, a game with > 60 fps @ 1080p is boring to benchmark anyway when it is locked to 60 fps and has no benchmark mode. Ok, you can buy a 120 Hz TFT, but how many people use them?

    Nouveau is nice to have, but it is not critical for gaming, since you can still use the NV 304 drivers for GeForce 6/7 on current distros. With radeon, on the other hand, the open drivers are a requirement for cards below the HD 2000 series, or when you want to use a newer X server than 1.12. Basically you gain nothing from a new X server with the binary drivers, so you could still downgrade it and lose nothing - not so nice to do, but basically possible.

  3. #3
    Join Date
    Dec 2008
    Location
    Creve Coeur, Missouri
    Posts
    394

    Default

    To be fair, there is a reason that they stopped work on VA-API using shaders. It took more CPU and GPU power than it was worth. While it seemed like a good idea to begin with, the shaders themselves really aren't suited for the job.

  4. #4
    Join Date
    Oct 2007
    Posts
    1,259

    Default

    Unless there's some fundamental change to how Mesa is developed or the rate at which new OpenGL specifications are ratified, it will be quite a while before Mesa is caught up with the proprietary drivers that are already doing OpenGL 4.x.
    Mesa may be behind Khronos, but it will always be behind the spec. As Kano alluded to, it's more important to look at the gap between Mesa's capabilities and the requirements of the apps that most users want to run. That gap has been steadily shrinking over the years, and having full OpenGL 3.3 support impresses me after seeing how long it took to get to full OpenGL 2.1. I don't know, but it seems that Mesa IS being developed at a faster rate.
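
    A quick way to see where a given Mesa build actually stands is to query the strings the driver reports - the same information glxinfo prints. A minimal sketch in C, assuming a current GL context created elsewhere:

        /* Print the versions the installed driver actually exposes. */
        #include <stdio.h>
        #include <GL/gl.h>
        #include <GL/glext.h>

        void print_gl_versions(void)
        {
            printf("GL_VENDOR:   %s\n", glGetString(GL_VENDOR));
            printf("GL_RENDERER: %s\n", glGetString(GL_RENDERER));
            printf("GL_VERSION:  %s\n", glGetString(GL_VERSION));
            printf("GLSL:        %s\n",
                   glGetString(GL_SHADING_LANGUAGE_VERSION));
        }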

    Performance Parity with proprietary drivers
    Unrealistic goal, so of course anyone expecting this is going to be disappointed. I didn't find a million dollars on the sidewalk today either...

    Personally, the most disappointing thing is radeon VDPAU acceleration, because it's being held up for legal reasons. The fruit's in the tree, but a team of AMD lawyers is guarding it, waiting to put their wingtips up anyone's rear who gets near it. Not cool.

  5. #5
    Join Date
    Jan 2007
    Posts
    418

    Default

    Quote Originally Posted by LinuxID10T View Post
    To be fair, there is a reason that they stopped work on VA-API using shaders. It took more CPU and GPU power than it was worth. While it seemed like a good idea to begin with, the shaders themselves really aren't suited for the job.
    I still don't understand the reasoning behind it. It's a video acceleration API for decoding (and encoding?) of media. It doesn't really have anything to do with shaders. It can use shaders, but it can also use dedicated hardware (Intel only atm? there is a VDPAU backend too). You should be able to plug in any decoder backend.

    Unless of course this is simply about a VA-API shader backend, which the article wasn't clear about, in which case my comment becomes moot.

  6. #6
    Join Date
    Oct 2008
    Posts
    3,038

    Default

    Quote Originally Posted by oliver View Post
    I still don't understand the reasoning behind it. It's a video acceleration API for decoding (and encoding?) of media. It doesn't really have anything to do with shaders. It can use shaders, but it can also use dedicated hardware (Intel only atm? there is a VDPAU backend too). You should be able to plug in any decoder backend.

    Unless of course this is simply about a VA-API shader backend, which the article wasn't clear about, in which case my comment becomes moot.
    The reasoning behind removing VAAPI was that no one was really working on it. Everyone was focusing on VDPAU, and since it provides (roughly) the same features that VAAPI did, there didn't seem to be much point in keeping it around. Especially since, as I mentioned, no one was putting any effort into it.

    I suspect that if VDPAU support in Mesa ever really matures, someone will add VAAPI back in and utilize the same backends that are created for VDPAU. But that probably won't happen anytime soon, because the VDPAU support still needs a lot of work before it can be considered done, and until then it makes sense for the video acceleration guys to keep working on it rather than spreading the work around elsewhere.

    The work on it for radeon hardware has focused on using shaders (at least publicly). There might be a private branch somewhere that uses the dedicated hardware, but who knows if that will ever see the light of day. I think the nouveau folks, though, were working on reverse engineering the PureVideo hardware on NVIDIA cards, so they wouldn't have to use shaders except for post-processing, just like the proprietary drivers do.
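
    For what it's worth, applications never see which path the driver took - shader-based or fixed-function decode sits behind the same libvdpau entry points. A minimal probe sketch in C (build with -lvdpau -lX11), assuming an X11 session:

        /* Probe whether a VDPAU driver can be loaded for the current X11
         * setup; which backend does the decoding is hidden behind this call. */
        #include <stdio.h>
        #include <X11/Xlib.h>
        #include <vdpau/vdpau_x11.h>

        int main(void)
        {
            Display *dpy = XOpenDisplay(NULL);
            if (!dpy)
                return 1;

            VdpDevice dev;
            VdpGetProcAddress *get_proc;
            VdpStatus st = vdp_device_create_x11(dpy, DefaultScreen(dpy),
                                                 &dev, &get_proc);
            printf("vdp_device_create_x11: %s\n",
                   st == VDP_STATUS_OK ? "OK" : "failed");

            XCloseDisplay(dpy);
            return st != VDP_STATUS_OK;
        }
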
    Last edited by smitty3268; 02-15-2013 at 04:05 AM.

  7. #7
    Join Date
    Sep 2010
    Posts
    655

    Default

    Michael, you can add that some distros disable Intel support for floating point textures. I could not believe it, but I have seen the patches for Mesa in the Fedora Package Database.

    No idea why Fedora does it (as it's not working UNTIL you install a lib that is not present in the official Fedora repo...).

    ---

    Of course WE NEED OpenGL 4.x!!
    There are plenty of games ON WINDOWS that use DX11, READY FOR PORTING.

    However, the rise of MOBILE may shift the focus to OpenGL ES 3.0, which is already supported in Mesa.

  8. #8
    Join Date
    Sep 2012
    Posts
    279

    Default Two branches of OpenGL

    Quote Originally Posted by przemoli View Post
    However, the rise of MOBILE may shift the focus to OpenGL ES 3.0, which is already supported in Mesa.
    Exactly what I was thinking... If OpenGL ES 3.0 is a modern API with good features (including patent-free texture compression) and compatibility with both mobile and desktop, why have two OpenGL "branches"?

  9. #9
    Join Date
    Sep 2010
    Posts
    655

    Default

    Quote Originally Posted by wargames View Post
    Exactly what I was thinking... If OpenGL ES 3.0 is a modern API with good features (including patent-free texture compression) and compatibility with both mobile and desktop, why have two OpenGL "branches"?
    Because I define MOBILE as a tight power budget, with an even smaller battery.

    Even netbooks can have enough battery to sustain a power-hungry GPU. MOBILE cannot.

    That is why we need "full" OpenGL and "power efficient" OpenGL ES.

  10. #10
    Join Date
    Oct 2012
    Location
    Washington State
    Posts
    415

    Default

    Quote Originally Posted by przemoli View Post
    Because I define MOBILE as a tight power budget, with an even smaller battery.

    Even netbooks can have enough battery to sustain a power-hungry GPU. MOBILE cannot.

    That is why we need "full" OpenGL and "power efficient" OpenGL ES.
    OpenGL 4.3 was specifically designed to interoperate with OpenGL ES 3.0.

    OpenGL 4.3

    Released on 6 August 2012
    Supported Cards: Nvidia GeForce 400 series, Nvidia GeForce 500 series, Nvidia GeForce 600 series, Nvidia GeForce 700 series

    • Compute shaders leveraging GPU parallelism within the context of the graphics pipeline
    • Shader storage buffer objects
    • Texture parameter queries
    • High-quality ETC2/EAC texture compression as a standard feature
    • Full compatibility with OpenGL ES 3.0 APIs
    • Debug capabilities to receive debugging messages during application development
    • Texture views for interpreting textures in different ways without data replication
    • Increased memory security
    • A multi-application robustness extension
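
    As a small illustration of the first two bullets (compute shaders and shader storage buffer objects), here is a minimal sketch in C, assuming an OpenGL 4.3 context is current, that entry points are resolved (GL_GLEXT_PROTOTYPES works with Mesa's headers), and that the caller has already bound a buffer of floats at SSBO binding point 0:

        /* Run a tiny GLSL 4.30 compute shader that doubles every float in a
         * shader storage buffer. Error checking is omitted for brevity. */
        #define GL_GLEXT_PROTOTYPES
        #include <GL/gl.h>
        #include <GL/glext.h>

        static const char *cs_src =
            "#version 430\n"
            "layout(local_size_x = 64) in;\n"
            "layout(std430, binding = 0) buffer Data { float v[]; };\n"
            "void main() { v[gl_GlobalInvocationID.x] *= 2.0; }\n";

        void run_doubler(GLuint n_groups)
        {
            GLuint cs = glCreateShader(GL_COMPUTE_SHADER);
            glShaderSource(cs, 1, &cs_src, NULL);
            glCompileShader(cs);

            GLuint prog = glCreateProgram();
            glAttachShader(prog, cs);
            glLinkProgram(prog);
            glDeleteShader(cs);

            glUseProgram(prog);
            glDispatchCompute(n_groups, 1, 1);   /* n_groups * 64 invocations */
            glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
        }
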
    Doing the heavy lifting between 3.3 and 4.0 with the focus on 4.3 isn't wasted work. If Linux is ever to garner desktop market share, and thus create a robust set of native user applications that can even offer support contracts for new feature requests, then Mesa is the only chance for this to happen.

    Following the R600 backend now that it is in mainline LLVM makes it clear how much sooner OpenCL/OpenGL quality will arrive than if Tom had never worked to get it into the LLVM/Clang mainline. The extra input and commits from other members [including architecture design solutions] are well worth the effort. Linux will get it for free.
