
Thread: Reasons Mesa 9.1 Is Still Disappointing For End-Users


  1. #1
    Join Date
    Jan 2007
    Posts
    15,193

    Default Reasons Mesa 9.1 Is Still Disappointing For End-Users

    Phoronix: Reasons Mesa 9.1 Is Still Disappointing For End-Users

    While Mesa 9.1 represents a number of improvements to this open-source graphics stack that were made over the past six months, as far as end-users are concerned, there's still a number of shortcomings...

    http://www.phoronix.com/vr.php?view=MTMwMTg

  2. #2
    Join Date
    Aug 2007
    Posts
    6,641

    Default

    I don't get what is so problematic with floating point. Even Debian has enabled it by default since 2011-09-24.

    http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=635651
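    For anyone who wants to check whether their own Mesa build has it enabled, here is a minimal sketch in C - everything in it is standard OpenGL, and it assumes a GL context is already current:

        /* Sketch: detect GL_ARB_texture_float and allocate a float texture.
         * Assumes a current OpenGL context (GLX, SDL, ...). The strstr()
         * match is crude but fine for a quick check. */
        #define GL_GLEXT_PROTOTYPES
        #include <GL/gl.h>
        #include <GL/glext.h>
        #include <stdio.h>
        #include <string.h>

        static int has_extension(const char *name)
        {
            const char *exts = (const char *)glGetString(GL_EXTENSIONS);
            return exts && strstr(exts, name) != NULL;
        }

        void try_float_texture(void)
        {
            if (!has_extension("GL_ARB_texture_float")) {
                fprintf(stderr, "driver built without float textures\n");
                return;
            }
            GLuint tex;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            /* GL_RGBA32F_ARB is the float internal format from the extension */
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, 256, 256, 0,
                         GL_RGBA, GL_FLOAT, NULL);
        }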

    S3TC is only a tiny problem - first of all, when you use precompressed textures it does not matter at all if only S2TC is installed. S2TC can look a bit worse than S3TC, but even Ubuntu ships it by default now - and current Kanotix test images do as well.
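    The reason precompressed textures are unaffected is that uploading ready-made DXT data only exercises the decompression side of the driver - no S3TC compressor is involved. A minimal sketch, assuming a current GL context and DXT5 data loaded elsewhere (e.g. from a DDS file):

        /* Sketch: upload a precompressed DXT5 texture. The driver only has
         * to decode it; how the data was compressed does not matter. */
        #define GL_GLEXT_PROTOTYPES
        #include <GL/gl.h>
        #include <GL/glext.h>

        void upload_dxt5(const void *dxt5_data, int w, int h, int size)
        {
            GLuint tex;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                                   GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                                   w, h, 0, size, dxt5_data);
        }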

    OK, there is OpenGL 4, but apart from the Unigine engine, which Linux game uses it? Most likely no major game at all. I just want more complete OpenGL 3 support, so that at least Doom 3 BFG (via wine or RBDoom3) or Rage can run.

    Speed is a much more interesting thing - mainly, Intel is too slow compared to the Windows drivers when you play games (if they run at all), if you ignore the extra-simple OSS games. A game with > 60 fps @ 1080p is boring to benchmark anyway when it is locked to 60 fps and has no benchmark mode. OK, you can buy a 120 Hz TFT, but how many people use one?

    Nouveau is nice to have, but it is not critical for gaming, as you can still use the Nv 304 drivers for GeForce 6/7 on current distros. When you look at radeon, though, the open drivers are a requirement for cards below HD 2000, or when you want to use a newer X server than 1.12. Basically you gain nothing from a new X server with the binary drivers, so you could downgrade it and lose nothing - not so nice to do, but basically possible.
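    On the 60 fps lock: that is normally just vsync, and where the GLX_EXT_swap_control extension is available a benchmark can switch it off at runtime. A rough sketch - the entry point has to be looked up, since it is an extension:

        /* Sketch: disable vsync via GLX_EXT_swap_control so a benchmark
         * is not capped at the display refresh rate. Assumes dpy and
         * drawable come from an existing GLX setup. */
        #include <GL/glx.h>

        typedef void (*SwapIntervalEXT)(Display *, GLXDrawable, int);

        void disable_vsync(Display *dpy, GLXDrawable drawable)
        {
            SwapIntervalEXT swap_interval = (SwapIntervalEXT)
                glXGetProcAddressARB((const GLubyte *)"glXSwapIntervalEXT");
            if (swap_interval)
                swap_interval(dpy, drawable, 0); /* 0 = do not wait for vblank */
        }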

  3. #3
    Join Date
    Dec 2008
    Location
    Creve Coeur, Missouri
    Posts
    402

    Default

    To be fair, there is a reason that they stopped work on VA-API using shaders. It took more CPU and GPU power than it was worth. While it seemed like a good idea to begin with, the shaders themselves really aren't suited for the job.

  4. #4
    Join Date
    Jan 2007
    Posts
    418

    Default

    Quote Originally Posted by LinuxID10T View Post
    To be fair, there is a reason that they stopped work on VA-API using shaders. It took more CPU and GPU power than it was worth. While it seemed like a good idea to begin with, the shaders themselves really aren't suited for the job.
    I still don't understand the reasoning behind it. It's a video acceleration API for decoding (and encoding?) media. It doesn't really have anything to do with shaders. It can use shaders, but it can also use dedicated hardware (Intel only at the moment? with a VDPAU backend too). You should be able to plug in any decoder backend.

    Unless of course this is simply about a VA-API shader backend, on which the article wasn't clear - in which case my comment becomes moot.
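    For what it's worth, the API side really is backend-agnostic: an application just opens a VADisplay and the libva loader picks a driver (LIBVA_DRIVER_NAME can override the choice). A minimal sketch that lists the decode profiles whatever backend gets loaded reports:

        /* Sketch: initialize VA-API on an X11 display and list the decode
         * profiles the loaded driver backend supports. */
        #include <va/va.h>
        #include <va/va_x11.h>
        #include <X11/Xlib.h>
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            Display *x11 = XOpenDisplay(NULL);
            if (!x11) return 1;

            VADisplay va = vaGetDisplay(x11);
            int major, minor;
            if (vaInitialize(va, &major, &minor) != VA_STATUS_SUCCESS)
                return 1;

            int max = vaMaxNumProfiles(va);
            VAProfile *profiles = malloc(max * sizeof(*profiles));
            if (!profiles) return 1;

            int count = 0;
            if (vaQueryConfigProfiles(va, profiles, &count) == VA_STATUS_SUCCESS)
                for (int i = 0; i < count; i++)
                    printf("profile id %d supported\n", (int)profiles[i]);

            free(profiles);
            vaTerminate(va);
            XCloseDisplay(x11);
            return 0;
        }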

  5. #5
    Join Date
    Oct 2008
    Posts
    3,183

    Default

    Quote Originally Posted by oliver View Post
    I still don't understand the reasoning behind it. It's a video acceleration API for decoding (and encoding?) media. It doesn't really have anything to do with shaders. It can use shaders, but it can also use dedicated hardware (Intel only at the moment? with a VDPAU backend too). You should be able to plug in any decoder backend.

    Unless of course this is simply about a VA-API shader backend, on which the article wasn't clear - in which case my comment becomes moot.
    The reasoning behind removing VAAPI was that no one was really working on it. Everyone was focusing on VDPAU, and since it provides (roughly) the same features that VAAPI did, there didn't seem to be much point in keeping it around. Especially since, as I mentioned, no one was putting any effort into it.

    I suspect that if VDPAU support in Mesa ever really matures, someone will add VAAPI back in and utilize the same backends that were created for VDPAU. But that probably won't happen anytime soon, because the VDPAU support still needs a lot of work before it can be considered done, and until then it makes sense for the video acceleration guys to keep working on it rather than spreading the work around elsewhere.

    The work on it for radeon hardware has focused on using shaders (at least publicly). There might be a private branch somewhere that uses the dedicated hardware, but who knows if that will ever see the light of day. The nouveau folks, though, I think were working on reverse engineering the PureVideo hardware on nvidia cards, so they wouldn't have to use shaders except for post-processing, just like the proprietary drivers do.
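    This is also why the backend question is invisible to players: a VDPAU client only creates a device and pulls every other entry point through a get_proc_address callback, so a shader-based backend and dedicated decode hardware look identical to the application. A minimal sketch, assuming an X11 display:

        /* Sketch: create a VDPAU device and ask whether H.264 High decode
         * is supported. Whether the backend uses shaders or fixed-function
         * hardware cannot be seen at this level. */
        #include <vdpau/vdpau.h>
        #include <vdpau/vdpau_x11.h>
        #include <X11/Xlib.h>
        #include <stdio.h>

        int main(void)
        {
            Display *x11 = XOpenDisplay(NULL);
            if (!x11) return 1;

            VdpDevice device;
            VdpGetProcAddress *get_proc_address;
            if (vdp_device_create_x11(x11, DefaultScreen(x11), &device,
                                      &get_proc_address) != VDP_STATUS_OK)
                return 1;

            VdpDecoderQueryCapabilities *query;
            if (get_proc_address(device, VDP_FUNC_ID_DECODER_QUERY_CAPABILITIES,
                                 (void **)&query) != VDP_STATUS_OK)
                return 1;

            VdpBool supported;
            uint32_t max_level, max_macroblocks, max_width, max_height;
            if (query(device, VDP_DECODER_PROFILE_H264_HIGH, &supported,
                      &max_level, &max_macroblocks, &max_width,
                      &max_height) == VDP_STATUS_OK)
                printf("H.264 High decode: %s (up to %ux%u)\n",
                       supported ? "yes" : "no",
                       (unsigned)max_width, (unsigned)max_height);

            XCloseDisplay(x11);
            return 0;
        }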
    Last edited by smitty3268; 02-15-2013 at 04:05 AM.

  6. #6
    Join Date
    Sep 2010
    Posts
    706

    Default

    Michael, you can add that some distros disable Intel support for floating-point textures. I could not believe it, but I've seen the patches for Mesa in the Fedora Package Database.

    No idea why Fedora does it (as it's not working UNTIL you install a lib that is not present in the official Fedora repo...).

    ---

    Of course WE NEED OpenGL 4.x!!
    There are plenty of games ON WINDOWS that use DX11, READY FOR PORTING.

    However, the rise of MOBILE may shift the focus to OpenGL ES 3.0, which is already supported in Mesa.

  7. #7
    Join Date
    Oct 2007
    Posts
    1,290

    Default

    Unless there's some fundamental change to how Mesa is developed or the rate at which new OpenGL specifications are ratified, it will be quite a while before Mesa is caught up with the proprietary drivers that are already doing OpenGL 4.x.
    Mesa may be behind Khronos, but they will always be behind the spec. As Kano alluded to, it's more important to look at the gap between Mesa's capability and the requirements of the apps most users want to run. That gap has been steadily shrinking over the years, and having full OpenGL 3.3 support impresses me after seeing how long it took to get to full OpenGL 2.1. I don't know, but it seems that Mesa IS being developed at a faster rate now.
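    That gap is also easy to measure from the application side, since a context reports exactly what the driver implements. A small sketch, assuming a current GL context:

        /* Sketch: print the OpenGL and GLSL versions the current driver
         * exposes. Assumes a context is already current (GLX, SDL, ...). */
        #define GL_GLEXT_PROTOTYPES
        #include <GL/gl.h>
        #include <GL/glext.h>
        #include <stdio.h>

        void print_gl_caps(void)
        {
            printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
            printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
            printf("GLSL:        %s\n",
                   (const char *)glGetString(GL_SHADING_LANGUAGE_VERSION));
        }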

    Performance Parity with proprietary drivers
    Unrealistic goal, so of course anyone expecting this is going to be disappointed. I didn't find a million dollars on the sidewalk today either...

    Personally, the most disappointing thing is radeon VDPAU acceleration, because it's being held up for legal reasons. The fruit's in the tree, but a team of AMD lawyers is guarding it, waiting to put their wingtips up anyone's rear who gets near it. Not cool.

  8. #8
    Join Date
    Oct 2008
    Posts
    104

    Default

    Quote Originally Posted by DanL View Post
    Personally, the most disappointing thing is radeon VDPAU acceleration, because it's being held up for legal reasons. The fruit's in the tree, but a team of AMD lawyers is guarding it, waiting to put their wingtips up anyone's rear who gets near it. Not cool.
    Blame Hollywood and their stupid "Everything must be copy protected, even though no sane pirate would bother trying to read content out of the video card buffers, therefore documentation for the video decoding hardware would be bad" BS.

  9. #9
    Join Date
    May 2007
    Location
    Nurnberg.
    Posts
    327

    Default

    Quote Originally Posted by DanL View Post
    Unrealistic goal, so of course anyone expecting this is going to be disappointed. I didn't find a million dollars on the sidewalk today either...
    WRONG!

    As I have proven with Q3A, there is no real reason for this to be so. The lima driver will have performance that matches the binary driver; the only thing that can slow us down is the design of Mesa itself, and that can be kicked and kicked until it behaves.

    That some hardware designs depend on software optimization is a problem of the hardware design, though.

  10. #10
    Join Date
    Oct 2008
    Posts
    3,183

    Default

    Quote Originally Posted by libv View Post
    That some hardware designs depend on software optimization is a problem of the hardware design, though.
    There's a pretty big difference between hardware that costs $5 to produce, and hardware that costs $500 to produce.
