XvBA on Evergreen

  • bridgman
    started a topic XvBA on Evergreen

    There was an active discussion about this going on in an NVidia forum; figured we should move it back to the ATI/AMD forum so everyone can find it easily.

    Current status in a nutshell: our internal testing (which focuses on the NDA version of the API) indicates that XvBA is working on Evergreen, while end users (going through gbeauche's VA-API adapter) report that Evergreen support does not work. The problem seems to be related to a specific function call used when running XvBA decode with GL output, which gbeauche apparently identified some time ago.

    I don't know if the problem has been reproduced by the multimedia driver team; I will try to chase that down next week.
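
    For context, the GL output path on the VA-API side of the adapter looks roughly like the sketch below. This is my own illustration against the public va_glx.h interface, for orientation only, not a reproducer; the suspect call sits in the NDA XvBA layer underneath.

        #include <GL/gl.h>
        #include <va/va_glx.h>

        /* Minimal sketch: hand a decoded VA surface to a GL texture via
         * the public va_glx.h interface. Assumes a current GLX context,
         * a VADisplay obtained from vaGetDisplayGLX() and initialized
         * with vaInitialize(), and an already-decoded surface. */
        static int copy_to_gl_texture(VADisplay va_dpy, VASurfaceID surface,
                                      GLuint texture)
        {
            void *gl_surface;
            if (vaCreateSurfaceGLX(va_dpy, GL_TEXTURE_2D, texture,
                                   &gl_surface) != VA_STATUS_SUCCESS)
                return -1;
            /* The Evergreen failure reportedly occurs in the XvBA calls
             * that back this copy inside the driver. */
            VAStatus status = vaCopySurfaceGLX(va_dpy, gl_surface, surface,
                                               VA_FRAME_PICTURE);
            vaDestroySurfaceGLX(va_dpy, gl_surface);
            return status == VA_STATUS_SUCCESS ? 0 : -1;
        }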

  • gbeauche
    replied
    Originally posted by Kano View Post
    It cannot share that much code with Windows, because 10-7 can decode H.264 L5.1 with VLC now. I never saw that on Linux.
    The code is shared; unlike Intel or NVIDIA, they clearly don't have the resources to write another core implementation from scratch for Linux. The fact that you don't see the expected results on Linux doesn't mean the capability is not there... Linux support is "just glue code". That glue code can:

    1. not be available on Linux
    2. be available on Linux, and:
       2.1. working
            2.1.1. by default
            2.1.2. through a specific configuration
            2.1.3. partially (missing cases in the implementation)
       2.2. not working (bugs)

    You have no means to be certain which case applies here, though; i.e. you just can't conclude (1) simply because you don't see the expected results in VLC... Your conclusion, as stated in your quote, is inherently flawed. You can't establish an absolute truth from a single example; that is illogical, whether we are talking about XvBA or not.

    Look at other features of the driver: there are many symbols that suggest X or Y, but we can't be certain which of (1), (2.1.1), (2.1.2), (2.1.3) or (2.2) applies to each of them simply by testing a random application.

  • curious.developer
    replied
    Originally posted by Dandel View Post
    The main thing to watch out for is the CPU and the video card... Most current processors (like the Athlon II X4) have enough power to stay at the lowest speed stepping and still decode 1080p H.264 video with very few dropped frames. Anyway, it would be interesting to see 1080p and 720p videos encoded with H.264 (or VC-1) decoded by a Radeon HD 5000 series graphics card.
    I've bought the Athlon II X4 600e (still waiting for delivery). Sucky price/performance ratio, but it promises low power consumption and low temperatures. We will see...

    That said, I want to put some dev tools (like Mercurial or Redmine) on the HTPC, so it would be nice to offload as much as possible to the GPU.

  • gbeauche
    replied
    Originally posted by bridgman View Post
    Can you explain what you mean by "oversized" here - I might be oversimplifying your question.

    IIRC the "rectangle=1" setting uses the GL_ARB_texture_rectangle extension rather than GL_ARB_texture_non_power_of_two but I haven't heard about any problems with that option up to the hardware limits of the GPU (8K x 8K on 6xx/7xx I think). I don't remember trying rectangle=2 though...
    OK, I thought mplayer also tried to use GL_ARB_texture_non_power_of_two by default, so the fact that you suggested trying GL_ARB_texture_rectangle made me think the former probably creates a texture larger than the user-requested size internally, under certain conditions.

    Hmm, to put it more simply: does an NPOT texture always match the user-requested size internally? Sure, the user will always get his values back through glGetTexLevelParameteriv(), but does that mean the driver always allocates a texture of that exact size internally, even for small ones (<128x128)?
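
    For reference, here is the query in question as a minimal sketch (my own illustration, assuming a current GL context): the spec guarantees that the requested dimensions come back, but it says nothing about what the driver allocates internally.

        #include <GL/gl.h>

        /* Create a small NPOT texture and read its level-0 size back. */
        static void query_npot_size(void)
        {
            GLuint tex;
            GLint w = 0, h = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 100, 60, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, NULL);
            glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH,  &w);
            glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &h);
            /* w == 100 and h == 60 per the spec, regardless of the size
             * the driver actually allocated internally. */
        }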

  • Kano
    replied
    @gbeauche

    It cannot share that much code with Windows, because 10-7 can decode H.264 L5.1 with VLC now. I never saw that on Linux.

  • Dandel
    replied
    Originally posted by curious.developer View Post
    Hi

    To Mr. Bridgman: is there any real chance to get XvBA on Linux? How much? 100%? Probably yes? Dunno? Probably not? Any timeframe?

    ATM I'm building an HTPC solution and I've bought an HD 5450. It looks like a grave mistake. My fault anyway.

    Now, I could buy some G210 card. Or I could wait a little bit. Should I? ;-)
    The main thing to watch out for is the CPU and the video card... Most current processors (like the Athlon II X4) have enough power to stay at the lowest speed stepping and still decode 1080p H.264 video with very few dropped frames. Anyway, it would be interesting to see 1080p and 720p videos encoded with H.264 (or VC-1) decoded by a Radeon HD 5000 series graphics card.

    Anyway, with that said, I would call my desktop fully ready as an HTPC solution. Of course, I would also like to note that if you're using XBMC, make sure it's newer than revision 30566, because that specifically fixes the bugs with the texture matrices and GLSL.

  • bridgman
    replied
    Can you explain what you mean by "oversized" here - I might be oversimplifying your question.

    IIRC the "rectangle=1" setting uses the GL_ARB_texture_rectangle extension rather than GL_ARB_texture_non_power_of_two but I haven't heard about any problems with that option up to the hardware limits of the GPU (8K x 8K on 6xx/7xx I think). I don't remember trying rectangle=2 though...

  • gbeauche
    replied
    Originally posted by bridgman View Post
    You might want to play with gl sub-options as well. Maybe try something like mplayer -vo gl:yuv=2:rectangle=1...

    The first option should reduce CPU load by doing YUV-RGB conversion on the GPU rather than the CPU, second option should reduce memory usage by using non-power-of-two textures. Default for both of the sub-options is 0.
    Do you mean that even with GL_ARB_texture_non_power_of_two, GL_TEXTURE_2D textures may be oversized to power-of-two dimensions? Do you know the actual threshold? One of the XvBA bugs tends to confirm something like a threshold below which small NPOT textures are still allocated with 2^n x 2^m dimensions.
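
    To illustrate what is at stake (my own sketch, not driver code): without usable NPOT support, a driver has to round each texture dimension up to the next power of two, so a 1280x720 video frame lands in a 2048x1024 allocation, about 2.3x the memory. That is the waste rectangle=1 avoids.

        /* Round v up to the next power of two, as a driver without
         * (working) NPOT support would have to do per dimension. */
        static unsigned next_pow2(unsigned v)
        {
            unsigned p = 1;
            while (p < v)
                p <<= 1;
            return p;
        }

        /* next_pow2(1280) == 2048 and next_pow2(720) == 1024, so
         * (2048*1024) / (1280*720) is roughly 2.28x the memory. */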

  • gbeauche
    replied
    Originally posted by Kano View Post
    Tearfree, Evergreen support; do we expect H.264 L5.1 from PCOM too?


    BTW, PCOM is just about presentation, i.e. displaying the decoded frames onscreen; it is the 2D alternative to OpenGL. The rest is common, and a subset is even shared verbatim with Windows... So core decoding capabilities are the same whether PCOM, OpenGL or something else is used to present the result. That is what I would assume, at least.
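
    A hypothetical sketch of that split (the names are mine, illustrative only; the real XvBA interface is under NDA): one shared decode core, with the presentation backend as a pluggable output stage.

        /* Hypothetical illustration, not the actual XvBA interface. */
        struct decoded_frame;              /* produced by the shared core */

        struct present_backend {
            const char *name;              /* "PCOM" (2D) or "OpenGL" */
            int (*present)(void *ctx, const struct decoded_frame *frame);
        };

        /* The decode core runs the same regardless of which backend
         * shows the result, so decoding capability does not depend on
         * whether PCOM or OpenGL presents the frames. */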

  • Kano
    replied
    Tearfree, Evergreen support; do we expect H.264 L5.1 from PCOM too?
