A NVIDIA VDPAU Back-End For Intel's VA-API

  • Nechushtan
    replied
    So does this mean I'll be able to play back 1080p H.264 using the Intel 4500M GPU in Linux in the near future, or should I just stick with NVIDIA GPUs?

  • Kano
    replied
    Well, unofficial specs for XvBA do not really help much toward direct support. The wrapper is not a bad idea, though it comes a bit late, since VDPAU implementations are already done and getting better, so this wrapper is primarily useful for fglrx. Using VDPAU it is already possible to have OSD and subtitles displayed (with mplayer + xine-lib). I guess a lot of the work goes into fine-tuning, so if the Intel implementation behaves even a tiny bit differently from the wrapper, things could end up worse than with direct support. Did anybody try it with ATI?

  • bash
    replied
    Originally posted by korpenkraxar View Post
    Aha, OK, I see. My bad. So what is the difference from the user's perspective, provided that the back-end implementation works?
    AFAIK it would mean that, with the Gallium3D/Mesa-based drivers and the open-source Intel and ATI drivers supporting VA-API directly, plus a sort of wrapper/layer/translator for fglrx and nvidia, you would only need to implement VA-API support in your player to get GPU-accelerated decoding on all (major) cards; see the sketch below.

    Maybe someone who has a fancy tag like AMD Linux Guy or X.org Dev could confirm whether this might be a/the possible outcome.
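
    As a rough illustration of that single-API idea, here is a minimal libva bring-up sketch. The calls come from the public libva headers (va.h, va_x11.h); the build line in the comment is an assumption, and none of this is code from the article or the thread. The player only ever touches VA-API, and which back-end answers (the native Intel driver or a VDPAU/XvBA wrapper) is decided by libva, optionally overridden via the LIBVA_DRIVER_NAME environment variable:

    /* Minimal VA-API bring-up sketch: the player only talks to libva.
     * Whether the calls land in the Intel driver or in a VDPAU/XvBA
     * wrapper back-end is decided by libva at vaInitialize() time
     * (overridable with the LIBVA_DRIVER_NAME environment variable).
     * Assumed build line: gcc va_sketch.c -lva -lva-x11 -lX11 */
    #include <stdio.h>
    #include <stdlib.h>
    #include <X11/Xlib.h>
    #include <va/va.h>
    #include <va/va_x11.h>

    int main(void)
    {
        Display *x11 = XOpenDisplay(NULL);
        if (!x11) {
            fprintf(stderr, "cannot open X display\n");
            return EXIT_FAILURE;
        }

        VADisplay va = vaGetDisplay(x11);
        int major = 0, minor = 0;
        VAStatus st = vaInitialize(va, &major, &minor);
        if (st != VA_STATUS_SUCCESS) {
            fprintf(stderr, "vaInitialize failed: %s\n", vaErrorStr(st));
            return EXIT_FAILURE;
        }

        /* The vendor string reveals which back-end actually answered. */
        printf("VA-API %d.%d, back-end: %s\n", major, minor,
               vaQueryVendorString(va));

        /* Ask which codec profiles this back-end can decode. */
        int num_profiles = vaMaxNumProfiles(va);
        VAProfile *profiles = malloc(num_profiles * sizeof(*profiles));
        if (profiles && vaQueryConfigProfiles(va, profiles, &num_profiles) == VA_STATUS_SUCCESS)
            printf("%d profiles advertised\n", num_profiles);

        free(profiles);
        vaTerminate(va);
        XCloseDisplay(x11);
        return EXIT_SUCCESS;
    }

    Run against the native Intel back-end and against the wrapper, the only visible difference should be the vendor string it prints.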

  • curaga
    replied
    More player support.

  • korpenkraxar
    replied
    Originally posted by Zhick View Post
    You got that totally wrong. ATI (fglrx) only provides XvBA, NVIDIA (nvidia) only provides VDPAU, and the FOSS drivers will (at some point) provide only VA-API.
    Aha, OK, I see. My bad. So what is the difference from the user's perspective, provided that the back-end implementation works?

  • Zhick
    replied
    Originally posted by korpenkraxar View Post
    Do I understand correctly that NVIDIA is now actively developing and providing support for all three HD video acceleration interfaces (VDPAU, VA-API and XvBA?), even their competitors', whereas the fglrx team is mainly farting around trying to fix one more bug than they introduce in each release?
    You got that totally wrong. ATI (fglrx) only provides XvBA, NVIDIA (nvidia) only provides VDPAU, and the FOSS drivers will (at some point) provide only VA-API.

    This is really just, as bridgman said, a developer (who probably isn't involved with any of these) who wrote, or will write, a back-end that translates VA-API calls into VDPAU/XvBA calls.
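
    Conceptually, such a translation back-end just forwards each VA-API request to the matching VDPAU (or XvBA) call. The sketch below is purely hypothetical: wrapper_context_t and wrapper_create_surface are made-up names for illustration only; the VDPAU types and the VdpVideoSurfaceCreate entry point are from the real vdpau.h, and a real back-end would of course have to cover the whole libva driver interface, not a single call.

    /* Purely hypothetical illustration of the translation idea: one
     * VA-API-side request implemented on top of VDPAU.  The struct and
     * function names are invented; VdpVideoSurfaceCreate is the real
     * VDPAU surface-creation entry point (normally obtained at runtime
     * through VdpGetProcAddress). */
    #include <va/va.h>
    #include <vdpau/vdpau.h>

    typedef struct {
        VdpDevice              vdp_device;          /* VDPAU device handle           */
        VdpVideoSurfaceCreate *vdp_surface_create;  /* fetched via VdpGetProcAddress */
    } wrapper_context_t;

    /* Hypothetical handler for a VA-API "create video surface" request. */
    VAStatus wrapper_create_surface(wrapper_context_t *ctx,
                                    unsigned width, unsigned height,
                                    VdpVideoSurface *out_surface)
    {
        /* Map the VA-API request onto a 4:2:0 VDPAU video surface and
         * translate the status code back into VA-API terms. */
        VdpStatus st = ctx->vdp_surface_create(ctx->vdp_device,
                                               VDP_CHROMA_TYPE_420,
                                               width, height,
                                               out_surface);
        return (st == VDP_STATUS_OK) ? VA_STATUS_SUCCESS
                                     : VA_STATUS_ERROR_ALLOCATION_FAILED;
    }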

  • bridgman
    replied
    Don't think so. This is a third party developer layering VA-API over other video APIs so their higher level code only needs to support a single API. Nothing to do with NVidia.

  • korpenkraxar
    replied
    Do I understand correctly that NVIDIA is now actively developing and providing support for all three HD video acceleration interfaces (VDPAU, VA-API and XvBA?), even their competitors', whereas the fglrx team is mainly farting around trying to fix one more bug than they introduce in each release?

    bridgman's support and involvement around here and the potential of the free radeon drivers are really the only reasons a sane person should consider getting an ATI card at all. Or perhaps one needs to be a little insane to go ATI at this point, by the looks of it. I dunno anymore.

  • unix_epoch
    replied
    128 MB isn't really that much when dealing with all of the reference frames you need to maintain for some H.264 streams. For example, an unrestricted (i.e. not Level 4.1) 1920x1088 stream with 15 reference frames would use over 60 MB just for the reference surfaces (and that's assuming 4:2:2 YUV surfaces; if they are stored in RGB, it's over 90 MB).
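
    A quick back-of-the-envelope check of those figures, using decimal megabytes and the per-pixel sizes implied above (2 bytes/pixel for 4:2:2 YUV, 3 bytes/pixel for RGB); the numbers, not the code, are what the post is claiming:

    /* Back-of-the-envelope check of the memory figures quoted above:
     * 15 reference surfaces for a 1920x1088 stream, stored either as
     * 4:2:2 YUV (2 bytes/pixel) or as RGB (3 bytes/pixel). */
    #include <stdio.h>

    int main(void)
    {
        const double mb         = 1000.0 * 1000.0;   /* decimal megabyte   */
        const double pixels     = 1920.0 * 1088.0;   /* per surface        */
        const int    ref_frames = 15;                /* unrestricted H.264 */

        double yuv422 = pixels * 2.0 * ref_frames / mb;  /* ~62.7 MB */
        double rgb    = pixels * 3.0 * ref_frames / mb;  /* ~94.0 MB */

        printf("4:2:2 YUV reference surfaces: %.1f MB\n", yuv422);
        printf("RGB reference surfaces:       %.1f MB\n", rgb);
        return 0;
    }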

    What I'm wondering is whether VDPAU and/or VA-API support accelerating discrete steps in the video decoding process (rather than full bitstream processing), so that they could be adapted to accelerate Theora and VC-1 on non-VC-1-capable GPUs (plus, even my 3 GHz dual core can't play a 1080p Theora stream, while it can play most 1080p H.264 streams).
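
    For what it's worth, VA-API's entrypoint enumeration does distinguish full-bitstream decoding (VAEntrypointVLD) from discrete pipeline stages such as VAEntrypointIDCT and VAEntrypointMoComp; whether any given driver or wrapper actually advertises the partial entrypoints is a separate question. A sketch of probing this, assuming a VADisplay that has already been initialized as in the earlier bring-up example:

    /* Sketch: ask a VA-API back-end which entrypoints it offers for a
     * given profile.  VAEntrypointVLD means full-bitstream decoding,
     * while VAEntrypointIDCT / VAEntrypointMoComp expose discrete
     * pipeline stages.  Assumes an already-initialized VADisplay. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <va/va.h>

    void print_entrypoints(VADisplay va, VAProfile profile)
    {
        int num = vaMaxNumEntrypoints(va);
        VAEntrypoint *eps = malloc(num * sizeof(*eps));

        if (!eps || vaQueryConfigEntrypoints(va, profile, eps, &num) != VA_STATUS_SUCCESS) {
            free(eps);
            return;
        }

        for (int i = 0; i < num; i++) {
            switch (eps[i]) {
            case VAEntrypointVLD:    puts("full bitstream decode (VLD)");  break;
            case VAEntrypointIDCT:   puts("partial: inverse DCT");         break;
            case VAEntrypointMoComp: puts("partial: motion compensation"); break;
            default:                 printf("other entrypoint %d\n", eps[i]);
            }
        }
        free(eps);
    }

    Calling print_entrypoints(va, VAProfileH264High) with the display from the earlier sketch would show what the active back-end exposes for H.264; Theora has no VA-API profile (at least none that I know of), so it would still fall back to the CPU either way.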

  • Kano
    replied
    Maybe Compiz needs lots of VRAM at the same time, too.
