Will AMD's XvBA Beat Out NVIDIA's VDPAU?


  • #21
    Originally posted by smitty3268 View Post
    What makes you think XvBA is any different? I doubt AMD is going to stop anyone from re-implementing the API in other drivers. It's the binary acceleration part that comes with their drivers that you can't simply copy. NVidia is the same: you can re-implement the API if you want, but you can't just copy the code out of their binary drivers. In other words, the Nouveau project is free to implement the VDPAU API by using shaders to accelerate it, but they can't use the dedicated hardware block on the cards, because that's only available with the NVidia binary drivers.
    Sure, you can use shaders if you wish with VDPAU, or you can use a chip's dedicated hardware; either one can be done. So why the hell do we need yet another API that hasn't even seen the light of day, or one that has hung in utter limbo for the last few years? VDPAU is the most widely accepted API at this time, with the most development happening on it. Why duplicate efforts when it is not needed?

    (BTW: IIRC, ATI's solution is shader-based (at least partially) as it is.)

    Originally posted by smitty3268 View Post
    What are you talking about? Of course I've heard of ION, and of course VDPAU works with them - I'm just saying that the performance numbers given in this test are more impressive for the Intel parts when you consider that the CPU usage is about the same with them on slower CPUs. Maybe the NVidia numbers would be similar; we really can't tell (and you can't compare to another review, because they use completely different clips/testing methods).
    When guys are running things like the killa sampla on Morgan cores and Katmai with less than 5% CPU usage, you can guarantee that an Atom-based system wouldn't hiccup. Even then, more CPU is probably spent accessing the disk, processing the audio, and running other background tasks. One thing the tests also did not reveal is whether the CPUs stayed in their lowest power state, or whether any frames were dropped. Without a timeline graph of the results, or at least a frame count and a played-frame count, the results are very incomplete.



    • #22
      Originally posted by deanjo View Post
      Sure, you can use shaders if you wish with VDPAU, or you can use a chip's dedicated hardware; either one can be done. So why the hell do we need yet another API that hasn't even seen the light of day, or one that has hung in utter limbo for the last few years? VDPAU is the most widely accepted API at this time, with the most development happening on it. Why duplicate efforts when it is not needed?

      (BTW: IIRC, ATI's solution is shader-based (at least partially) as it is.)



      When guys are running things like the killa sampla on Morgan cores and Katmai with less than 5% CPU usage, you can guarantee that an Atom-based system wouldn't hiccup. Even then, more CPU is probably spent accessing the disk, processing the audio, and running other background tasks. One thing the tests also did not reveal is whether the CPUs stayed in their lowest power state, or whether any frames were dropped. Without a timeline graph of the results, or at least a frame count and a played-frame count, the results are very incomplete.
      Well, I think we finally agree. All the tests really showed is that it's possible to get good acceleration on all 3 cards, with no way of comparing them, no mention of quality, etc.

      I think some of what you say about XvBA could equally have been said of VDPAU when it came out - why didn't they just use the existing VA-API stuff? But it seems silly to go from two APIs up to three when it isn't needed.

      I think there's a little confusion going on here about what VDPAU actually is. It's an API, which can have any backend implementation you want to give it. So NVidia does it by accessing their custom hardware, Via does it by accessing theirs, and Nouveau can do it by using shaders, or by reverse-engineering the binary drivers. Likewise, AMD could do it by using shaders (open source) or UVD2 (binary). The XvBA API would be the same type of situation for anyone who wanted to implement it: dedicated hardware for the binary drivers, and shaders or reverse-engineered code for the open-source ones.
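
      To make that concrete, here's a minimal sketch (from memory, untested) of a VDPAU client; compile with -lvdpau -lX11. The point is that nothing here names a vendor:

      #include <stdio.h>
      #include <X11/Xlib.h>
      #include <vdpau/vdpau_x11.h>

      int main(void)
      {
          Display *dpy = XOpenDisplay(NULL);
          VdpDevice dev;
          VdpGetProcAddress *get_proc;

          /* libvdpau loads the backend (e.g. libvdpau_nvidia.so) behind this call */
          vdp_device_create_x11(dpy, DefaultScreen(dpy), &dev, &get_proc);

          /* every other entry point is fetched at runtime, so the client never
             links against a vendor implementation directly */
          VdpDecoderQueryCapabilities *query;
          get_proc(dev, VDP_FUNC_ID_DECODER_QUERY_CAPABILITIES, (void **)&query);

          VdpBool ok;
          uint32_t level, mbs, w, h;
          query(dev, VDP_DECODER_PROFILE_H264_HIGH, &ok, &level, &mbs, &w, &h);
          printf("H.264 High decode: %s (max %ux%u)\n", ok ? "yes" : "no", w, h);
          return 0;
      }

      A shader-backed Nouveau implementation could answer that same query, and the player would never know the difference.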

      Regarding the AMD hardware using shaders for acceleration: that's how UVD1 cards did their video acceleration, but UVD2 cards have the same type of custom decoding hardware that NVidia uses. The original R600 (HD2900?) card was UVD1, but I think all the other R600 cards, and definitely all the R700 cards, have dedicated hardware in UVD2.



      • #23
        Originally posted by smitty3268 View Post
        What makes you think XvBA is any different? I doubt AMD is going to stop anyone from re-implementing the API in other drivers. It's the binary acceleration part that comes with their drivers that you can't simply copy. NVidia is the same: you can re-implement the API if you want, but you can't just copy the code out of their binary drivers. In other words, the Nouveau project is free to implement the VDPAU API by using shaders to accelerate it, but they can't use the dedicated hardware block on the cards, because that's only available with the NVidia binary drivers.
        Arg. If before, I were a sad panda, I am now a morose raccoon.

        XvBA can't be implemented by us, because we have no idea what it looks like! Also, the only reason we don't know how to use the video blocks is that nobody has stepped up and reverse-engineered them; there are no magical keys that can lock us out of the HW.

        And no, I don't think anybody on the open-source side wants to do any more split video decoding backends. Let's just do everything on Gallium and be happy with it.

        ~ C.



        • #24
          Originally posted by LavosPhoenix View Post
          Not to mention I don't expect this to ever be supported in a reasonable time, considering that fglrx can't even support the newest kernel in a reasonable timeframe.

          Maybe AMD needs to take a hint from this economy and hold off from releasing hardware before they have decent driver support for Windows and Linux. But of course they won't, as the driver is simply free software, while the hardware is where they rape the customers (for money), when the two products should be mutually exclusive.
          If anything, it's the poor software support they are raping their customers with.

          Their hardware is, overall, much better priced than NVidia's.



          • #25
            Will AMD's XvBA Beat Out NVIDIA's VDPAU?

            Might as well ask: will tea ever taste better than coffee? Except one can buy tea now and make it....

            When it comes to vid-decode and playback, nVidia has done the hard yards of training and has this race won. Period.

            DAMMIT/ATI-AMD have dropped the ball and scored an own goal. The trouble is, I spent $500.00 on DAMMIT hardware that I am deprived of the use of in Linux. Suffice to say, one is not amused! ;-)

            "...he admitted to using an xvba-video package, which is currently not publicly available." This whole issue is playing out like a farce, literally at the expense of those who purchased DAMMIT hardware.

            A counter-intuitive way to play in an economic recession, handing the game to your main business competitor?

            For the record, Greeks do prefer coffee. :-)

            GreekGeek.



            • #26
              The benchmarks are really funny. Did anybody notice that he used a "Mobility Radeon HD 4870 - 550 MHz, 1 GB" together with a "Phenom 8450"? BTW, providing some random numbers does not show anything; the actual wrapper has to be freely available.



              • #27
                Originally posted by smitty3268 View Post
                Regarding the AMD hardware using shaders for acceleration: that's how UVD1 cards did their video acceleration, but UVD2 cards have the same type of custom decoding hardware that NVidia uses. The original R600 (HD2900?) card was UVD1, but I think all the other R600 cards, and definitely all the R700 cards, have dedicated hardware in UVD2.
                Close. The original R600 (HD2900) did not have UVD, so any decode acceleration has to be done with shaders or the legacy IDCT block carried over from earlier GPUs. The rest of the 6xx line (HD2xxx, HD3xxx) has UVD1 and a separate IDCT block for MPEG2.

                The 7xx line has UVD2 but no separate IDCT block; MPEG2 decode is done by UVD2.

                The rs780 is between UVD1 and UVD2.

                Originally posted by MostAwesomeDude View Post
                Arg. If before, I were a sad panda, I am now a morose raccoon.
                AFAIK a sad panda is generally considered to be less happy than a morose raccoon. I'm glad you're feeling better.
                Last edited by bridgman; 07 July 2009, 02:35 AM.



                • #28
                  I don't know if this is the correct place to post such things, but I don't have a better choice at this time. If anyone can point me to a better place, I'll be happy.

                  To be honest, I hate how ATI develops and incorporates new technologies and features into its drivers and then steadily shoots itself in the foot.

                  I didn't read all the messages, but I've seen the "nVidia's head start" argument. Unfortunately, ATI had it first, literally years ago, in the Radeon 8500 era. It was called VideoShaders and VideoSoap. Did anyone hear of it? Unlikely, because nobody except ATI's tech demos used it. It was a CPU-independent video acceleration and post-processing pipeline, and it worked very well.

                  I never had an 8500, but I had a 9600XT, which was a fantastic, cool-running graphics monster and had these features built in too. It was never marketed with VideoShaders and VideoSoap, but it had them. The 8500's and 9500's VideoShaders tech demos worked exceptionally well and smoothly. Some programs were able to render 720p videos with it, but nobody, literally nobody, from ATI advertised these features, even after PureVideo came to life. ATI had it first but was always too shy to say so. As a result, VideoShaders grew outdated and died without leaving any mark on video processing history.

                  ATI's drivers before the big cleanup (the one before the latest one) had this built into the Xv pipeline, and it was like having an upscaling video processor onboard. With the Xv re-implementation it vanished, and my X1650's video playback didn't come back for some months (after that, my PSU blew up and I switched to nVidia in a rage).

                  Currently, as an nVidia user who doesn't feel at home, I eagerly await the day that ATI rises from its ashes so I can buy a card with better sharpness and video detail. nVidia incorporated VDPAU, which works well enough but cannot clean up videos or upscale them well. I cannot watch any DVD on my 22" screen with enjoyment.

                  Today I read that we have XvBA, and have had it since Q4 '08 (I may have missed the first news; it was a hard Q4 '08 for me). It's not official, it's not beta, it's not public; it's just an internal chunk of code that developers play with in their free time.

                  Let me say what I think about its future. This feature will not be picked up, because it'll be too late by the time it's finished. Since it'll be too late, ATI will not talk about it. Since it won't be mentioned, its API may never be published, or will end up in some dark corner of the internet. As a result, it won't be tested, fixed, improved, or publicized, and so many people won't buy an ATI card, because of its Linux drivers and their lack of features.

                  Please don't make this feature obsolete on arrival. Show your true potential, guys! I really feel sad when I think that ATI cannot deliver.

                  P.S.: I have been an ATI user since the Rage64 era. With my X1650 burning up my power supply because ATI thought I didn't need power saving in Linux (shutting down the parts of the processor I'm not using at the moment), I switched to an nVidia 8800GTS, and I'm happy but not content, and I don't feel at home.
                  Last edited by Silent Storm; 07 July 2009, 04:02 AM.



                  • #29
                    I'm confused about where XvMC fits into all this. Isn't it already possible to configure mplayer for XvMC output using XvBA? I managed to set this up myself, but I can't get the XvMC output to work. Some people have got it working, though. So what's the difference?
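
                    For reference, the invocation I've been trying is roughly this (assuming an mplayer built with --enable-xvmc; XvMC only covers MPEG-1/2, hence the MPEG-2 clip):

                        mplayer -vo xvmc -vc ffmpeg12mc sample.mpg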



                    • #30
                      We can't compare APIs from benchmarks; an API is a specification. What matters is the actual implementation (driver, HW). I used what I have around me: an old 8600 GT and a GTX 280M. Both only do partial VC-1 decoding, but people won't care about VC-1 anyway.

                      If I were to classify the APIs by features and/or openness, it would be in this order: VA API, VDPAU, XvBA. VA API comes first because it also handles video encode acceleration, which interests many people nowadays, along with more codecs. XvBA comes last only because it's not public, though it has interesting features. VDPAU is very well documented, though.

                      If I were to classify the APIs by their implementations, it would be in this order: VA API, then VDPAU|XvBA. The latter two sit at the same level because each only supports its own vendor's HW, whereas you can address all HW through VA API. There is now even an Open Source driver for Gen4 Intel GPUs (e.g. i965), though it only does MPEG-2 VLD at this time. Also note that the Poulsbo video driver may be Open Source'd by Q4 2009 (when Moorestown is out?). What other vendor is doing that?
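
                      A minimal sketch (from memory, untested) of why that is: the client never names the hardware; libVA loads whatever vendor driver matches the display. Link with -lva (plus -lva-x11 where it's split out):

                      #include <stdio.h>
                      #include <stdlib.h>
                      #include <X11/Xlib.h>
                      #include <va/va.h>
                      #include <va/va_x11.h>

                      int main(void)
                      {
                          Display *dpy = XOpenDisplay(NULL);
                          VADisplay va = vaGetDisplay(dpy);

                          int major, minor;
                          vaInitialize(va, &major, &minor);  /* loads the vendor's libVA driver */

                          int n = vaMaxNumProfiles(va);
                          VAProfile *profiles = malloc(n * sizeof(*profiles));
                          vaQueryConfigProfiles(va, profiles, &n);  /* same call on any HW */

                          printf("libVA %d.%d: %d profiles supported\n", major, minor, n);

                          free(profiles);
                          vaTerminate(va);
                          return 0;
                      }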

                      If I were to classify the APIs by video player usage, it would be in this order: VDPAU, VA API, XvBA. BTW, if you know how to handle VDPAU through FFmpeg, you will know how to handle VA API through FFmpeg; it works the same way. The reason VA API is not more widely used is that drivers were missing. Now they exist, and people are starting to use it, e.g. VLC.
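
                      A rough sketch of what I mean, assuming the hwaccel-style get_format interface (the va_ctx setup is elided and the VDPAU case is only named in a comment; an illustration, not a drop-in implementation):

                      #include <libavcodec/avcodec.h>
                      #include <libavcodec/vaapi.h>

                      static struct vaapi_context va_ctx;  /* filled in from vaInitialize()/vaCreateContext() */

                      /* get_format callback: the decoder offers pixel formats, we pick the HW one */
                      static enum PixelFormat pick_hw_format(struct AVCodecContext *avctx,
                                                             const enum PixelFormat *fmt)
                      {
                          const enum PixelFormat *p;
                          for (p = fmt; *p != PIX_FMT_NONE; p++) {
                              if (*p == PIX_FMT_VAAPI_VLD) {  /* VDPAU: same idea, different constant */
                                  avctx->hwaccel_context = &va_ctx;
                                  return *p;
                              }
                          }
                          return fmt[0];  /* no HW format offered: fall back to software decode */
                      }

                      /* hook it up before avcodec_open(): avctx->get_format = pick_hw_format; */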

                      To summarize, we can have VA API working on the following platforms: AMD, NVIDIA, VIA Chrome, and Intel, with either Intel's own technologies (G45) or ones borrowed from Imagination Technologies (US15W). That's 5 possible implementations, either directly or indirectly through adaptors.

                      BTW, if someone could send me a VIA Chrome card, I could write some compat code to support their older libVA drivers, which are actually based on DXVA. That way, Linux distributions could trivially support all 5 kinds of HW through a single libVA library.

