nVidia likely to remain accelerated video king?


  • Originally posted by IsawSparks View Post
    The former, which no CPU HD decoder does in Linux (and only a few do in Windows with the right playback app, Elecard is one I think that does).
    Interesting - I guess the quarter-pel interpolation is probably one of the more expensive operations in the decoder chain, but it seemed like the kind of filter kernel that should fit reasonably well onto SSE instructions.

    It should be a pretty good fit on shaders anyway, particularly if some dev with a brain the size of a watermelon can figure out a good way to leverage the built-in texture filtering hardware as well as the shader core (e.g. the "bicubic with a helper texture" implementation in the radeon Xv path that I haven't gotten around to understanding yet).
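
For reference, the half-pel stage that quarter-pel interpolation builds on is a plain 6-tap FIR with coefficients (1, -5, 20, 20, -5, 1)/32 from the H.264 spec, and quarter-pel samples are just rounded averages of integer- and half-pel values. Below is a minimal scalar sketch (function names are illustrative, not taken from any real decoder); the inner loop is the part that should map naturally onto SSE's packed 16-bit arithmetic.

```c
/* Illustrative scalar sketch of H.264 half-/quarter-pel luma interpolation.
 * Coefficients (1, -5, 20, 20, -5, 1)/32 follow the H.264 spec; function and
 * variable names are made up for this example. */
#include <stdint.h>

static inline uint8_t clip_pixel(int v)
{
    return (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v);
}

/* Horizontal half-pel interpolation for one row of `width` output samples.
 * `src` needs 2 valid pixels to the left and 3 to the right of each position
 * (the standard 6-tap support). */
static void hpel_filter_row(const uint8_t *src, uint8_t *dst, int width)
{
    for (int x = 0; x < width; x++) {
        int sum = src[x - 2] - 5 * src[x - 1] + 20 * src[x]
                + 20 * src[x + 1] - 5 * src[x + 2] + src[x + 3];
        dst[x] = clip_pixel((sum + 16) >> 5);   /* round, then divide by 32 */
    }
}

/* A quarter-pel sample is then the rounded average of an integer-pel sample
 * and a neighbouring half-pel sample. */
static inline uint8_t qpel_avg(uint8_t full_pel, uint8_t half_pel)
{
    return (uint8_t)((full_pel + half_pel + 1) >> 1);
}
```

The per-sample independence is also what should make it a decent fit for shaders: each output only needs a small window of neighbouring texels.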



    • Originally posted by Panix View Post
      These discussions are going on quite long. A video card has VIDEO ACCELERATION AND DECODING CAPABILITIES. Why is everyone arguing about whether a CPU can do it or not? It's really stupid. People do things on their computer that require CPU PROCESSING POWER. They might want to do other things while the video is being decoded or whatever. The video card should be doing these chores. They are capable of it. If drivers don't allow this, then this should be worked on.
      Panix, you might be missing the point of the discussion. AFAIK it's not about whether hardware decode acceleration should be worked on, it's about how to trade off the impact of not having hardware decode acceleration *today* relative to other factors. Nobody is saying "gee, CPU decoding is good enough so there's no need to work on anything more", just "CPU decoding works for me today".



      • Originally posted by bridgman View Post
        Interesting - I guess the quarter-pel interpolation is probably one of the more expensive operations in the decoder chain, but it seemed like the kind of filter kernel that should fit reasonably well onto SSE instructions.

        It should be a pretty good fit on shaders anyway, particularly if some dev with a brain the size of a watermelon can figure out a good way to leverage the built-in texture filtering hardware as well as the shader core (e.g. the "bicubic with a helper texture" implementation in the radeon Xv path that I haven't gotten around to understanding yet).
        Well, that's more of a driver thing, I would think, much like NV's sharpen-texture feature. The problem with a CPU-driven decoder is that it needs to be able to hand off some level of interaction to the driver, and as yet HD content is being treated like any other codec in Linux media players.

        VDPAU at least does it all on the GPU, so you know the handoff to the GPU's profiling and processing is actually happening, and the visual results speak for themselves.
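
For anyone curious what that handoff looks like in practice, the public VDPAU client API boils down to creating a decoder and a video surface and then feeding raw bitstream buffers to the driver. Here is a condensed sketch of the call sequence only (error handling and the per-frame VdpPictureInfoH264 setup are omitted, so it is not a working player):

```c
/* Condensed sketch of a VDPAU decode handoff; error handling and the
 * per-frame picture-info setup are omitted, so this will not decode
 * anything as-is. Build with: gcc sketch.c -lvdpau -lX11 */
#include <vdpau/vdpau.h>
#include <vdpau/vdpau_x11.h>
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    VdpDevice device;
    VdpGetProcAddress *get_proc;
    vdp_device_create_x11(dpy, DefaultScreen(dpy), &device, &get_proc);

    /* VDPAU entry points are looked up by ID rather than linked directly. */
    VdpDecoderCreate *decoder_create;
    VdpVideoSurfaceCreate *surface_create;
    VdpDecoderRender *decoder_render;
    get_proc(device, VDP_FUNC_ID_DECODER_CREATE, (void **)&decoder_create);
    get_proc(device, VDP_FUNC_ID_VIDEO_SURFACE_CREATE, (void **)&surface_create);
    get_proc(device, VDP_FUNC_ID_DECODER_RENDER, (void **)&decoder_render);

    /* One decoder per stream; one 4:2:0 surface per decoded picture. */
    VdpDecoder decoder;
    VdpVideoSurface surface;
    decoder_create(device, VDP_DECODER_PROFILE_H264_HIGH, 1920, 1080, 16, &decoder);
    surface_create(device, VDP_CHROMA_TYPE_420, 1920, 1080, &surface);

    /* The player hands the raw slice data straight to the driver; from here
     * on the GPU side (shaders and/or dedicated hardware) does the work. */
    VdpPictureInfoH264 pic_info = {0};   /* would be filled from SPS/PPS/slice headers */
    VdpBitstreamBuffer buf = {
        .struct_version  = VDP_BITSTREAM_BUFFER_VERSION,
        .bitstream       = NULL,         /* NAL data for this picture would go here */
        .bitstream_bytes = 0,
    };
    decoder_render(decoder, surface, (VdpPictureInfo const *)&pic_info, 1, &buf);
    return 0;
}
```

Everything after VdpDecoderRender, including where the work actually lands, is up to the driver.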



        • Yep. What seems to be less widely understood, however, is how much of that GPU processing is actually done on shaders rather than on dedicated decode hardware.

          Once the bitstream is handed off to a proprietary driver it's hard to see where the work is really being done and, by extension, how much of that work could be done in an open source driver even without access to hw decode programming info.

          Right now we have "everything on the GPU" stacks and "almost everything on the CPU" stacks, and relatively few examples of anything in between. Q's point about r600 in another thread (r600 is supported by the latest proprietary drivers but does not have UVD) might turn out to be an interesting data point, not sure yet. There may be similar examples on NVidia hardware of similar vintage.



          • Originally posted by mirv View Post
            2. Yes those CPUs can. I have an AMD X2 3800+ perfectly capable of it all.
            Thank you for supporting me on that; it is getting quite frustrating, because I am watching my Avatar Blu-ray rip right now on my AMD Athlon X2 4400+. As for you, IsawSparks, it isn't that I think I shouldn't have GPU decoding (I do prefer it), but you are saying the quality is better than CPU decoding, which just isn't true. Personally, I rip my movies in H.264 now so I can use XvBA. My reason is different from yours, though: I just do it so I can multitask. When your CPU is at ~90% the computer isn't very useful. Here are my CPU usage results for playing my Avatar Blu-ray rip.

            AMD Athlon II X4 620 with XVBA: ~4% CPU
            AMD Athlon II X4 620 with gl: 60% CPU

            AMD Athlon X2 4400+ with XVBA: ~6% CPU
            AMD Athlon X2 4400+ with gl: 90% CPU

            The real point is, both are playable with and without GPU acceleration.
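
CPU figures like the ones above are easy to reproduce: sample the aggregate "cpu" line of /proc/stat twice and take the busy fraction over the interval, which is roughly what top does. A small illustrative sketch (not what any particular player reports internally):

```c
/* Illustrative overall-CPU-usage sampler based on /proc/stat (Linux). */
#include <stdio.h>
#include <unistd.h>

static int read_cpu(long long *busy, long long *total)
{
    long long user, nicev, sys, idle, iowait, irq, softirq;
    FILE *f = fopen("/proc/stat", "r");
    if (!f)
        return -1;
    fscanf(f, "cpu %lld %lld %lld %lld %lld %lld %lld",
           &user, &nicev, &sys, &idle, &iowait, &irq, &softirq);
    fclose(f);
    *busy  = user + nicev + sys + irq + softirq;
    *total = *busy + idle + iowait;
    return 0;
}

int main(void)
{
    long long b0, t0, b1, t1;
    for (;;) {
        if (read_cpu(&b0, &t0) < 0)
            return 1;
        sleep(1);                       /* one-second sampling window */
        read_cpu(&b1, &t1);
        printf("CPU: %.1f%%\n", 100.0 * (b1 - b0) / (double)(t1 - t0));
    }
}
```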



            • Originally posted by bridgman View Post
              Panix, you might be missing the point of the discussion. AFAIK it's not about whether hardware decode acceleration should be worked on, it's about how to trade off the impact of not having hardware decode acceleration *today* relative to other factors. Nobody is saying "gee, CPU decoding is good enough so there's no need to work on anything more", just "CPU decoding works for me today".
              This perfectly explains how I feel.



              • Hmm... When playing the Avatar Blu-ray rip, the GPU load stays at 22% constantly, no matter how much motion and change there is in the scene. I got this using aticonfig --odgc. I also tested in Sauerbraten: in a complex scene it ran at 84 FPS without Avatar playing through XvBA and 60 FPS with it playing. The GPU unquestionably plays video in a more efficient manner.
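
The GPU-side figure can be logged the same way over time: a trivial wrapper can poll the aticonfig --odgc output quoted above once a second while the video plays and echo the load line. A hypothetical little helper along those lines:

```c
/* Hypothetical poller that shells out to `aticonfig --odgc` (fglrx only)
 * once per second and echoes any output line mentioning the load. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char line[256];
    for (;;) {
        FILE *p = popen("aticonfig --odgc", "r");
        if (!p)
            return 1;
        while (fgets(line, sizeof line, p)) {
            if (strstr(line, "load") || strstr(line, "Load"))
                fputs(line, stdout);    /* e.g. the GPU load percentage */
        }
        pclose(p);
        sleep(1);                       /* sample once per second */
    }
}
```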



                • Originally posted by LinuxID10T View Post
                  Thank you for supporting me on that; it is getting quite frustrating, because I am watching my Avatar Blu-ray rip right now on my AMD Athlon X2 4400+. As for you, IsawSparks, it isn't that I think I shouldn't have GPU decoding (I do prefer it), but you are saying the quality is better than CPU decoding, which just isn't true. Personally, I rip my movies in H.264 now so I can use XvBA. My reason is different from yours, though: I just do it so I can multitask. When your CPU is at ~90% the computer isn't very useful. Here are my CPU usage results for playing my Avatar Blu-ray rip.

                  AMD Athlon II X4 620 with XVBA: ~4% CPU
                  AMD Athlon II X4 620 with gl: 60% CPU

                  AMD Athlon X2 4400+ with XVBA: ~6% CPU
                  AMD Athlon X2 4400+ with gl: 90% CPU

                  The real point is, both are playable with and without GPU acceleration.
                  Read my discussion with Bridgman: even he agrees there's a difference, and he and I both know why; it is definitely related to image processing (post-processing). Your poor-quality rips, which are recompressed to a lower rate (and which I doubt you've made yourself), probably don't implement the correct profiling to show you the difference, or you can't see the difference. Either way, it's there.



                  • Originally posted by IsawSparks View Post
                    Read my discussion with Bridgman: even he agrees there's a difference, and he and I both know why; it is definitely related to image processing (post-processing). Your poor-quality rips, which are recompressed to a lower rate (and which I doubt you've made yourself), probably don't implement the correct profiling to show you the difference, or you can't see the difference. Either way, it's there.
                    Poor quality rips... THAT IS FUNNY!!! They aren't recompressed. They are the same size as the data on the Blu-ray. There is no loss of quality, just the full 40 Mbit AVC stream, sorry.



                    • Originally posted by IsawSparks View Post
                      Moreover, those CPUs can't manage multiple streams at the same time. That's a requirement for Blu-ray playback.
                      Actually, neither ATI nor NVIDIA can do dual HD stream decoding. On the ATI side, they can only guarantee 1 HD (H.264) stream + 1 SD (MPEG-2) stream; more could be supported, but the real-time constraints may not be respected. Intel Ironlake can do 2 HD streams, though.

