nVidia likely to remain accelerated video king?

  • #11
    Originally posted by Rahux View Post
    Thanks for the well considered responses. I'll be sure not to just jump at the cheapest version of any board now :P Perhaps there really IS a difference between 'Sparkle' and 'Gigabyte' =p

    I've managed to convince myself that I should wait till graduation (December) before splurging on a new PC so hopefully by then XvBA will be better supported and/or the reference heatsinks on the Nvidia have improved.

    Out of interest - are there any particular graphics chipmakers who have a tendency to produce quieter boards? When it came to graphics cards, I always looked at the base chipset rather than the manufacturer in the past.
    I think it all depends on what particular heatsink they decide to use on a particular board. Reference heatsinks tend to be noisy, aftermarket ones (e.g. Arctic Cooling, Zalman, etc.) tend to be better. Probably the best way to tell if a particular board is quiet or not is to look at reviews of that specific board.

    Comment


    • #12
      Originally posted by Rahux View Post
      Hey guys - in the next few months I'll be looking into a new PC and my main priority is being able to watch 1080p movies and do a bit of gaming (but video is more important).

      I invested in a nice large monitor as my new place will not have a TV. As far as I can see, ATI cards are performing much better overall but nVidia is the only one offering accelerated video (both local and flash).

      Is this likely to remain the case in the next 6 months? Also, I hear that the current crop of nVidia cards is very noisy - would I be getting nice video playback at the expense of being able to hear the movies I watch? What's the outlook on Blu-ray in Linux too? Is the card likely to make a difference there?

      I guess the question is really whether waiting will be worthwhile.
      Ya. I don't want Linux decoding my video with the video card. My CPU can do it much more easily and much more efficiently. This whole march toward GPU decoding started at a time when CPUs were 85 to 99 percent duty cycle beasts of burden, always getting stomped on no matter what they tried. Now they do this stuff without breaking a sweat.

      I did a test just messing around: I installed Fedora 13 on an old, slightly overclocked T-bird CPU running at 1.2 GHz. The poor thing was always getting beat up, always working at or near max duty cycle, always sucking 50 watts of juice. Now a P4 running at 2.8 GHz barely sweats. Just updating was a chore on the T-bird because it would hammer out the dependency code with Python for hours. You can't watch HD video with it because the OS is using 15 to 35 percent of the CPU anyway. Now the 2.8 GHz P4 can run two 1080p videos easily with the OS only needing 3 to 5 percent of its attention. It can suck up 80 watts of juice, but it's hard to put it into that kind of duty cycle, so it ends up being more efficient than the T-bird.

      Now compare that to a modern AMD e-series CPU built on the low-power process. You can devolt and declock an Athlon 250e down to a 25 watt max thermal envelope and it can show you two 1080p videos and a 720p video all at once before it starts getting in over its head. You can match that with a 5450 GPU on efficiency, but anything else is going to lose, and lose hard.
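
      (If anyone wants to try the declocking side of that, the cpufrequtils commands below are the usual route; the governor and frequency values are only illustrative, and the devolting part still has to be done in the BIOS or with a vendor tool.)

        # cap the clock via cpufrequtils (run as root; values are illustrative)
        cpufreq-set -c 0 -g ondemand
        cpufreq-set -c 0 --max 1800MHz
        # confirm the current policy and limits
        cpufreq-info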

      So, in conclusion: I don't care about GPU-accelerated video drivers and nobody can really make me care. The nvidia drivers are almost keeping up with the ATI drivers just because Ben Skeggs is that good. But it won't last. I'm not going to tell you you'll end up in any sort of untenable, horrible nightmare situation if you use nvidia. But you'll make things easier on yourself if you go ATI.

      Sorry if it offends the HTPC crowd, but really it's retarded. They should only implement the latest PureVideo and ATI decoding, and they shouldn't even work on it till they've run out of things to do. Even better, wait till both of them finish up these half-hearted implementations and do it once it's done. But the people who contribute most to Linux will likely futz with it just so they can sell stupid tablets or netbooks with ARM CPUs.

      Comment


      • #13
        If you can show me one, just one, example of CPU decoding looking anywhere near as good visually as, or better than, hardware GPU decoding, I'll agree.

        You can't. All of the CPU-driven decoders have to make compromises in decoding and post-process quality to match GPU decode speeds. It's all about parallelisation, and off-the-shelf CPUs just aren't built that way. GPUs are.

        The problem for you is that you're missing the point of GPU decoding when it comes to a decent lappy or desktop. It is about clock cycles, but it's also about VISUAL quality, and no CPU-driven decoder ever matches what a proper GPU decoder can do in terms of frame smoothing or colour grading.

        Comment


        • #14
          Originally posted by Hephasteus View Post
          Now the 2.8 GHz P4 can run two 1080p videos easily with the OS only needing 3 to 5 percent of its attention.
          I would love to see that, especially running at full res and with medium- to high-bitrate clips - especially when one of the most efficient software decoders, CoreAVC, recommends a minimum of:
          1080p video at 24-30 frames per second

          CPU - 2.8 GHz or faster Intel Pentium 4 or equivalent AMD processor
          RAM - At least 1GB of RAM
          GPU - 256MB or greater video card

          And that decoder sacrifices a lot compared to a hardware-accelerated solution. And that is for one stream.

          Comment


          • #15
            AFAIK most of the visual differences would be related to filtering & post processing, and that tends to be done on shaders anyways when dedicated GPU hardware is used for decoding. What makes this discussion complicated is that there aren't many (any ?) implementations where just the shader-unfriendly part of decode is done on CPU and the remaining (filtering, post processing etc..) is done on the GPU.

            In other words, there tend to be two commonly used code paths:

            - everything on the GPU (decode using dedicated HW, filter & post proc using shaders, presentation (CSC, scaling etc.) using shaders)

            - mostly on the CPU (decode using CPU, filter & post proc using CPU, presentation (CSC, scaling etc.) using shaders via Xv or GL)

            Once the devs have time to look at pushing shader processing a *bit* further up the pipe than it is today, I think this will become a lot more obvious. Or I'll learn something new. Whatever.
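
            For anyone who wants to see the difference between those two paths on their own box, the mplayer commands below roughly correspond to them. This is only a sketch for a typical NVIDIA/VDPAU setup; the -vc option assumes an H.264 stream and the file name is just a placeholder.

              # path 1: decode on the dedicated HW block, filtering/presentation on the GPU
              mplayer -vo vdpau -vc ffh264vdpau movie-1080p.mkv

              # path 2: decode on the CPU, presentation (CSC, scaling) on shaders via Xv
              mplayer -vo xv movie-1080p.mkv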

            Comment


            • #16
              Originally posted by bridgman View Post
              AFAIK most of the visual differences would be related to filtering & post processing, and that tends to be done on shaders anyways when dedicated GPU hardware is used for decoding. What makes this discussion complicated is that there aren't many (any ?) implementations where just the shader-unfriendly part of decode is done on CPU and the remaining (filtering, post processing etc..) is done on the GPU.

              In other words, there tend to be two commonly used code paths:

              - everything on the GPU (decode using dedicated HW, filter & post proc using shaders, presentation (CSC, scaling etc.) using shaders)

              - mostly on the CPU (decode using CPU, filter & post proc using CPU, presentation (CSC, scaling etc.) using shaders via Xv or GL)

              Once the devs have time to look at pushing shader processing a *bit* further up the pipe than it is today, I think this will become a lot more obvious. Or I'll learn something new. Whatever.
              Some of that is true and some isn't.

              Yes, it's true that shaders can be and have in many cases been used by certain implementations to do the scaling, grading, etc., but in the Nvidia and Sigma solutions there's a lot more happening in the processing of the decoded video BEFORE it reaches the output and post-process stages.

              HD video in its variant forms has flags which set the keyframe count, the motion style (panning, fast motion, talking head, etc.) and the image quality profile(s) in effect, so on a full hardware decoder which does a proper, full implementation of these flags and their management, the image quality, smoothness of animation and quality of the transitions are overall just better than anything a CPU decoder can do.

              Shaders will be able to do much of that, but only on higher-midrange or high-end GPUs (or Larrabee, if it ever reaches the world) due to the complexity of the many parallel processes going off at the same time to enable smooth video playback. As it's much cheaper at this stage to just keep adding the dedicated decoder (the same hardware/firmware gets cheaper with each revision) and not eat into the shaders themselves, the idea of putting that stuff entirely into a software implementation via shader code seems more appropriate for experimenters in the future rather than right now.

              Comment


              • #17
                Originally posted by IsawSparks View Post
                Some of that is true and some isn't.
                You're disagreeing with things I didn't say.

                Originally posted by IsawSparks View Post
                Yes, it's true that shaders can be and have in many cases been used by certain implementations to do the scaling, grading, etc.
                That is very common - anything using Xv or GL output these days is probably using shaders.

                Originally posted by IsawSparks View Post
                but in the Nvidia and Sigma solutions there's a lot more happening in the processing of the decoded video BEFORE it reaches the output and post-process stages.
                I'm not familiar with the Sigma implementation but you might be surprised how much shader processing happens in "GPU decoding" today. I'm not proposing something new there, just saying that right now there is relatively less shader processing happening with "GPU decode" implementations than with "CPU decode" implementations, and that the difference is what accounts for some of the IQ compromises you see.

                Originally posted by IsawSparks View Post
                HD video in its variant forms has flags which set the keyframe count, the motion style (panning, fast motion, talking head, etc.) and the image quality profile(s) in effect, so on a full hardware decoder which does a proper, full implementation of these flags and their management, the image quality, smoothness of animation and quality of the transitions are overall just better than anything a CPU decoder can do.
                I don't remember seeing the kind of "motion style" flags you mentioned in any of the specs I have looked at, but maybe I missed something. Can you provide some more information?

                Originally posted by IsawSparks View Post
                Shaders will be able to do much of that, but only on higher-midrange or high-end GPUs (or Larrabee, if it ever reaches the world) due to the complexity of the many parallel processes going off at the same time to enable smooth video playback. As it's much cheaper at this stage to just keep adding the dedicated decoder (the same hardware/firmware gets cheaper with each revision) and not eat into the shaders themselves, the idea of putting that stuff entirely into a software implementation via shader code seems more appropriate for experimenters in the future rather than right now.
                Yep, motion comp in particular can be demanding on the shaders (and that is *not* normally done on shaders today when using a dedicated HW decode block), but by the same token the amount of shader power in even the lowest end GPUs goes up every year. I do have doubts about whether a typical IGP would have enough shader power, for example, but I suspect that even the lower midrange parts would suffice.

                Comment


                • #18
                  Originally posted by Hephasteus View Post
                  Ya. I don't want Linux decoding my video with the video card. My CPU can do it much more easily and much more efficiently. This whole march toward GPU decoding started at a time when CPUs were 85 to 99 percent duty cycle beasts of burden, always getting stomped on no matter what they tried. Now they do this stuff without breaking a sweat.
                  Absolute BS. It doesn't do it "more efficiently" because it was never designed to do it in the first place. CPUs are designed for generic math; the video hardware on a GPU is designed -specifically- for video decoding. It's extremely efficient.

                  Originally posted by Hephasteus View Post
                  Now the 2.8 GHz P4 can run two 1080p videos easily with the OS only needing 3 to 5 percent of its attention.
                  Also BS. I had a P4 system, and it could barely do a single 720p stream. It would chop to all hell. Not to mention you're not really thoroughly testing this, because the bus bandwidth just isn't there.

                  Originally posted by Hephasteus View Post
                  So, in conclusion: I don't care about GPU-accelerated video drivers and nobody can really make me care.
                  Good for you. Stay in ATI land, you may never have to care about it!

                  Originally posted by Hephasteus View Post
                  The nvidia drivers are almost keeping up with the ATI drivers just because Ben Skeggs is that good.
                  Now where did you come up with this?

                  Comment


                  • #19
                    I remember being amused by some Phoronix benchmark showing that for a particular test it would be slightly more power-efficient to decode something on the CPU and have the nvidia GPU sleep, rather than the other way round. Probably the reason was that the power management of modern CPUs is much, much more evolved than the buggy nvidia PowerMizer crap.

                    Agreed, it probably shouldn't be that way..

                    Comment


                    • #20
                      Originally posted by Hephasteus View Post
                      Ya. I don't want Linux decoding my video with the video card. My CPU can do it much more easily and much more efficiently. This whole march toward GPU decoding started at a time when CPUs were 85 to 99 percent duty cycle beasts of burden, always getting stomped on no matter what they tried. Now they do this stuff without breaking a sweat.
                      Well then, you can always use Xvideo as a video output and let your CPU do all of the lifting if you want to ignore any built-in video decoding in the GPU. Or if you really want your GPU to do nothing, use the X11 video output instead of Xvideo so now your CPU has to do color conversion as well.
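
                      In practice that just means picking the mplayer video output driver; the commands below are only a sketch (the file name is a placeholder), and note that plain X11 output also needs -zoom if you want any scaling, all done in software:

                        # Xv output: the CPU decodes, the GPU still does colour conversion and scaling
                        mplayer -vo xv somefile-1080p.mkv

                        # plain X11 output: the CPU does the colour conversion and scaling too
                        mplayer -vo x11 -zoom somefile-1080p.mkv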

                      Originally posted by Hephasteus View Post
                      I did a test just messing around: I installed Fedora 13 on an old, slightly overclocked T-bird CPU running at 1.2 GHz. The poor thing was always getting beat up, always working at or near max duty cycle, always sucking 50 watts of juice. Now a P4 running at 2.8 GHz barely sweats. Just updating was a chore on the T-bird because it would hammer out the dependency code with Python for hours. You can't watch HD video with it because the OS is using 15 to 35 percent of the CPU anyway. Now the 2.8 GHz P4 can run two 1080p videos easily with the OS only needing 3 to 5 percent of its attention. It can suck up 80 watts of juice, but it's hard to put it into that kind of duty cycle, so it ends up being more efficient than the T-bird.
                      The reason your 2.8 GHz P4 is able to play back two 1080p videos with 3-5% CPU utilization is because you are using GPU hardware decode assist. There is absolutely no way you are not using it while getting those CPU usage figures on that CPU. A 2.8 GHz P4 will be running at a very high load playing back one 1080p video encoded in something easy to decode like MPEG-2, and will fail spectacularly at something in a tougher-to-decode codec like H.264. That 2.8 GHz P4 is probably only 2/3 to 3/4 again as fast as your 1.2 GHz T-bird, if that's any indication, so the T-bird should easily be able to play back the same video files if it were really that easy to do on the CPU.

                      If you doubt what I am saying, fire up mplayer and play back the video file with the "-vo xv" option, which eschews all GPU video decode acceleration. I guarantee you'll see well over 3-5% utilization.
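
                      A simple way to make that comparison is to play the same clip both ways and watch the process from another terminal; the -vc option here assumes an H.264 file on a VDPAU-capable NVIDIA card, so adjust for your setup:

                        # software decode, Xv used for presentation only
                        mplayer -vo xv clip-1080p.mkv

                        # hardware decode via VDPAU, for comparison
                        mplayer -vo vdpau -vc ffh264vdpau clip-1080p.mkv

                        # in another terminal, watch mplayer's CPU usage while each one plays
                        top -p $(pidof mplayer)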


                      Originally posted by Hephasteus View Post
                      Now compare that to a modern AMD e-series CPU built on the low-power process. You can devolt and declock an Athlon 250e down to a 25 watt max thermal envelope and it can show you two 1080p videos and a 720p video all at once before it starts getting in over its head. You can match that with a 5450 GPU on efficiency, but anything else is going to lose, and lose hard.

                      So, in conclusion: I don't care about GPU-accelerated video drivers and nobody can really make me care. The nvidia drivers are almost keeping up with the ATI drivers just because Ben Skeggs is that good. But it won't last. I'm not going to tell you you'll end up in any sort of untenable, horrible nightmare situation if you use nvidia. But you'll make things easier on yourself if you go ATI.
                      Not really; getting XvBA to work is currently a lot more work than getting VDPAU to work. Besides, it's not that expensive to replace a budget GPU with a newer one from a different maker if your drivers suddenly start to become horrible.
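
                      (A quick way to see what your stack actually exposes, assuming the usual diagnostic tools are installed: vdpauinfo for VDPAU on the NVIDIA side, vainfo for VA-API, which is the usual front end to XvBA on fglrx.)

                        # list the decode profiles each driver advertises
                        vdpauinfo
                        vainfo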

                      Originally posted by Hephasteus View Post
                      Sorry if it offends the HTPC crowd, but really it's retarded. They should only implement the latest PureVideo and ATI decoding, and they shouldn't even work on it till they've run out of things to do. Even better, wait till both of them finish up these half-hearted implementations and do it once it's done. But the people who contribute most to Linux will likely futz with it just so they can sell stupid tablets or netbooks with ARM CPUs.
                      Video decode acceleration is far from useless, considering your machine uses it to do the things you are bragging about. Also, video/media playback is a very common occurrence and it's nice not to need an absolutely top-of-the-line, modern CPU to do it. You can put a suitable $30 GPU in an old motherboard to make a nice HTPC and play back the latest HD video, whereas if you just had the CPU do it, you'd need an expensive new unit to get reasonable performance. I'd much rather have good video decode acceleration than the absolute utmost in 3D performance. Just about any decent midrange or better card today can produce absolutely astronomical framerates in Linux games. My far-from-the-latest GTS250 pushes a minimum of ~60 fps in ETQW at maximum detail and AA settings at 1920x1080; god only knows what a high-end card can do.

                      Comment
