nVidia likely to remain accelerated video king?
Originally posted by Rahux View Post
Hey guys - in the next few months I'll be looking into a new PC, and my main priority is being able to watch 1080p movies and do a bit of gaming (but video is more important).
I invested in a nice large monitor as my new place will not have a TV. As far as I can see, ATI cards are performing much better overall, but nVidia is the only one offering accelerated video (both local and Flash).
Is this likely to remain the case in the next 6 months? Also, I hear that the current crop of nVidia cards are very noisy - would I be getting nice video playback at the expense of being able to hear the movies I watch? What's the outlook on Blu-ray in Linux too? Is the card likely to make a difference there?
I guess the question is really whether waiting will be worthwhile.
I did a test just messing around: installed Fedora 13 on an old, slightly overclocked T-Bird CPU running at 1.2 GHz. The poor thing was always getting beat up, always working at or near max duty cycle, always sucking 50 watts of juice. Now a P4 running at 2.8 GHz barely sweats. Just updating was a chore on the T-Bird because it would hammer out the dependency code with Python for hours. You can't watch HD video with it because the OS is using 15 to 35 percent of the CPU anyway. Now the 2.8 GHz P4 can run two 1080p videos easily with the OS only needing 3 to 5 percent of its attention. It can suck up 80 watts of juice, but it's hard to put it into that kind of duty cycle, so it ends up being more efficient than the T-Bird.
Now compare that to a modern AMD e-series CPU built on the low-power process. You can devolt and declock an Athlon 250e down to a 25-watt max thermal envelope, and it can show you two 1080p videos and a 720p video all at once before it starts getting in over its head. You can match that with a 5450 GPU on efficiency, but anything else is going to lose, and lose hard.
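In case anyone wants to try the same declocking experiment, cpufrequtils can cap the clock from userspace; something along these lines should do it (the 1.6 GHz step is just an illustration - the valid steps depend on your CPU, and undervolting still has to happen in the BIOS or with a vendor tool):

    # cap core 0 at 1.6 GHz under the ondemand governor
    cpufreq-set -c 0 -u 1600MHz -g ondemand
    # confirm the new frequency limits
    cpufreq-info -c 0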
So in conclusion: I don't care about GPU-accelerated video drivers, and nobody can really make me care. The open-source nVidia (nouveau) drivers are almost keeping up with the ATI drivers just because Ben Skeggs is that good. But it won't last. I'm not going to tell you you'll end up in any sort of untenable, horrible nightmare situation if you use nVidia. But you'll make things easier on yourself if you go ATI.
Sorry if it offends the HTPC crowd, but really it's misguided. They should only implement the latest PureVideo and ATI decoding, and they shouldn't even work on it till they've run out of things to do. Even better, wait till both of them finish up these half-hearted implementations and do it once, when it's done. But the people who contribute most to Linux will likely futz with it just so they can sell stupid tablets or netbooks with ARM CPUs.
Comment
If you can show me one, just one, example of CPU decoding looking anywhere near as good visually as hardware GPU decoding, or better, I'll agree.
You can't. All of the CPU-driven decoders have to make compromises in decoding and post-process quality to match GPU decode speeds. It's all about parallelisation, and off-the-shelf CPUs just aren't built that way. GPUs are.
The problem for you is that you're missing the point of GPU decoding when it comes to a decent lappy or desktop. It is about clock cycles, but it's also about VISUAL quality, and no CPU-driven decoder ever matches what a proper GPU decoder can do in terms of frame smoothing or colour grading.
Comment
Originally posted by Hephasteus View Post
Now the 2.8 GHz P4 can run two 1080p videos easily with the OS only needing 3 to 5 percent of its attention.
For 1080p video at 24-30 frames per second:
- CPU: 2.8 GHz or faster Intel Pentium 4, or equivalent AMD processor
- RAM: at least 1 GB
- GPU: video card with 256 MB or more of memory
And that decoder sacrifices a lot compared to a hardware-accelerated solution. And that is for a single stream.
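One way to sanity-check claims like this is MPlayer's built-in benchmark mode, which times pure software decode with audio and display taken out of the loop (the file name here is just a placeholder):

    # decode-only timing: no sound, no video output
    mplayer -benchmark -nosound -vo null your-1080p-clip.mkv
    # the BENCHMARKs line at the end shows how much time went to the video codec (VC)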
Comment
AFAIK most of the visual differences would be related to filtering & post-processing, and that tends to be done on shaders anyways when dedicated GPU hardware is used for decoding. What makes this discussion complicated is that there aren't many (any?) implementations where just the shader-unfriendly part of decode is done on the CPU and the remaining work (filtering, post-processing etc.) is done on the GPU.
In other words, there tend to be two commonly used code paths:
- everything on the GPU (decode using dedicated HW, filter & post-proc using shaders, presentation (CSC, scaling etc.) using shaders)
- mostly on the CPU (decode using CPU, filter & post-proc using CPU, presentation (CSC, scaling etc.) using shaders via Xv or GL)
Once the devs have time to look at pushing shader processing a *bit* further up the pipe than it is today, I think this will become a lot more obvious. Or I'll learn something new. Whatever.
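To make those two paths concrete, here's roughly how they map onto MPlayer command lines on an NVIDIA setup with VDPAU (the file name is a placeholder, and vdpauinfo will tell you which codecs the dedicated decoder block actually handles):

    # path 1: decode on dedicated HW, present via VDPAU
    mplayer -vo vdpau -vc ffh264vdpau movie-1080p.mkv
    # path 2: decode on the CPU, present via Xv (CSC/scaling on the GPU)
    mplayer -vo xv movie-1080p.mkv
    # query what the dedicated decoder supports
    vdpauinfo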
Comment
Originally posted by bridgman View Post
AFAIK most of the visual differences would be related to filtering & post-processing, and that tends to be done on shaders anyways when dedicated GPU hardware is used for decoding. What makes this discussion complicated is that there aren't many (any?) implementations where just the shader-unfriendly part of decode is done on the CPU and the remaining work (filtering, post-processing etc.) is done on the GPU.
In other words, there tend to be two commonly used code paths:
- everything on the GPU (decode using dedicated HW, filter & post-proc using shaders, presentation (CSC, scaling etc.) using shaders)
- mostly on the CPU (decode using CPU, filter & post-proc using CPU, presentation (CSC, scaling etc.) using shaders via Xv or GL)
Once the devs have time to look at pushing shader processing a *bit* further up the pipe than it is today, I think this will become a lot more obvious. Or I'll learn something new. Whatever.
Yes, it's true that shaders can and have in many cases been used by certain implementations to do the scaling, grading, etc., but in the nVidia and Sigma solutions there's a lot more happening in the processing of the decoded video BEFORE it reaches the output and post-process stages.
HD video in its variant forms carries flags which set the keyframe count, the motion style (panning, fast motion, talking head etc.) and the image quality profile(s) in effect, and so on. A full hardware decoder which properly implements these flags and their management delivers image quality, smoothness of animation and quality of transitions that are overall just better than any CPU decoder can manage.
Shaders will be able to do much of that, but only on higher midrange or high-end GPUs (or Larrabee, if it ever reaches the world), due to the complexity of the many parallel processes going on at the same time to enable smooth video playback. As it's much cheaper at this stage to just keep adding the dedicated decoder (the same hardware/firmware gets cheaper with each revision) and not eat into the shaders themselves, the idea of putting that stuff entirely into a software implementation via shader code seems more appropriate for experimenters in the future rather than right now.
Comment
Some of that is true and some isn't.
Originally posted by IsawSparks View Post
Yes, it's true that shaders can and have in many cases been used by certain implementations to do the scaling, grading, etc.
Originally posted by IsawSparks View Post
...but in the nVidia and Sigma solutions there's a lot more happening in the processing of the decoded video BEFORE it reaches the output and post-process stages.
Originally posted by IsawSparks View Post
HD video in its variant forms carries flags which set the keyframe count, the motion style (panning, fast motion, talking head etc.) and the image quality profile(s) in effect, and so on. A full hardware decoder which properly implements these flags and their management delivers image quality, smoothness of animation and quality of transitions that are overall just better than any CPU decoder can manage.
Originally posted by IsawSparks View Post
Shaders will be able to do much of that, but only on higher midrange or high-end GPUs (or Larrabee, if it ever reaches the world), due to the complexity of the many parallel processes going on at the same time to enable smooth video playback. As it's much cheaper at this stage to just keep adding the dedicated decoder (the same hardware/firmware gets cheaper with each revision) and not eat into the shaders themselves, the idea of putting that stuff entirely into a software implementation via shader code seems more appropriate for experimenters in the future rather than right now.
Comment
Originally posted by Hephasteus View Post
Ya. I don't want Linux decoding my video with the video card. My CPU can do it much easier and much more efficiently. This whole march toward GPU decoding started at a time when CPUs were 85 to 99 percent duty cycle beasts of burden, always getting stomped on no matter what they tried. Now they do this stuff without breaking a sweat.
Originally posted by Hephasteus View Post
Now the 2.8 GHz P4 can run two 1080p videos easily with the OS only needing 3 to 5 percent of its attention.
Originally posted by Hephasteus View Post
So in conclusion: I don't care about GPU-accelerated video drivers, and nobody can really make me care.
Originally posted by Hephasteus View Post
The open-source nVidia (nouveau) drivers are almost keeping up with the ATI drivers just because Ben Skeggs is that good.
Comment
I remember being amused by a Phoronix benchmark showing that, for a particular test, it was slightly more power-efficient to decode on the CPU and let the nVidia GPU sleep, rather than the other way round. Probably the reason is that the power management of modern CPUs is much, much more evolved than the buggy nVidia PowerMizer crap.
Agreed - it probably shouldn't be that way...
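If anyone wants to reproduce that kind of comparison, powertop (run as root, ideally on battery so it can estimate draw) gives a rough picture while a clip loops; something like this, with the file name as a placeholder:

    # terminal 1: loop the clip with CPU decode
    mplayer -loop 0 -vo xv clip-1080p.mkv
    # terminal 2: watch wakeups and the power estimate
    powertop
    # then repeat with -vo vdpau -vc ffh264vdpau and compare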
Comment
Originally posted by Hephasteus View Post
Ya. I don't want Linux decoding my video with the video card. My CPU can do it much easier and much more efficiently. This whole march toward GPU decoding started at a time when CPUs were 85 to 99 percent duty cycle beasts of burden, always getting stomped on no matter what they tried. Now they do this stuff without breaking a sweat.
I did a test just messing around: installed Fedora 13 on an old, slightly overclocked T-Bird CPU running at 1.2 GHz. The poor thing was always getting beat up, always working at or near max duty cycle, always sucking 50 watts of juice. Now a P4 running at 2.8 GHz barely sweats. Just updating was a chore on the T-Bird because it would hammer out the dependency code with Python for hours. You can't watch HD video with it because the OS is using 15 to 35 percent of the CPU anyway. Now the 2.8 GHz P4 can run two 1080p videos easily with the OS only needing 3 to 5 percent of its attention. It can suck up 80 watts of juice, but it's hard to put it into that kind of duty cycle, so it ends up being more efficient than the T-Bird.
If you doubt what I am saying, fire up MPlayer and play back the video file with the "-vo xv" option, which eschews all GPU video decode acceleration. I guarantee you'll see well over 3-5% utilization.
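A quick way to check it yourself (file name is just a placeholder):

    # in one terminal: software decode, Xv presentation
    mplayer -vo xv clip-1080p.mkv
    # in another: refresh every second and watch mplayer's %CPU column
    top -d 1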
Comment