Phoronix: A Fresh Look At The Nouveau Gallium3D Performance
Last week we provided a fresh look at the AMD Radeon Gallium3D performance using the latest development code for the Linux 3.0 kernel and the Mesa 7.11 library. Today we are looking at the performance of the Nouveau Gallium3D driver, which is reverse-engineered to support NVIDIA GeForce graphics processors.
How about testing GL video performance?
Michael, have you considered adding a video playback benchmark to that set of tests? OpenGL video playback is yet another area where the OSS drivers could use improvement, so it might make a good test subject.
I think a 15% drop under Linux 3.0 is not that moderate. Does anyone know the reason?
I agree with the above post, it would be good to have such information.
Originally Posted by dargllun
Video playback is pretty subjective, and it's not like OpenGL video playback accelerates anything other than putting the image on the screen, not the actual decoding.
Though I like to see these results, I'm quite sad and disappointed that Nvidia doesn't open at all their specs.
I would also love to help. As a software engineering student I might be able to contribute, but the documentation for getting involved at http://nouveau.freedesktop.org/wiki/ , or on any of the pages linked from it, is not that helpful. (I know you guys don't have that much time to write it, so please don't take this as criticism.)
I don't have the time to hang around on IRC and so on; I really need offline documentation that I can read and learn from. Maybe more devs are like me. I hope you guys listen to this.
But anyway, thanks for the good work. /me likes not being forced to reconfigure my resolution when switching from the internal to an external screen.
There is actually some written documentation in envytools. But you can't entirely avoid IRC, since the docs are incomplete and you won't grasp many of the GPU concepts by only reading them. Also, nobody is documenting source work, as that would likely double or triple the effort needed to get something working. So either you have to understand what the code is doing by looking at it, or you have to ask on IRC to get used to it.
Originally Posted by dl.zerocool
It appears to be not that simple, as the MythTV folks tried to explain to me the other day. Besides, putting the image on the screen is an activity which may or may not be accelerated; just think of scaling, DMA transfers, and colorspace conversion.
Originally Posted by cb88
I agree this issue is subjective, but then what better way than an objective benchmark: take a set of standardized MPEG-2, MPEG-4, etc. videos, render them with a standardized player (e.g. mplayer -vo gl), and then compare frame-rate accuracy and CPU load. That would make a hell of a lot of sense to me.
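A rough sketch of how such a comparison could be scored, assuming you have already captured the frames-rendered count, wall-clock time, and CPU time from a run (for instance from mplayer's -benchmark output). The function and field names here are hypothetical, not part of any existing tool:

```python
def score_playback(frames_rendered, nominal_fps, wall_seconds, cpu_seconds):
    """Score one playback run (hypothetical helper, not a real tool).

    fps_accuracy: achieved frame rate relative to the video's nominal
                  rate (1.0 means no dropped frames).
    cpu_load:     CPU time as a fraction of wall-clock time
                  (0.25 means a quarter of one core was busy).
    """
    achieved_fps = frames_rendered / wall_seconds
    fps_accuracy = achieved_fps / nominal_fps
    cpu_load = cpu_seconds / wall_seconds
    return {"fps_accuracy": fps_accuracy, "cpu_load": cpu_load}
```

Running the same clip once with Xv and once with -vo gl, then comparing the two dictionaries, would give exactly the objective numbers proposed above.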
There are two parts to video acceleration (well, you could break it down further, but for the sake of argument let's say two): decode and rendering.
Decode takes the compressed video stream and converts it into (usually YUV) frames for display. iDCT and motion compensation (MC) are the major stages in decode. XvMC and VA-API implement decode.
Rendering takes the decoded YUV frames and displays them. Scaling and colorspace conversion are the major stages in rendering. Xv and most OpenGL video implementations implement rendering.
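To make the rendering stage concrete, here is a minimal per-pixel sketch of the colorspace-conversion step, using the standard BT.601 studio-range coefficients. This is plain illustrative Python, not driver code; a real Xv or OpenGL renderer does this on the GPU, typically in a fragment shader, which is exactly the CPU work that rendering acceleration avoids:

```python
def yuv_to_rgb(y, u, v):
    """Convert one 8-bit BT.601 studio-range YUV sample to 8-bit RGB.

    Studio range: Y in [16, 235], U/V centered at 128.
    """
    c = y - 16
    d = u - 128
    e = v - 128

    def clamp(x):
        # Keep results in the valid 8-bit range.
        return max(0, min(255, int(round(x))))

    r = clamp(1.164 * c + 1.596 * e)
    g = clamp(1.164 * c - 0.392 * d - 0.813 * e)
    b = clamp(1.164 * c + 2.017 * d)
    return r, g, b
```

Multiply that by every pixel of every frame and it becomes clear why doing colorspace conversion (plus scaling) on the CPU instead of the GPU shows up as playback CPU load.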
Thank you very much for that explanation. As I said, I'm very busy; my exams are just over.
Originally Posted by Lynxeye
I would really like to help; I bet working on the Nouveau driver is a hell of an experience.
Sadly, it's as difficult as it is interesting to work on.
Anyway, I'll be following nouveau driver progress closely.
Right now that's all I can do, and I am really disappointed about it, I wish I could really do more. But I do really understand it's difficult for the devs to keep a documentation up to date and usable.
It's not their fault though; I really like NVIDIA. I did some awesome algorithms using CUDA. (Yes, CUDA is really powerful, much more so than OpenCL; it's not even comparable.)
I might be a dreamer, but hopefully one day NVIDIA will become open-source friendly.
On that day, things will be so different that the world will bond in respect. But well... that's not the top priority right now.