Benchmarks Of Nouveau's Gallium3D Driver
-
Originally posted by AdamW:
That's a known bug in the original F12 X.org, but it has been fixed in updates for ages. Do your updates and reboot, if you haven't already.
I'm not a gamer so I don't care too much about 3D. Nouveau is mostly feature-complete for my needs, except that the 8600GTS is too loud.
The only reason I use the nvidia blob is the dual-head support. I've tried the F12 live CD (to try out the nouveau goodness), which detects my monitors perfectly from EDID data, but the fan noise is too much. When the nouveau driver can downclock the card / reduce the fan speed, I'll ditch the blob forever! I can't wait.
-
Originally posted by sreyan:
This machine is running Lucid so it probably has the same X.org that F12 had /way/ back. Maybe I'll switch to F13 when it comes out!
I'm not a gamer so I don't care too much about 3D. Nouveau is mostly feature-complete for my needs, except that the 8600GTS is too loud.
The only reason I use the nvidia blob is the dual-head support. I've tried the F12 live CD (to try out the nouveau goodness), which detects my monitors perfectly from EDID data, but the fan noise is too much. When the nouveau driver can downclock the card / reduce the fan speed, I'll ditch the blob forever! I can't wait.
-
Originally posted by rohcQaH:
No, it's because nvidia spent lots of time and money on driver optimization and nouveau didn't. Nouveau can become faster, but not just by switching out an algorithm. It needs work, lots of it.
If only driver development were easy, everyone would rejoice.
Has the art of algorithm identification ever been implemented in software, in any commercial and/or GPL software?
-
Originally posted by sabriah:
Is there a way to automagically identify algorithm choices? It would be a nice addition to Valgrind (http://valgrind.org/), which, if I understand it correctly, mainly identifies memory usage, memory leaks, etc.
Has the art of algorithm identification ever been implemented in software, in any commercial and/or GPL software?
For compiled code you could always take the route of compiling to some IR and using heuristics to optimize better. Really advanced optimization plugins for LLVM would be great.
Both of these approaches require more information, though. Without an AST or a simple IR available during compilation, or the rich metadata in .NET's IL, it would be very tough to do. Doing something like this in Valgrind would be annoying to implement and would probably offer very little gain.
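On the "automagically identify algorithms" idea: a far more modest thing that actually is easy to implement is empirical growth-rate estimation, i.e. timing a routine at several input sizes and fitting the exponent of the time-vs-size curve. This is only a sketch of that idea, not a feature of Valgrind or LLVM:

```python
import math

def fit_exponent(sizes, times):
    """Least-squares slope of log(time) vs log(size).

    A slope near 1 suggests O(n), near 2 suggests O(n^2), and so on.
    This classifies the growth rate, not the algorithm itself.
    """
    xs = [math.log(n) for n in sizes]
    ys = [math.log(t) for t in times]
    k = len(xs)
    mx = sum(xs) / k
    my = sum(ys) / k
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic check: quadratic running times fit an exponent of exactly 2.
sizes = [100, 200, 400, 800]
quadratic = [n ** 2 for n in sizes]
print(round(fit_exponent(sizes, quadratic), 2))  # 2.0
```

In practice you would feed it wall-clock measurements, which are noisy, so the fitted exponent only narrows things down to a complexity class.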
-
Originally posted by Hephasteus:
Do it yourself. Use NiBiTor to set the default frequencies down a tad and then drop the voltage. Not every card has voltage modification, and some are technically hard to do through the BIOS, but some are easy. Then just nvflash it. Fan controls are a snap on many of them. Set it at 60 or 70 percent speed, and if you can drop 0.1 volts it should run cooler. You lose a few gpixels per second and a few gtexels per second from downclocking the core, but some cards will let you keep super-high 2D clocks and downclock 3D only.
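For the fan-speed part there is sometimes a software route that avoids reflashing entirely: on kernels where the driver exposes the card through the standard hwmon sysfs interface, the fan duty cycle can be set directly. A hedged sketch; the hwmon0/pwm1 paths below are assumptions that vary per machine (check /sys/class/hwmon/*/name first), and writing them needs root:

```python
import os
from pathlib import Path

# Assumed device path; not every driver exposes pwm control at all.
hwmon = Path("/sys/class/hwmon/hwmon0")
target = 255 * 60 // 100  # pwm1 takes 0-255, so ~60 percent duty is 153

enable = hwmon / "pwm1_enable"
if enable.exists() and os.access(enable, os.W_OK):
    enable.write_text("1")                    # 1 = manual control (hwmon convention)
    (hwmon / "pwm1").write_text(str(target))  # set the fan to ~60 percent
else:
    print(f"no writable pwm interface at {hwmon}")
```

This only works where the driver wires up fan control; on cards where it doesn't, the BIOS-flashing route described above is the remaining option.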
-
Originally posted by sreyan:
That's far too much effort. If I can't do it as easily as installing the blob with yum or apt, it's not going to happen.
But I guess the engineers know best, which is why they love to put hardware decoding in video cards that eat 25 or more watts, when my CPU, with a 100 MHz declock and a 0.125 V devolt, can decode a 1080i video at 50 to 60 percent CPU usage on about 12 to 15 watts.
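The intuition here matches the usual first-order dynamic-power model, P proportional to C * V^2 * f: lowering voltage pays off quadratically, lowering clock linearly. A toy calculation; the 2.0 GHz / 1.20 V baseline is an assumption for illustration, since the post only gives the deltas:

```python
# Rough dynamic-power scaling: P ~ C * V^2 * f.
def relative_power(v_base, f_base, v_new, f_new):
    """Power at the new operating point relative to the baseline."""
    return (v_new / v_base) ** 2 * (f_new / f_base)

# Assumed 2.0 GHz @ 1.20 V baseline, dropped by 100 MHz and 0.125 V.
ratio = relative_power(1.20, 2000, 1.20 - 0.125, 2000 - 100)
print(f"{ratio:.0%}")  # 76%
```

Even these modest drops cut the dynamic power estimate by roughly a quarter, which is why undervolting is so much more effective than declocking alone.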
-
I see the benchmarks and I have no issue with them. The nVidia drivers are superior; the Nouveau drivers are nevertheless usable. But, and this is my gripe, the Phoronix graphs just don't show that, because they use the same arbitrary y-axes as ever.
The graphs in Phoronix have used the same incomprehensible y-scales for years, with tick marks based on multiples of 13, 17, 18, 20, 21, 25, or whatever fitted the standardized graph height, apparently in pixels. If you want to make the y-axes usable, please use regular intervals like 1, 2, 3, 4; 5, 10, 15, 20; or 20, 40, 60, etc. Or, as in this case, when the values differ greatly, use a log_2, log_10, or log_whatever scale so that we can see what the differences are!
As it stands, the Nouveau results are crammed at the bottom, near the 15, 25, or 35 fps lines. Those fps numbers are relevant, and they are playable. With a log scale one could interpret them readily, and a log_2 scale may be superior to log_10 in many more cases.
Please change the scales of the Phoronix graphs to something interpretable! This is not the first request along these lines here at Phoronix, and it is the second from me. Or at least explain why you value cosmetic consistency over accuracy; I just don't understand it.
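One concrete way to realize the log_2 suggestion: place tick marks at the powers of two that span the data range, so a ~20 fps nouveau result and a ~300 fps nvidia result both stay readable on one axis. A small sketch; the fps numbers are made up purely to illustrate the spread:

```python
import math

def log2_ticks(values):
    """Powers of two spanning the data range, usable as y-axis ticks."""
    lo = math.floor(math.log2(min(values)))
    hi = math.ceil(math.log2(max(values)))
    return [2 ** e for e in range(lo, hi + 1)]

# Hypothetical mixed nouveau/nvidia fps results.
fps = [22, 35, 180, 310]
print(log2_ticks(fps))  # [16, 32, 64, 128, 256, 512]
```

On such an axis, equal vertical distances mean equal performance ratios, which is exactly what a driver comparison is trying to show.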
-
Originally posted by Hephasteus:
It's kind of moot to bench this since Gallium doesn't have a working TTM manager yet, as far as I can tell. It's doing everything out of the memory-mapped framebuffer. Some news on TTM progress would be good; I can't find any.
So TTM is likely already working. But there are a lot of things to optimize, and I'd bet a lot of stuff still goes through softpipe software rendering because it isn't hooked up to hardware-accelerated routines yet.