Bottom line for me: after experiencing AMD "drivers" I'm going with nVidia on the desktop build, but I'll likely be waiting for Sandy Bridge now, as it's not that far off and comes along with a new socket, LGA2011. (Another good point about waiting: hardware support for a major new API usually isn't so hot in the first iteration of the hardware, and I expect that by the time Sandy Bridge, new motherboards, etc. are out, both nVidia and ATI will be on their second generation of DX11/OGL4-supporting GPUs. Since the latest cards are fairly expensive for decent specs, I'll definitely be going with nVidia unless they manage another FX 5XXX cluster----.)
nVidia & OSS: they don't need it. Their drivers usually work VERY well, and any major problems are usually quickly fixed, or at least that has been my experience in the past.
Ubuntu 10.04: I'll be upgrading, but this time I plan to do a fresh install, so I need to back up some data first, and then I'll probably still wait until June to do it. (Let others guinea-pig it for a while; maybe for 10.10 I'll try the betas, IFF we don't need another early-release driver for it...)
45 watts is bad hardware! nVidia just doesn't have any chip-making skill!
Is the GTX 480 with its 320 W TDP a good chip? Most of the time it's only 5% faster than a 5870 with a 220 W TDP!
100 watts more for only 5% more... WOW, nVidia, you are the best ever!
Buying nVidia is like killing trees, or maybe kids!
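For what it's worth, here's the perf-per-watt arithmetic behind that rant as a small Python sketch; the performance and TDP figures are the ones quoted above (the poster's claims, not official specs):
[code]
# Quick sanity check of the numbers quoted above (the poster's figures,
# not official TDPs): ~5% more performance for ~100 W more power.
cards = {"GTX 480": (1.05, 320), "HD 5870": (1.00, 220)}  # (relative perf, watts)

base_ppw = cards["HD 5870"][0] / cards["HD 5870"][1]
for name, (perf, watts) in cards.items():
    ppw = perf / watts
    print(f"{name}: {ppw:.5f} perf/W ({ppw / base_ppw:.0%} of the HD 5870)")
[/code]
With those numbers the GTX 480 lands at roughly 72% of the HD 5870's performance per watt, which is the gap being complained about.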
Actually, considering that nVidia's GTX 4XX series GPUs are MUCH larger than ATI's 5XXX GPUs, nVidia MUST have the better hardware to get away with such a slight increase in power usage, if power usage is any criterion for judging a GPU's architecture.
IMO they're both fairly equivalent in hardware architecture terms; however, nVidia just beats the pants off AMD as far as drivers and extras go. Sure, Eyefinity's a nice gimmick, but one that I'm unlikely to ever use...
Sorry, one last part to the other post: I was wondering how temps are in Linux. My current card, as I said, a 7950 GT, idles at about 47 to 50 degrees, give or take. I think it's around the same in Windows; I haven't really compared lately. I don't know if this is typical for this card, but it seems acceptable.
My 7600 GT would get up to 55C under FULL load.
I'm wondering how an HD 4850 or HD 4770 would do for temps in Linux, and whether there's a big difference depending on whether you use the FOSS driver or the FGLRX driver. So: 1) what are the temps? 2) what are the temps using a) the FOSS driver; b) the binary fglrx driver?
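Since part of the question is how to even compare temps under each driver, here's a minimal Python sketch that dumps whatever the kernel's hwmon interface reports. Assumptions: a kernel where the FOSS radeon driver exposes its sensor through hwmon (exact paths and sensor names vary by kernel version); fglrx doesn't use hwmon, so under fglrx you'd run `aticonfig --odgt` instead.
[code]
# Minimal sketch for question 2a): read GPU temperatures via the kernel's
# hwmon sysfs interface, which the FOSS radeon driver exposes.
# Paths vary by kernel version (older kernels nest sensors under device/);
# fglrx does not use hwmon -- with fglrx, use `aticonfig --odgt` instead.
import glob

paths = (glob.glob("/sys/class/hwmon/hwmon*/temp*_input")
         + glob.glob("/sys/class/hwmon/hwmon*/device/temp*_input"))

for path in paths:
    with open(path) as f:
        millideg = int(f.read().strip())   # hwmon reports millidegrees C
    print(f"{path}: {millideg / 1000:.1f} C")
[/code]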
My Mobility 4850 pegs at 80C under FULL load, and generally sits in the 65 to low-70s C range under most apps, which matches desktop 4850 temps as seen in various reviews very well.
If something is not enabled by default, there are usually three possible explanations:
- there are known problems which are sufficiently serious to keep it disabled for now
- there are no known serious problems but the devs don't feel there is sufficient user/test coverage to enable by default yet
- the feature is ready to go but nobody got around to enabling it by default (the devs normally have the WIP features enabled on their systems anyways so it's not actually obvious when a default needs to change)
Hopefully next time, when more cards go legacy, they'll work out of the box instead of 1+ years after the last fglrx driver for them.
Damn... When that happens, there will be one less thing for you to farm post count with by pointless whining. Tough...
But fear not! There might still be hope in case video acceleration still isn't good enough for your taste! Actually, I've written a letter for bridgman to use (licensed under Creative Commons, so I won't sue anyone):
PRIORITY : CRITICAL
To whom it may concern,
I hereby implore the fglrx development team to stop focusing its bug-tracking efforts on any topic that might impact our paying business customers, and to instead focus on Xv video output (because, you know, it's the only thing that counts) for the sake of one guy on the Phoronix forums who almost prides himself on not having shelled out the money for the only ATI card he owns.
Sarcasm aside, and right before I stop being polite: how much thought do you put into your posts here? Do you SERIOUSLY think you help push things forward with that attitude?
The first apps directly show GLSL errors. SPECviewperf is just an extra, but game speed should be compared with all features set to max. When a high-end card with the OSS driver is slower than a low-end card with fglrx, something is wrong.
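To make "directly show GLSL errors" concrete, here's a hedged PyOpenGL sketch of what such apps do: compile a shader and dump whatever log the driver returns. This assumes PyOpenGL and freeglut are installed, and the trivial shader is a stand-in, not taken from any of the apps discussed above.
[code]
# Hypothetical sketch: compile a GLSL shader and print the driver's info
# log -- this is where the GLSL errors mentioned above surface.
from OpenGL.GL import (GL_COMPILE_STATUS, GL_FRAGMENT_SHADER, glCompileShader,
                       glCreateShader, glGetShaderInfoLog, glGetShaderiv,
                       glShaderSource)
from OpenGL.GLUT import GLUT_RGB, glutCreateWindow, glutInit, glutInitDisplayMode

glutInit()                        # GL calls need a live context first
glutInitDisplayMode(GLUT_RGB)
glutCreateWindow(b"glsl-check")

src = "#version 120\nvoid main() { gl_FragColor = vec4(1.0); }\n"
shader = glCreateShader(GL_FRAGMENT_SHADER)
glShaderSource(shader, src)
glCompileShader(shader)

print("compile status:", glGetShaderiv(shader, GL_COMPILE_STATUS))
print("driver log:", glGetShaderInfoLog(shader) or b"(empty)")
[/code]
Run it under each driver: a failing compile status plus a non-empty log is exactly the kind of error those apps report.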