AMD Radeon RX 6600 Linux Performance
-
For Linux desktop computers, it's a much easier choice to make. Nvidia cards are more expensive than AMD's and have the edge in performance, but AMD guarantees superior compatibility and a choice of reliable drivers, whether open source or proprietary. The same is true for Intel, although Intel's GPUs are slower.
-
Originally posted by Michael:
It's shown in the system table on the 2nd page.
It's the 6GB version of the card, for anyone else who cares.
-
At the highest settings at 1080p + SMAA, I am actually getting a 172 FPS average with an RTX 3070 in Shadow of the Tomb Raider.
-
Originally posted by TemplarGR:
I am not buying Bridgman's reply, sorry. Video games require relatively constant max power from the graphics card. Sure, there are variations in framerates and load, but they don't make that much of a difference. The way I see it, boost clocks are meant to cheat in benchmarks.
Oh, and as for prewarming the GPU or running games back to back, that argument doesn't cut it either. Modern AAA games need to load levels in order to run benchmarks, and during those times the GPU powers down and re-cools itself. Even if Michael runs those back-to-back, unless he has found a way to load the next game instantly while the other game finishes its benchmark, there is still time for the GPU to cool down enough to sustain boost clocks.
Did you get a chance to look at the Guru3D charts around the 19 minute mark, where GPU activity drops to zero for a minute or two? The GPU temperature drops a bit because of the thermal resistance between hot spot and die/heatsink, but not that much... it takes longer than that for the heat pipes and heatsink (and to a lesser extent the surrounding air) to cool down.
I'm not sure I understand your comment about running games back to back not being sufficient. Are you saying that when running benchmarks the loading time is so much greater than the run time that the chip never has a chance to heat up? I guess that is possible, but my impression was that typical benchmarks exercise the GPU for long enough to get pretty close to max temp, as long as the system is already warm from previous runs.
I agree that running every benchmark with a cold system would probably not accurately reflect real results, but I don't think any of the reviewers are doing that, if only because it would take too much time.
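For anyone who would rather measure this than argue about it, here is a rough sketch that logs temperature, load and the active core clock once per second while a game or benchmark runs. It assumes an amdgpu card exposed as /sys/class/drm/card0 (adjust the card number for your system) and uses the standard amdgpu sysfs/hwmon files; it is a monitoring aid, not anything from the article.

#!/usr/bin/env python3
# Rough sketch: watch whether the core clock sags once the heatsink is warm.
# Assumes an amdgpu card at /sys/class/drm/card0; paths may differ per system.
import glob
import time

DEV = "/sys/class/drm/card0/device"

def read(path):
    with open(path) as f:
        return f.read().strip()

def current_sclk():
    # pp_dpm_sclk lists the DPM levels; the active one is marked with '*'
    for line in read(f"{DEV}/pp_dpm_sclk").splitlines():
        if line.endswith("*"):
            return line.strip()
    return "unknown"

def temp_c():
    # amdgpu hwmon reports temperature in millidegrees Celsius
    hwmon = glob.glob(f"{DEV}/hwmon/hwmon*/temp1_input")[0]
    return int(read(hwmon)) / 1000.0

while True:
    busy = read(f"{DEV}/gpu_busy_percent")
    print(f"{time.strftime('%H:%M:%S')}  {temp_c():5.1f} C  {busy:>3}% busy  {current_sclk()}")
    time.sleep(1)

Run it in a terminal during a benchmark pass and during the loading screens in between to see how much the clock and temperature actually move.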
- Likes 2
-
Originally posted by Linuxxx:
Interesting, but it still begs the question: why advertise boost clocks at all, then?
So if adequately cooled and not crammed into a small cube because HTPC, any GPU should be able to maintain its maximum boost clock indefinitely?
I am not buying Bridgman's reply, sorry. Video games require relatively constant max power from the graphics card. Sure, there are variations in framerates and load, but they don't make that much of a difference. The way I see it, boost clocks are meant to cheat in benchmarks.
Oh, and as for prewarming the GPU or running games back to back, that argument doesn't cut it either. Modern AAA games need to load levels in order to run benchmarks, and during those times the GPU powers down and re-cools itself. Even if Michael runs those back-to-back, unless he has found a way to load the next game instantly while the other game finishes its benchmark, there is still time for the GPU to cool down enough to sustain boost clocks.
- Likes 1
-
Originally posted by Linuxxx:
Interesting, but it still begs the question: why advertise boost clocks at all, then?
So if adequately cooled and not crammed into a small cube because HTPC, any GPU should be able to maintain its maximum boost clock indefinitely?
Last time I checked, they were being fairly conservative with it, and most games actually ended up hitting higher speeds. It varies by game, though, because they all stress the card in slightly different ways, so one might stabilize at 2.3 GHz and another might hit 2.55 GHz.
Last edited by smitty3268; 16 October 2021, 12:19 AM.
- Likes 1
-
Originally posted by Linuxxx:
Interesting, but it still begs the question: why advertise boost clocks at all, then?
But realistically, GPU buyers shouldn't be looking at anything other than performance benchmarks and thermal/acoustic tests. A GPU is a throughput machine, and clocks are an implementation detail.
Originally posted by Linuxxx:
So if adequately cooled and not crammed into a small cube because HTPC, any GPU should be able to maintain its maximum boost clock indefinitely?
The boost clock is the highest clock that appears in the firmware's voltage/frequency table -- the highest frequency the DVFS governor can choose. Having a large range of frequencies available is useful because workloads are different. Some are able to keep every core in the GPU very busy and pull a ton of power (like FurMark and some OpenCL stress tests), and some are not, perhaps because they're limited by memory bandwidth, because they're poorly optimized, or just because of the nature of whatever they're calculating. If the GPU is running memory-bound code, it might be best to blitz through the math and get back to waiting on memory as soon as possible, by running at a frequency that would burn up the chip in seconds if you fed it a high-power workload.
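To make that power-limit point concrete, here is a toy illustration (not AMD's actual algorithm; the frequency table and power figures are invented): a budget-limited governor picks the highest entry in the table whose estimated draw fits the board power limit, so a light memory-bound load can sit at the top "boost" state while a FurMark-style load gets capped well below it.

# Toy model only: hypothetical DPM levels and power budget, invented numbers.
FREQ_TABLE_MHZ = [500, 1200, 1800, 2300, 2600]  # top entry = advertised boost clock
POWER_BUDGET_W = 130                            # hypothetical board power limit

def pick_clock(watts_per_ghz: float) -> int:
    """Return the highest table entry whose estimated power fits the budget."""
    fits = [f for f in FREQ_TABLE_MHZ if watts_per_ghz * f / 1000 <= POWER_BUDGET_W]
    return max(fits) if fits else min(FREQ_TABLE_MHZ)

print(pick_clock(watts_per_ghz=70))   # heavy compute load -> capped at 1800 MHz
print(pick_clock(watts_per_ghz=45))   # memory-bound load -> holds 2600 MHz "boost"

The real governor also folds in temperature, current and voltage limits, but the basic trade-off is the same: the advertised boost clock is simply the top of the table, not a promise for every workload.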
- Likes 1