
AMD Radeon RX 6600 Linux Performance


  • remmnea
    replied
    For Linux desktop computers it's a much easier choice to make. Nvidia cards cost more than AMD's and have the edge in raw performance, but going with AMD guarantees better compatibility and a choice of reliable drivers, whether open source or proprietary. The same is true for Intel, although Intel's GPUs are slower.



  • aht0
    replied
    Originally posted by Raka555 View Post
    This makes me happy about buying a Vega64 a few years ago.
    Yeah, same. Decent performance and mature drivers.



  • mm0zct
    replied
    Originally posted by Michael View Post

    It's shown on the system table on the 2nd page
    I had to explicitly right-click and "open image in a new tab" to load the system table in a readable form; without doing that, I couldn't tell it contained any more information than was summarised in the rest of the article. Maybe the system configuration table inset could be a clickable link that opens the SVG version of the table on its own.

    It's a 6GB version of the card, for anyone else who cares.



  • Raka555
    replied
    This makes me happy about buying a Vega64 a few years ago.



  • creative
    replied
    At highest settings, 1080p + SMAA, I am actually getting a 172 FPS average with an RTX 3070 in Shadow of the Tomb Raider.



  • bridgman
    replied
    Originally posted by TemplarGR View Post
    I am not buying Bridgman's reply, sorry. Video games require relatively constant max power from the graphics card. Sure, there are variations in framerate and load, but they don't make that much of a difference. The way I see it, boost clocks are meant to cheat in benchmarks.

    Oh, and as for prewarming the GPU or running games back to back, that argument doesn't cut it either. Modern AAA games need to load levels before a benchmark can run, and during that time the GPU powers down and cools off again. Even if Michael runs them back-to-back, unless he has found a way to load the next game instantly while the previous game finishes its benchmark, there is still time for the GPU to cool enough to sustain boost clocks.
    When I look at the temperature / time graphs from a variety of reviews what I see is that the cooling solution heats up slowly and cools down slowly. The GPU doesn't heat up and start throttling in a couple of seconds, and when it is running at "as hot as it gets" temperature it doesn't cool down in seconds either.

    Did you get a chance to look at the Guru3D charts around the 19-minute mark, where GPU activity drops to zero for a minute or two? The GPU temperature drops a bit because of the thermal resistance between the hot spot and the die/heatsink, but not by that much... it takes longer than that for the heat pipes and heatsink (and to a lesser extent the surrounding air) to cool down.

    I'm not sure I understand your comment about running games back-to-back not being sufficient - are you saying that when running benchmarks the loading time is so much greater than the run time that the chip never has a chance to heat up? I guess that is possible, but my impression was that typical benchmarks exercise the GPU long enough to get pretty close to max temp, as long as the system is already warm from previous runs.

    Agree that running every benchmark with a cold system would probably not accurately reflect real results, but I don't think any of the reviewers are doing that if only because it would take too much time.
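    For reference, here's a minimal sketch of how one could watch this on their own card (assuming an amdgpu GPU exposed as card0 and the standard hwmon sysfs interface; paths may differ on other systems). It logs the edge temperature every couple of seconds so the slow heat-up and cool-down of the cooler becomes visible.

    ```python
    # Minimal sketch: log the amdgpu edge temperature over time via hwmon sysfs.
    # Assumes the GPU is card0 and the amdgpu hwmon directory exists; adjust the glob if not.
    import glob
    import time

    def find_temp_file():
        # amdgpu exposes temp1_input (edge temperature, in millidegrees Celsius) under hwmon.
        matches = glob.glob("/sys/class/drm/card0/device/hwmon/hwmon*/temp1_input")
        if not matches:
            raise FileNotFoundError("no amdgpu hwmon temp1_input found under card0")
        return matches[0]

    def main():
        path = find_temp_file()
        start = time.time()
        while True:
            with open(path) as f:
                millideg = int(f.read().strip())
            print(f"{time.time() - start:8.1f}s  {millideg / 1000:.1f} C")
            time.sleep(2)

    if __name__ == "__main__":
        main()
    ```

    Run it in one terminal while looping a benchmark in another and the minutes-long warm-up plateau and cool-down curves are easy to see.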



  • TemplarGR
    replied
    Originally posted by Linuxxx View Post

    Interesting, but that still raises the question: why advertise boost clocks at all, then?

    So if adequately cooled and not crammed into a small HTPC cube, any GPU should be able to maintain its maximum boost clock indefinitely?
    Yeah, this would have been my response as well. If they can keep constant boost clocks at all times, then why have boost clocks in the first place?

    I am not buying Bridgman's reply, sorry. Video games require relatively constant max power from the graphics card. Sure, there are variations in framerate and load, but they don't make that much of a difference. The way I see it, boost clocks are meant to cheat in benchmarks.

    Oh, and as for prewarming the GPU or running games back to back, that argument doesn't cut it either. Modern AAA games need to load levels before a benchmark can run, and during that time the GPU powers down and cools off again. Even if Michael runs them back-to-back, unless he has found a way to load the next game instantly while the previous game finishes its benchmark, there is still time for the GPU to cool enough to sustain boost clocks.



  • smitty3268
    replied
    Originally posted by Linuxxx View Post

    Interesting, but that still raises the question: why advertise boost clocks at all, then?

    So if adequately cooled and not crammed into a small HTPC cube, any GPU should be able to maintain its maximum boost clock indefinitely?
    AMD used to advertise "game clocks" for their cards - not sure if they still do or not. But it was what they expected the card to be able to maintain while playing games.

    Last time I checked, they were being fairly conservative with it, and most games actually ended up hitting higher speeds. It varies by game, though, because they all stress the card in slightly different ways, so one might stabilize at 2.3 GHz while another hits 2.55 GHz.
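    To check what a particular card settles at, the amdgpu driver exposes its sclk DPM table through sysfs; here is a minimal sketch (assuming the GPU is card0; the kernel marks the currently active level with an asterisk):

    ```python
    # Minimal sketch: poll the amdgpu sclk DPM table and print the active level.
    # Assumes the GPU is card0 and uses the amdgpu driver; lines look like "1: 2044Mhz *".
    import time

    PP_DPM_SCLK = "/sys/class/drm/card0/device/pp_dpm_sclk"

    while True:
        with open(PP_DPM_SCLK) as f:
            levels = f.read().splitlines()
        active = next((line for line in levels if line.rstrip().endswith("*")), "none")
        print("active sclk level:", active.strip())
        time.sleep(1)
    ```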
    Last edited by smitty3268; 16 October 2021, 12:19 AM.



  • yump
    replied
    Originally posted by Linuxxx View Post

    Interesting, but that still raises the question: why advertise boost clocks at all, then?
    I think it's because the board partners have to differentiate their products somehow, it's an easily accessible number, and there was a small period of time where "factory overclocked" cards could actually be faster by a decent margin.

    But realistically GPU buyers shouldn't be looking at anything other than performance benchmarks and thermal/acoustic tests. A GPU is a throughput machine, and clocks are an implementation detail.

    Originally posted by Linuxxx View Post
    So if adequately cooled and not crammed into a small HTPC cube, any GPU should be able to maintain its maximum boost clock indefinitely?
    No, it will settle at some frequency below that after a couple of minutes or so. W1zzard over at TechPowerUp always provides good data on this in the later pages of his reviews.

    The boost clock is the highest clock that appears in the firmware's voltage/frequency table -- the highest frequency the DVFS governor is allowed to choose. Having a large range of frequencies available is useful because workloads differ: some can keep every core in the GPU busy and pull a ton of power (like FurMark and some OpenCL stress tests), while others cannot, perhaps because they're limited by memory bandwidth, because they're poorly optimized, or just because of the nature of whatever they're calculating. If the GPU is running memory-bound code, it might be best to blitz through the math and get back to waiting on memory as soon as possible, by running at a frequency that would burn up the chip in seconds if you fed it a high-power workload.
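    As a toy illustration of that idea (made-up numbers, not the actual firmware algorithm): a governor that picks the highest entry in the frequency table whose estimated power fits the budget hands a memory-bound workload the full boost clock, while a power-virus workload gets clamped well below it.

    ```python
    # Toy DVFS sketch (hypothetical numbers, not real firmware logic): pick the highest
    # frequency in the table whose estimated power stays under the board power budget.

    FREQ_TABLE_MHZ = [500, 1000, 1500, 2000, 2300, 2500, 2600]  # top entry = advertised boost
    POWER_BUDGET_W = 160.0

    def estimated_power(freq_mhz, watts_per_ghz):
        # Crude model: for a given workload intensity, power scales roughly with clock.
        return (freq_mhz / 1000.0) * watts_per_ghz

    def pick_clock(watts_per_ghz):
        allowed = [f for f in FREQ_TABLE_MHZ
                   if estimated_power(f, watts_per_ghz) <= POWER_BUDGET_W]
        return max(allowed) if allowed else min(FREQ_TABLE_MHZ)

    # Memory-bound code leaves shader units idle, so watts-per-GHz is low -> full boost clock.
    print("memory-bound workload:", pick_clock(watts_per_ghz=55), "MHz")
    # A FurMark-style power virus keeps every ALU busy -> the governor clamps the clock.
    print("power-virus workload: ", pick_clock(watts_per_ghz=90), "MHz")
    ```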



  • ThoreauHD
    replied
    Stagflation hardware.

