AMD Vega 8 Graphics Performance On Linux With The Ryzen 3 2200G
-
Here is a YouTube video that shows how much of a difference memory frequency can make (especially on Ryzen, given how the Infinity Fabric works): https://youtu.be/0VfB5_pI8kU?t=9m58s
-
A low VRAM carve-out also saves power, since with GTT you don't get the high spikes you do with VRAM. So you don't actually lose anything that way; you just even things out. You can, say, run a given game entirely out of GTT where performance is already sufficient - who cares about spikes? OK, some people like to see 1000 fps when they look at the sky, but I have no idea what the point of that is. They will win some benchmarks on average because of those 1000 fps at the sky, and then what?

So there are two ways to go, and the question is whether you want to win every benchmark every day or to save power. Someone buying an APU to overclock everything further for more intensive gaming should set the VRAM carve-out as high as possible; someone who wants a GE 35W model for business or HTPC use, or who wants to cap an existing part at 45W or whatever, should set the carve-out as low as possible. It depends on what you want. Pushing numbers up is only one side of the coin - the low setting shouldn't hurt minimum fps, it mostly cuts the highs, and that is where the "loss" is, if we can even call it a loss.

Low fps is a problem; nobody cares about high fps. Some people think the opposite because they only look at the highs, but it is really the other way around. So overclocking everything as much as possible - not to push the highs but to lift the lows - and then forcing buffers into GTT by setting a low VRAM carve-out sounds like the most sensible setup for gaming on an APU.

Last edited by dungeon; 16 February 2018, 05:55 AM.
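For anyone wanting to check how their own carve-out and GTT pool are sized, the amdgpu kernel driver exposes the totals in sysfs as `mem_info_vram_total`, `mem_info_gtt_total`, and their `_used` counterparts (values in bytes). A minimal Python sketch - the `card0` path assumes the APU is the first/only GPU:

```python
from pathlib import Path

# sysfs files exposed by the amdgpu kernel driver (values are in bytes);
# /sys/class/drm/card0 assumes the APU is the first/only GPU on the system.
MEM_FILES = ("mem_info_vram_total", "mem_info_vram_used",
             "mem_info_gtt_total", "mem_info_gtt_used")

def read_mem_info(device_dir="/sys/class/drm/card0/device"):
    """Return {file_name: size_in_MiB} for whichever mem_info files exist."""
    info = {}
    for name in MEM_FILES:
        path = Path(device_dir) / name
        if path.exists():
            info[name] = int(path.read_text()) // (1024 * 1024)
    return info

if __name__ == "__main__":
    for key, mib in read_mem_info().items():
        print(f"{key}: {mib} MiB")
```

On a 2200G with a 256MB carve-out you would expect `mem_info_vram_total` to report roughly 256 MiB, with the GTT pool several times larger.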
-
Originally posted by duby229 View Post
I don't think that's true at all. At least not for gaming. I haven't seen it.
I think losing up to 25% on average should be expected nowadays, but if it's more, then something really unusual has happened.

Last edited by dungeon; 15 February 2018, 11:36 PM.
-
Originally posted by duby229 View Post
I think I'm misunderstanding. Isn't all VRAM on an APU system RAM? And in that case wouldn't all APUs have the same VRAM performance as system RAM? Because it is system RAM.
-
Originally posted by duby229 View Post
I think I'm misunderstanding. Isn't all VRAM on an APU system RAM? And in that case wouldn't all APUs have the same VRAM performance as system RAM? Because it is system RAM.
https://developer.amd.com/wordpress/...1004_final.pdf
The hardware has changed since that white paper, but there are still two "views" of system RAM. During startup the SBIOS reserves a section of system memory (holding it back from the OS) and then the driver maps it via a HW-supported aperture and treats it as VRAM.
There are a couple of HW functions which do not go through the page tables - e.g. the page tables themselves and the scanout buffer for display - and these need to go in the "VRAM" aperture.
There used to be a big difference in performance between the two paths (maybe 2:1) because the "VRAM path" (aka Garlic) fetched data in bigger chunks and did not support coherency with CPU cache, but AFAIK these days the performance differences are much reduced and might even be zero (something to check if I ever have free time).
From a driver and application perspective there are still two different memory pools, one for system memory and another for (emulated) VRAM.

Last edited by bridgman; 15 February 2018, 06:55 PM.
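The placement rule described above - buffers that bypass the GPU page tables must live in the carve-out, everything else can fall back to GTT - can be sketched as a toy model. All names here are illustrative; this is not the actual amdgpu/TTM API:

```python
# Toy model of the two "views" of system RAM on an APU: a fixed carve-out
# treated as VRAM, plus regular system pages mapped through GTT.  Buffers
# that bypass the GPU page tables (the page tables themselves, the display
# scanout buffer) must be placed in the carve-out; everything else may
# spill to GTT when the carve-out is full.
class ApuMemory:
    def __init__(self, vram_carveout_mib, gtt_mib):
        self.free = {"vram": vram_carveout_mib, "gtt": gtt_mib}

    def alloc(self, size_mib, needs_aperture=False):
        """Return the pool a buffer lands in, or None if nothing fits."""
        pools = ["vram"] if needs_aperture else ["vram", "gtt"]
        for pool in pools:
            if self.free[pool] >= size_mib:
                self.free[pool] -= size_mib
                return pool
        return None

mem = ApuMemory(vram_carveout_mib=256, gtt_mib=3072)
print(mem.alloc(8, needs_aperture=True))   # scanout buffer -> 'vram'
print(mem.alloc(512))                      # big texture spills -> 'gtt'
```

With a small carve-out, ordinary textures quickly spill to GTT, which is exactly why the relative performance of the two paths matters so much for an APU.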
-
These tests pretty much line up with the sites that test exclusively on Windows. It will handle most games well, except those run at 1080p. Some games will run OK at 1080p on low settings, but it won't always be satisfactory.
-
Originally posted by bridgman View Post
I don't believe anyone has identified issues with TTM itself related to managing system memory.
What they might be talking about is tuning the migration heuristics across the stack for the case where (a) simulated VRAM and system RAM have fairly similar performance rather than the more common ~5:1 performance difference and (b) the carved-out system memory used for emulating VRAM is smaller than the natural VRAM footprint of the application.
The Windows stack has probably seen more tuning effort to avoid excessive migration in that scenario since MS was requiring an extremely small carve-out for a while, something like 32-80MB IIRC. I know there has been some comparable work on the Linux stack but probably not as much.
-
Originally posted by duby229 View Post
I'm sure you're right, but that's not what I read. I've read that TTM has some serious design flaws for dealing with system RAM as graphics RAM. And that's a problem for integrated graphics. It's not that TTM can't do it exactly, just that it's not the right design for it.
What they might be talking about is tuning the migration heuristics across the stack for the case where (a) simulated VRAM and system RAM have fairly similar performance rather than the more common ~5:1 performance difference and (b) the carved-out system memory used for emulating VRAM is smaller than the natural VRAM footprint of the application.
The Windows stack has probably seen more tuning effort to avoid excessive migration in that scenario, since MS was requiring an extremely small carve-out for a while, something like 32-80MB IIRC. I know there has been some comparable work on the Linux stack, but probably not as much.

Last edited by bridgman; 15 February 2018, 02:17 PM.
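The trade-off behind those migration heuristics can be made concrete with a toy cost model: migrating a buffer into the carve-out only pays off when the per-access speedup, times the expected number of GPU accesses, outweighs the one-time copy cost. The ratios below are illustrative; nothing here is the actual TTM heuristic:

```python
# Toy cost model for VRAM migration.  Moving a buffer into the VRAM
# carve-out costs roughly a fixed number of accesses' worth of copying;
# it pays off only if the total time saved on later GPU accesses exceeds
# that.  When carve-out and GTT perform similarly (APU case), the saving
# per access shrinks toward zero and almost nothing should migrate.
def should_migrate(expected_accesses, vram_gtt_speed_ratio,
                   copy_cost_accesses=100):
    # Fraction of a GTT access saved per access (ratio 5.0 -> 0.8 saved).
    saving_per_access = 1.0 - 1.0 / vram_gtt_speed_ratio
    return expected_accesses * saving_per_access > copy_cost_accesses

# With a ~5:1 dGPU-like gap, a hot buffer is worth migrating...
print(should_migrate(1000, vram_gtt_speed_ratio=5.0))   # True
# ...but when the carve-out and GTT perform almost the same, it rarely is.
print(should_migrate(1000, vram_gtt_speed_ratio=1.1))   # False
```

This is why heuristics tuned for the common ~5:1 discrete-GPU case can over-migrate on an APU, where the real ratio is close to 1:1.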