AMD Raven Ridge Graphics On Linux vs. Lower-End NVIDIA / AMD GPUs


  • msroadkill612
    replied
    It's a dilemma for many: APU or not?

    If I were on the fence but leaning toward a dGPU on a minimalist budget, I would spend a little more than an RX 550 costs to get the most recent AMD GPU that best matches my Ryzen CPU (~1600X @ ~$170?). The ~$165 14-CU RX 560 is the cheapest card sharing the 14nm Polaris architecture of Vega's immediate predecessors.

    In the absence of any cheap Vega dGPUs, I would like to stick close to the Zen/Vega APU ecosystem.

    I would bank on future synergies with the ~sibling AMD processors over any immediate FPS benefit an older or Nvidia-architecture card may currently have.

    With 8GB, it would be a more solid worker than an 8GB 2400G, for a fair ~$160 premium.



  • rdeleonp
    replied
    Originally posted by bridgman

    We did some testing a year or so ago as part of an effort to convince our business folks to stop promoting APUs paired with weak dGPUs, and at the time we did still find that larger carveout did improve performance. My takeaway was that the main reason people were thinking a small dGPU was faster than APU graphics was that the default carveout on APU was so low (32-80MB per Microsoft requirement), and that configuring a system with APU only was preferable as long as some or all of the power and thermal budget previously given to the dGPU could be given to the APU instead.

    That said, the linked article focused on games whose VRAM requirements were larger than the largest carveout option (2-3GB) and so missed the "everything fits in emulated VRAM" scenario which is still pretty common for games.

    My current (albeit unconfirmed) understanding is that Raven should not show much performance difference between emulated VRAM and system memory (earlier APUs were more like 2:1, with Carrizo somewhere in between) and so "automatic migration to VRAM where possible" could probably be disabled completely, but I don't know how much of this is reflected in current drivers. There has been some ongoing work related to migration but my impression was that it was more related to dGPU than APU.

    So definitely would be interesting to see. I believe Raven is the first APU where emulated VRAM and system memory really could have the same performance, since all of the accesses go through the same data paths (the common data fabric) anyways.
    Within margin of error, that seems to be the case:

    RAVEN RIDGE BUFFER



  • rene
    replied
    Originally posted by edwaleni

    If AMD dies you would see a big movement to ARM for those avoiding Intel. (VIA won't/can't respond.) Qualcomm would probably step into the vacuum. AMD isn't going to die soon; I would guess they are going to get swallowed by a much larger fish before they cease to exist.

    Siemens or Qualcomm would be the most likely entity for them to align with.

    My RR arrived yesterday, but the MSI board is in FedEx/USPS limbo at the moment. The DDR4 arrives today.

    Honestly, it appears that there still need to be a few rounds of kernel/driver updates before RR is considered optimal for Linux in general. I won't be pushing Linux onto this RR right now, but will see how it does when things settle down.
    Well, given I run Linux I can move to whatever I want: https://www.youtube.com/watch?v=AU_RV8uoTIo However, I have the impression too many people still depend on x86 binary-only software and/or Windows, and will probably stay with Intel for another decade or two… :-/ I do not see any of those 80+% Windows users moving to ARM any time soon.



  • edwaleni
    replied
    Originally posted by edwaleni

    If AMD dies you would see a big movement to ARM for those avoiding Intel. (VIA won't/can't respond.) Qualcomm would probably step into the vacuum. AMD isn't going to die soon; I would guess they are going to get swallowed by a much larger fish before they cease to exist.

    Siemens or Qualcomm would be the most likely entity for them to align with.

    My RR arrived yesterday, but the MSI board is in FedEx/USPS limbo at the moment. The DDR4 arrives today.

    Honestly, it appears that there still need to be a few rounds of kernel/driver updates before RR is considered optimal for Linux in general. I won't be pushing Linux onto this RR right now, but will see how it does when things settle down.
    Well, the RR project will have to wait a few more days. Amazon Prime sent me a butterfly phone case in 1 day instead of the DDR4 I ordered. Talk about a foobar. Back to NewEgg now.

    DDR4 prices have really gone up.



  • angrypie
    replied
    Originally posted by grok

    Do you know how long AMD was stuck with Phenom II performance, or incremental gains over it?
    Hint: look at the A12-9800 above your post.
    Phenom II would be at least 30% slower. Stop talking out of your ass.



  • grok
    replied
    Originally posted by NateHubbard

    That your almost-decade-old CPU is missing an instruction isn't surprising. That you're trying to game with it is, though.
    But, like you said, you're overdue for an upgrade; just don't expect your new CPU to still run everything in the year 2027.
    Do you know how long AMD was stuck with Phenom II performance, or incremental gains over it?
    Hint: look at the A12-9800 above your post.



  • bridgman
    replied
    Originally posted by dungeon
    Michael, could you push TTM like this to 64 MB VRAM buffer size to see what happens?
    We did some testing a year or so ago as part of an effort to convince our business folks to stop promoting APUs paired with weak dGPUs, and at the time we did still find that larger carveout did improve performance. My takeaway was that the main reason people were thinking a small dGPU was faster than APU graphics was that the default carveout on APU was so low (32-80MB per Microsoft requirement), and that configuring a system with APU only was preferable as long as some or all of the power and thermal budget previously given to the dGPU could be given to the APU instead.

    That said, the linked article focused on games whose VRAM requirements were larger than the largest carveout option (2-3GB) and so missed the "everything fits in emulated VRAM" scenario which is still pretty common for games.

    My current (albeit unconfirmed) understanding is that Raven should not show much performance difference between emulated VRAM and system memory (earlier APUs were more like 2:1, with Carrizo somewhere in between) and so "automatic migration to VRAM where possible" could probably be disabled completely, but I don't know how much of this is reflected in current drivers. There has been some ongoing work related to migration but my impression was that it was more related to dGPU than APU.

    So definitely would be interesting to see. I believe Raven is the first APU where emulated VRAM and system memory really could have the same performance, since all of the accesses go through the same data paths (the common data fabric) anyways.
    Last edited by bridgman; 17 February 2018, 11:49 AM.
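
    The carveout vs. system-memory split bridgman describes can be inspected from userspace. A minimal sketch, assuming the standard amdgpu sysfs memory-info nodes; `card0` is an assumption, so adjust the index if you have more than one GPU:

    ```shell
    #!/bin/sh
    # Report how much dedicated (carve-out) VRAM vs. GTT (GPU-accessible
    # system memory) the amdgpu driver exposes for the first card.
    DEV=/sys/class/drm/card0/device
    for f in mem_info_vram_total mem_info_gtt_total; do
        if [ -r "$DEV/$f" ]; then
            printf '%s: %s bytes\n' "$f" "$(cat "$DEV/$f")"
        else
            printf '%s: not readable (no amdgpu device here?)\n' "$f"
        fi
    done
    ```

    On an APU the VRAM figure is essentially the BIOS carveout, so a much larger GTT figure is consistent with bridgman's point that system memory is the real backing store either way.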



  • angrypie
    replied
    Originally posted by s_j_newbury
    Any code which assumes SSSE3 is non-conformant and non-portable.
    Don't expect games to be portable at all.

    Devs will just optimize for the most popular hardware (i.e. Intel and NVIDIA) and be done with it, because margins are low. The patch fest to fix Windows games under Ryzen only happened because the bias was too obvious, and there was some ugliness as well.

    This is why AMD has to fight with more cores/threads: nobody cares about microarchitectural differences. Having more cores usually does the trick, but not always. Ryzen does better than the FX because it isn't as dependent on software optimizations, but that doesn't mean they aren't needed. Still, this is the message people get.



  • Michael
    replied
    Originally posted by _ONH_
    On the other hand, why did you test some GCN GPUs with amdgpu 1.4.0 and others with modesetting 1.19.5? For consistency's sake.
    I was simply using what was selected by default.



  • s_j_newbury
    replied
    Originally posted by Qaridarium

    Don't you think that a system from 8 January 2009 needs an upgrade ~9 years later?
    Why would it? The x86-64 psABI is well defined and doesn't include any new REQUIRED instructions from any CPU released since it was first standardised. That means SSE2 is required, but SSSE3 is OPTIONAL. Any code which assumes SSSE3 is non-conformant and non-portable; there is no guarantee it will work on any other CPU. That's why feature detection is a thing, along with function multi-versioning. Unless you're running Gentoo (where binaries are typically only local and non-distributed, or where the CPU architecture is well defined, as with ChromeOS), there's no reason why any x86-64 code won't run on any x86-64 CPU, past, present or future.
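
    The feature detection mentioned above can be sketched with GCC/Clang's CPU builtins; a minimal example, where `sum_baseline` is a hypothetical helper standing in for whatever routine would get an SSSE3 variant:

    ```c
    #include <stdio.h>

    /* Baseline path: plain C, valid on any x86-64 CPU (SSE2 is the psABI floor). */
    static int sum_baseline(const int *v, int n)
    {
        int s = 0;
        for (int i = 0; i < n; i++)
            s += v[i];
        return s;
    }

    int main(void)
    {
        int v[4] = { 1, 2, 3, 4 };

        __builtin_cpu_init();  /* initialise the runtime CPU-feature flags */

        /* Detect SSSE3 at runtime instead of assuming it at compile time. */
        if (__builtin_cpu_supports("ssse3"))
            puts("SSSE3 present: an SSSE3-optimised path could be dispatched");
        else
            puts("SSSE3 absent: staying on the baseline x86-64 path");

        printf("sum: %d\n", sum_baseline(v, 4));
        return 0;
    }
    ```

    The function multi-versioning half of the story is what GCC automates with `__attribute__((target_clones("ssse3","default")))`, which emits both variants plus a resolver that picks one at load time.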

    Phenom IIs are fast enough, and with undervolting efficient enough, not to need replacing for most uses. Sure, one would probably be the bottleneck when gaming with the latest high-end GPUs, but even then I suspect an optimised Vulkan graphics engine taking advantage of all the CPU cores would work well enough.

    Even industry- or military-grade computers have a maximum lifetime of 10 years.
    That's nonsense. As an extreme example, the U.S. nuclear weapon command and control systems are only being replaced now; they have been running on the IBM Series/1 minicomputer since the mid-1970s. https://en.wikipedia.org/wiki/IBM_Series/1

