AMD Vega 8 Graphics Performance On Linux With The Ryzen 3 2200G


  • schmidtbag
    replied
    Originally posted by starshipeleven View Post
    GPUs in general work best with high-bandwidth RAM, and that's why GPUs have GDDR (which is NOT the same as the DDR used in RAM DIMMs). GDDR has higher latencies, but significantly more bandwidth.
    I agree, but at least when it comes to Haswell's graphics, it isn't starving for bandwidth. Intel's Iris Pro graphics might be more demanding, though.
    The article you cite should already hint that faster RAM helps, and does so more detectably for the iGPU than for CPU performance (which gains probably only 1-2%); the issue is that even with an OC the iGPU remains starved for memory.
    It did show a performance improvement, but a very disproportionate one. When the memory clock is 66% faster but the best-case performance gain is only 10%, the GPU is not starving for bandwidth. It's also worth pointing out that 38% faster RAM yielded roughly the same 10% gain.
    Meanwhile, if you check Phoronix articles where AMD APUs are tested at different memory speeds, the performance gain is directly proportional to the memory speed across each frequency increase. This suggests those GPUs were completely bandwidth-bottlenecked.
    If the bottleneck was the GPU itself, adding faster RAM would not have helped much (like overclocking RAM for CPU loads).
    But that's the thing - adding faster RAM didn't help much.
    Keep in mind, depending on the test you do, faster RAM will just about always improve performance. However, that doesn't mean the RAM is significantly bottlenecking the processor in real-world tests.
    In other words, if a 66% RAM frequency increase results in a roughly 66% performance increase, the processor is starving for bandwidth. But when either a 38% or 66% increase results in a 10% performance increase, the RAM isn't enough of a bottleneck to worry about.
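
    A quick back-of-the-envelope way to frame that argument (a minimal C sketch; the 38%/66%/10% figures are the ones quoted in this thread, and the "efficiency" ratio is just an illustrative metric, not a standard one):

        #include <stdio.h>

        /* "Scaling efficiency": observed speedup divided by the bandwidth
         * increase that produced it.  Near 1.0 means the GPU is bandwidth-
         * bound; near 0 means memory is not the bottleneck. */
        static double scaling_efficiency(double bw_gain, double perf_gain)
        {
            return perf_gain / bw_gain;
        }

        int main(void)
        {
            /* Haswell IGP numbers quoted in this thread */
            printf("38%% more bandwidth, 10%% faster: %.2f\n",
                   scaling_efficiency(0.38, 0.10)); /* ~0.26 */
            printf("66%% more bandwidth, 10%% faster: %.2f\n",
                   scaling_efficiency(0.66, 0.10)); /* ~0.15 */
            /* A fully bandwidth-starved GPU would scale roughly 1:1 */
            printf("66%% more bandwidth, 66%% faster: %.2f\n",
                   scaling_efficiency(0.66, 0.66)); /* 1.00 */
            return 0;
        }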



  • starshipeleven
    replied
    Originally posted by schmidtbag View Post
    Do you have sources on this? I'm not saying you're wrong, I'm legitimately curious about this.
    I found this one article from Anandtech testing a Haswell IGP at different memory speeds, and it only yielded up to a 10% improvement for a 66% clock increase, using DDR3. That's definitely not starving for bandwidth, but rather just the GPU being opportunistic with the higher clocks. Intel has improved their graphics since Haswell, but I doubt they've changed enough that DDR4 speeds are insufficient. But this is why I ask for your sources; maybe there's something I don't know.
    GPUs in general work best with high-bandwidth RAM, and that's why GPUs have GDDR (which is NOT the same as the DDR used in RAM DIMMs). GDDR has higher latencies, but significantly more bandwidth.

    The article you cite should already hint that faster RAM helps, and does so more detectably for the iGPU than for CPU performance (which gains probably only 1-2%); the issue is that even with an OC the iGPU remains starved for memory.

    If the bottleneck was the GPU itself, adding faster RAM would not have helped much (like overclocking RAM for CPU loads).
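
    To put numbers on the GDDR point, a minimal sketch computing theoretical peak bandwidth (the DDR4-2933 dual-channel figure matches the 2200G's officially supported maximum; the 128-bit 7 GT/s GDDR5 card is an illustrative example, not a part tested in this review):

        #include <stdio.h>

        /* Theoretical peak bandwidth in GB/s from effective transfer rate
         * (MT/s) and total bus width (bits); sustained figures are lower. */
        static double peak_gbs(double mts, int bus_bits)
        {
            return mts * bus_bits / 8.0 / 1000.0;
        }

        int main(void)
        {
            /* Dual-channel DDR4-2933 (2 x 64-bit channels) */
            printf("DDR4-2933 dual channel: %5.1f GB/s\n", peak_gbs(2933, 128));
            /* Illustrative low-end discrete card: 128-bit GDDR5 at 7 GT/s */
            printf("GDDR5 7 GT/s, 128-bit:  %5.1f GB/s\n", peak_gbs(7000, 128));
            return 0;
        }

    And the APU's ~47 GB/s is shared with the CPU, whereas a discrete card has its memory pool to itself.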
    Last edited by starshipeleven; 15 February 2018, 01:00 PM.



  • agd5f
    replied
    Originally posted by duby229 View Post

    I'm sure you're right, but that's not what I read. I've read that TTM has some serious design flaws for dealing with system RAM as graphics RAM, and that's a problem for integrated graphics. It's not that TTM can't do it, exactly; it's just not the right design for it.
    That was the argument Intel used when they decided to drop TTM and implement GEM in the first place years ago, but that is certainly not the case these days (and it arguably wasn't the case back then; TTM was originally written to support Intel hardware). Intel could arguably use TTM just fine, but I doubt it's worth the effort for them at this point. TTM works just fine for APUs.



  • duby229
    replied
    Originally posted by psycho_driver View Post

    Based on all the Windows reviews I've been reading, these things will run hotter than their Intel counterparts (a proud tradition for AMD). However, I've also seen some overclocking results where they were running in the upper 80s and still stable.
    That's only because AMD labels its products with their maximum TDP, whereas Intel labels its products with their average TDP. The metric does not have the same meaning between those two vendors.

    EDIT: Both companies design their products to reach their TDP goals, but since the metric means something different for each, they behave differently accordingly. AMD's figure is a maximum, Intel's is an average. AMD scales to the highest it can, Intel scales to an average. It makes total sense why the thermal properties are different.
    Last edited by duby229; 15 February 2018, 11:34 AM.



  • psycho_driver
    replied
    Originally posted by M@GOid View Post
    Michael, if your motherboard has the setting, it would be nice to test the 45W mode that those APUs have, to see how much performance it loses, and how much it affects temperature and power consumption.

    I'm close to buying one of these cases for an HTPC, and since it is cramped inside, the lower the heat, the better.
    Based on all the Windows reviews I've been reading, these things will run hotter than their Intel counterparts (a proud tradition for AMD). However, I've also seen some overclocking results where they were running in the upper 80s and still stable.



  • duby229
    replied
    Originally posted by agd5f View Post

    Note that there are two aspects to GEM: the API and the memory manager. Most DRM drivers use the GEM API for buffer bookkeeping, but mainly only Intel uses the memory-manager part. TTM also provides memory-manager capabilities, so TTM and the GEM memory manager are largely equivalent. TTM works fine for integrated cards.
    I'm sure you're right, but that's not what I read. I've read that TTM has some serious design flaws for dealing with system RAM as graphics RAM, and that's a problem for integrated graphics. It's not that TTM can't do it, exactly; it's just not the right design for it.



  • psycho_driver
    replied
    Originally posted by wizard69 View Post

    The issue with memory speed has been there since day one of the first APU. This should surprise no one. It is the reason I want to see an APU with HBM built in.
    HBM would make a lot more sense for the 2400G. If you want to use it as an APU, there's really no reason to spend $70 (or even $30) more for that processor vs. the 2200G.



  • psycho_driver
    replied
    This part has the potential to be a big winner for AMD. A quad-core APU that's downright decent for almost any use case, for $99? Finally we could see some good, cheap PCs on the market, something that has been sorely needed for a while. Memory prices just need to come down to make it a reality.



  • agd5f
    replied
    Originally posted by duby229 View Post

    I was under the impression that was just an aperture, and that the driver will allocate as much RAM as it actually needs. Is that wrong? In fact, I'm pretty sure that was the whole point of GEM: TTM couldn't do it, so GEM was devised to sort of "fill in the blanks," so to speak. It was specifically made for integrated graphics sharing system RAM. TTM is fine all by itself for discrete cards, but GEM was needed to make integrated cards viable.
    Note that there are two aspects to GEM: the API and the memory manager. Most DRM drivers use the GEM API for buffer bookkeeping, but mainly only Intel uses the memory-manager part. TTM also provides memory-manager capabilities, so TTM and the GEM memory manager are largely equivalent. TTM works fine for integrated cards.
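
    A simplified, standalone sketch of that layering (the struct names mirror the real DRM ones, but the bodies here are stand-ins, not kernel code): userspace deals in GEM handles for bookkeeping, while a TTM-style manager decides where the buffer's memory actually lives.

        #include <stdio.h>

        /* Stand-in for the GEM side: per-buffer bookkeeping behind the
         * handles userspace sees. */
        struct drm_gem_object {
            unsigned long size;
        };

        /* Stand-in for the TTM side: placement domains and migration. */
        enum placement { PLACE_VRAM, PLACE_SYSTEM_RAM };

        struct ttm_buffer_object {
            struct drm_gem_object base;  /* GEM handles resolve to this BO */
            enum placement place;        /* TTM decides and migrates this  */
        };

        int main(void)
        {
            /* On an APU there is no dedicated VRAM, so the TTM side simply
             * places the buffer in system RAM -- the GEM bookkeeping on top
             * is unchanged, which is why TTM works fine for integrated GPUs. */
            struct ttm_buffer_object bo = { { 4096 }, PLACE_SYSTEM_RAM };
            printf("BO of %lu bytes placed in %s\n", bo.base.size,
                   bo.place == PLACE_VRAM ? "VRAM" : "system RAM");
            return 0;
        }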



  • schmidtbag
    replied
    Originally posted by starshipeleven View Post
    Even Intel iGPUs are memory-starved; go figure how starved APUs are.
    Do you have sources on this? I'm not saying you're wrong, I'm legitimately curious about this.
    I found this one article from Anandtech testing a Haswell IGP at different memory speeds, and it only yielded up to a 10% improvement for a 66% clock increase, using DDR3. That's definitely not starving for bandwidth, but rather just the GPU being opportunistic with the higher clocks. Intel has improved their graphics since Haswell, but I doubt they've changed enough that DDR4 speeds are insufficient. But this is why I ask for your sources; maybe there's something I don't know.

