AMD Vega 8 Graphics Performance On Linux With The Ryzen 3 2200G


  • #21
    Originally posted by [email protected] View Post
    But when I said I'm almost buying one of those things, it's because I do not have much faith in their PSUs. Over the years building PCs I learned that the secret to reliability is a good PSU. And the contraptions those cases have make me raise an eyebrow.
    Ah, I see you are a man of culture as well. I've been into these things for a bit, maybe I can help.
    I know a source of good DC-DC PSUs (and battery boards and UPS boards) http://www.mini-box.com/DC-DC

    If you are looking for badass 400W DC-DC PSUs (and smaller ones) look here https://www.hdplex.com/

    Then of course you need a quality AC-DC power brick because the PSUs above are DC only (12 to 24V or similar), but these are relatively easy to come by, Delta brand is good, or Dell power bricks for their mobile workstations and gaming laptops. (also on the second site above they have some)

    I'm personally a fan of cases where the GPU is mounted flat, with a riser/ribbon (not at a 90-degree angle like in normal cases); top of the line is this http://nfc-systems.com/skyreach-4-mini/ but with the above PSUs you can retrofit any other case designed for a Flex ATX PSU and you will be fine.

    You can also find other flat cases where you can fit an SFX PSU (smaller than ATX, but it has a 100mm or 120mm fan, not a tiny 5000RPM one), and a far longer GPU than you could fit in a cube case.
    https://www.techspot.com/review/1062...z02/page3.html
    Last edited by starshipeleven; 15 February 2018, 10:00 AM.

    Comment


    • #22
      Originally posted by starshipeleven View Post
      Even Intel iGPUs are memory-starved; go figure how starved APUs are.
      Do you have sources on this? I'm not saying you're wrong, I'm legitimately curious about this.
      I found this one article from Anandtech testing a Haswell IGP at different memory speeds, and it only yielded up to a 10% improvement for a 66% clock increase, using DDR3. That's definitely not starving for bandwidth, but rather just the GPU being opportunistic with the higher clocks. Intel has improved their graphics since Haswell, but I doubt they've made enough changes that DDR4 speeds are insufficient. But this is why I ask for your sources, since maybe there's something I don't know.
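As a back-of-the-envelope check on that argument: peak DDR bandwidth scales linearly with transfer rate, so a 66% clock bump is a 66% bandwidth bump. The speeds below are illustrative assumptions, not the exact ones from the Anandtech article:

```python
# Peak theoretical dual-channel DDR bandwidth: each channel is 64 bits
# (8 bytes) wide, and "DDR3-1600" means 1600 million transfers per second.
def ddr_bandwidth_gbs(mt_per_s, channels=2, bus_bytes=8):
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

base = ddr_bandwidth_gbs(1600)  # ~25.6 GB/s
fast = ddr_bandwidth_gbs(2666)  # ~42.7 GB/s, roughly a 66% increase

print(f"{base:.1f} GB/s -> {fast:.1f} GB/s (+{(fast / base - 1) * 100:.0f}%)")
# A fully bandwidth-bound GPU would gain close to 66% FPS from this jump;
# a ~10% gain points at another bottleneck (shader throughput, ROPs, etc.).
```

If the observed FPS gain tracked the bandwidth increase closely, that would indicate starvation; a ~10% gain suggests the IGP was only partially bandwidth-limited at those settings.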

      Comment


      • #23
        Originally posted by duby229 View Post

        I was under the impression that was just an aperture, but the driver will allocate as much RAM as the driver actually needs. Is that wrong? In fact I'm pretty sure that was the whole point of GEM, because TTM couldn't do it, so GEM was devised to sort of "fill in the blanks" so to speak. It was specifically made for integrated graphics sharing system RAM. TTM is fine all by itself for discrete cards, but GEM was needed to make integrated cards viable.
        Note that there are two aspects to GEM, the API and the memory manager. Most drm drivers use the GEM API for buffer bookkeeping, but mainly only intel uses the memory manager part. TTM also provides memory manager capabilities. So TTM and the GEM memory manager stuff are largely equivalent. TTM works fine for integrated cards.

        Comment


        • #24
          This part has the potential to be a big winner for AMD. A quad-core APU that's downright decent for almost any use case, for $99? Finally we could see some good, cheap PCs on the market, something that has been sorely needed for a while. Memory prices just need to come down to make it a reality.

          Comment


          • #25
            Originally posted by wizard69 View Post

            The issue with memory speed has been there since day one of the first APU. This should surprise no one. It is the reason I want to see an APU with HBM built in.
            HBM would make a lot more sense for the 2400G. If you're going to use it as an APU, there's really no reason to spend $70 (or even $30) more for that processor vs. the 2200G.

            Comment


            • #26
              Originally posted by agd5f View Post

              Note that there are two aspects to GEM, the API and the memory manager. Most drm drivers use the GEM API for buffer bookkeeping, but mainly only intel uses the memory manager part. TTM also provides memory manager capabilities. So TTM and the GEM memory manager stuff are largely equivalent. TTM works fine for integrated cards.
              I'm sure you're right, but that's not what I read. I've read that TTM has some serious design flaws for dealing with system RAM as graphics RAM. And that's a problem for integrated graphics. It's not that TTM can't do it exactly, just that it's not the right design for it.

              Comment


              • #27
                Originally posted by [email protected] View Post
                Michael, if your motherboard has the setting, it would be nice to test the 45W setup that those APUs have, to see how much it loses in performance, and how much it affects temperature and power consumption.

                I'm almost buying one of these cases for an HTPC, and since it is cramped inside, the lower the heat, the better.
                Based on all the Windows reviews I've been reading, these things will run hotter than their Intel counterparts (a proud tradition for AMD). However, I've also seen some overclocking results where they were running in the upper 80s and still stable.

                Comment


                • #28
                  Originally posted by psycho_driver View Post

                  Based on all the Windows reviews I've been reading, these things will run hotter than their Intel counterparts (a proud tradition for AMD). However, I've also seen some overclocking results where they were running in the upper 80s and still stable.
                  That's only because AMD labels their products with the maximum TDP, whereas Intel labels theirs with an average TDP. The metric does not have the same meaning between those two vendors.

                  EDIT: Both companies design their products to reach their TDP goals, but since the metric means something different, they perform differently accordingly. AMD's figure is a maximum, Intel's is an average. AMD scales to the highest it can, Intel scales to an average. It makes total sense why the thermal properties are different.
                  Last edited by duby229; 15 February 2018, 11:34 AM.

                  Comment


                  • #29
                    Originally posted by duby229 View Post

                    I'm sure you're right, but that's not what I read. I've read that TTM has some serious design flaws for dealing with system RAM as graphics RAM. And that's a problem for integrated graphics. It's not that TTM can't do it exactly, just that it's not the right design for it.
                    That was the argument that intel used when they decided to drop TTM and implement GEM in the first place years ago, but that is certainly not the case these days (and it arguably wasn't the case back then; TTM was originally written to support intel hardware). Intel could arguably use TTM just fine, but I doubt it's worth the effort for them at this point. TTM works just fine for APUs.

                    Comment


                    • #30
                      Originally posted by schmidtbag View Post
                      Do you have sources on this? I'm not saying you're wrong, I'm legitimately curious about this.
                      I found this one article from Anandtech testing a Haswell IGP at different memory speeds, and it only yielded up to a 10% improvement for a 66% clock increase, using DDR3. That's definitely not starving for bandwidth, but rather just the GPU being opportunistic with the higher clocks. Intel has improved their graphics since Haswell, but I doubt they've made enough changes that DDR4 speeds are insufficient. But this is why I ask for your sources, since maybe there's something I don't know.
                      GPUs in general work best with high-bandwidth RAM, and that's why GPUs have GDDR (which is NOT the same as the DDR used in RAM DIMMs). GDDR has higher latencies, but significantly more bandwidth.

                      The article you cite should already hint that faster RAM helps, and that it does so more detectably than it helps CPU performance (which is probably like 1-2%); the issue is that even with an OC the iGPU remains starved for memory.

                      If the bottleneck was the GPU itself, adding faster RAM would not have helped much (like overclocking RAM for CPU loads).
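To put rough numbers on the gap being described: a dual-channel DDR4 setup is shared between the CPU and the iGPU, while even a modest discrete card gets its GDDR5 bandwidth to itself. The specific speeds below are illustrative assumptions:

```python
# Peak bandwidth in GB/s: transfers per second times bus width in bytes.
def peak_gbs(mt_per_s, bus_bits):
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9

igpu = peak_gbs(2933, 128)  # dual-channel DDR4-2933 (2x64-bit): ~46.9 GB/s
dgpu = peak_gbs(7000, 128)  # 7 Gbps GDDR5 on a 128-bit bus:     ~112 GB/s

print(f"iGPU shares ~{igpu:.1f} GB/s with the CPU; "
      f"a small GDDR5 card gets ~{dgpu:.0f} GB/s to itself")
```

And the iGPU does not even get that full figure in practice, since CPU traffic competes for the same channels, which is why faster DIMMs (or HBM, as suggested earlier in the thread) help APUs disproportionately.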
                      Last edited by starshipeleven; 15 February 2018, 01:00 PM.

                      Comment
