AMD APU On Linux: Gallium3D Can Be 80%+ As Fast As Catalyst

  • Phoronix: AMD APU On Linux: Gallium3D Can Be 80%+ As Fast As Catalyst

    After running a 21-way graphics card comparison with Intel, AMD, and NVIDIA GPUs earlier this week, some Phoronix readers requested new APU performance numbers. To close out November, here are new Catalyst vs. Gallium3D driver benchmarks on Ubuntu Linux for the AMD A10-6800K with its Radeon HD 8670D graphics. The results with the latest Linux kernel and Mesa are very positive for the open-source AMD driver: in some tests its performance can nearly match Catalyst! For at least one Source Engine game, the open-source driver can now even run significantly faster than the binary driver.

    http://www.phoronix.com/vr.php?view=19425

  • curaga
    replied
    Originally posted by stqn View Post
    Now if GPUs are overclocking themselves out of the box, then maybe that's what should be benchmarked… But the results will change according to the case ventilation, ambient temperature, and length of the test… fun.
    Both current CPUs and GPUs do it (the "turbo" mode). You're also correct that it affects benchmark results, and many sites have wondered what to do about it.

    Some planned to benchmark only with turbo disabled, but that has two problems: 1) not all cards let you disable it, and 2) then it's not what most users will see.
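    For reproducible runs, one option is to pin the clocks before benchmarking. Here's a minimal sketch in Python, assuming the radeon DPM sysfs knob (power_dpm_force_performance_level) and the acpi-cpufreq boost toggle; the card index and exact paths are assumptions that vary per system, and writing them requires root:

        #!/usr/bin/env python3
        # Sketch: pin CPU/GPU clocks around a benchmark run, then restore them.
        # Assumes the radeon DPM and acpi-cpufreq sysfs interfaces; run as root.
        from pathlib import Path

        GPU_DPM = Path("/sys/class/drm/card0/device/power_dpm_force_performance_level")
        CPU_BOOST = Path("/sys/devices/system/cpu/cpufreq/boost")

        def set_if_present(path, value):
            # Write a sysfs value only if the interface exists on this system.
            if path.exists():
                path.write_text(value + "\n")

        def pin_clocks():
            set_if_present(GPU_DPM, "high")  # force the highest fixed DPM state
            set_if_present(CPU_BOOST, "0")   # disable CPU turbo/boost

        def restore_clocks():
            set_if_present(GPU_DPM, "auto")
            set_if_present(CPU_BOOST, "1")

        if __name__ == "__main__":
            pin_clocks()
            # ... run the benchmark here ...
            restore_clocks()

    The alternative is to leave boost on and report the case and ambient temperature alongside the results, since that is what out-of-the-box users actually get.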

  • stqn
    replied
    Originally posted by Kivada View Post
    Why? Overclocking is pretty trivial these days if you don't have crippled Dell/HP/Acer/Lenovo etc. hardware.

    The top end GPUs are now overclocking themselves based on thermals. The cooler you can keep the GPU the higher it'll push itself.
    - As you said, it may not be possible on all hardware.
    - It's not representative of the out-of-the-box performance that most users will experience.
    - It may require a different cooling solution than the one provided by AMD in order to keep the noise down (though arguably the same can be said of my non-overclocked i3…)
    - Different processors will (AFAIK) overclock differently, so it's not certain that everyone will be able to push this processor to 4.7 GHz.
    - Overclocking makes the processor consume more power and the fan spin faster, which I don't like; I prefer to undervolt my processors and keep them at their stock frequency.

    Now if GPUs are overclocking themselves out of the box, then maybe that's what should be benchmarked… But the results will change according to the case ventilation, ambient temperature, and length of the test… fun.

  • Kivada
    replied
    Originally posted by stqn View Post
    I would have preferred no overclocking.
    Why? Overclocking is pretty trivial these days if you don't have crippled Dell/HP/Acer/Lenovo etc. hardware.

    The top end GPUs are now overclocking themselves based on thermals. The cooler you can keep the GPU the higher it'll push itself.

  • stqn
    replied
    I would have preferred no overclocking.

  • Adarion
    replied
    Mine is diffused in Germany and assembled in Malaysia, but the bug looks interesting. These desktop artifacts are quite a nuisance.
    Thanks for the link.

  • curaga
    replied
    Just in case you didn't know, there's a bug that has been open for a year now, affecting only Black Editions, and only China-assembled ones:
    https://bugs.freedesktop.org/show_bug.cgi?id=60389

  • Adarion
    replied
    Hmm, it's good to see the progress. But apparently you need really recent code. On the openSUSE installation I'm supervising (just upgraded to openSUSE 13.1) there are still problems with the free driver stack and an A6-5400K: glitches, graphic deformations, and garbage on the screen from time to time. And the software components aren't that old (kernel 3.11.something, Mesa 9.2.2, and so on). (I should have installed Gentoo on it anyway, but then installation takes much longer and I can't come over to that box as often for regular syncing and updating.) Fglrx worked "fine" apart from giving me a black screen on any real console (Ctrl-Alt-Fx). It is a bit sad, since I had hoped that in the eight months since the last SuSE release I would be able to use the free driver stack by now. But no, it is still haunted by bugs.

  • Ericg
    replied
    Originally posted by benmoran View Post
    I thought we were talking about APUs? In which case, they are also using DDR3 system RAM.
    Originally posted by Kivada
    Wrong. All AMD APUs use the system RAM as vRAM, and as such the GPU gets bottlenecked by the slower DDR3 RAM as well as having to share that bandwidth with the CPU. Intel is essentially comparing a dedicated GPU in the Iris Pro 5200 to an iGPU in the AMD APUs.
    Just gonna reply to both of you at once. I misread what Luke said: I originally thought he said AMD's "mainstream" GPUs, which would be their dedicated cards, not Intel's "mainstream" GPUs. Sorry for the confusion.

  • Kivada
    replied
    Originally posted by smitty3268 View Post
    I disagree. "Brute forcing" the problem is exactly what they needed to do. It's what their competitors have done - if you buy an AMD or NVidia part, you're buying billions of transistors. Intel always tried to go on the cheap, and they were never going to get better performance until they spent the necessary die space required.
    Wrong. Intel was intentionally held back by the US FTC in the late '90s to prevent it from gaining a total monopoly on all consumer and office computer hardware. Intel had plans to move into the dedicated GPU market, but was stopped after releasing only one card, the i740 AGP card. The i740, while not taking any performance crowns, was a solid lower-midrange GPU for its era; had Intel not been stopped, it wouldn't have been long until they started leveraging the game makers and the Dells of the world into using only their GPUs, forcing the competition out just as they had been doing with their CPUs.

    All said, the eDRAM is just an expensive brute-force method to make a subpar iGPU actually stand a chance by giving it its own memory bandwidth; give the 8670D its own GDDR5 and watch it thoroughly kick the Iris Pro 5200 up and down the block (see the bandwidth sketch below).

    In any case, wait until Kaveri is released if you are looking to get an iGPU system.
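    For a rough sense of the gap being described above, here's a back-of-envelope peak-bandwidth comparison in Python; the dual-channel DDR3-1866 and 128-bit 4.5 GT/s GDDR5 figures are illustrative assumptions, not measurements of any particular card:

        # Theoretical peak bandwidth = transfers per second x bytes per transfer.
        def peak_bandwidth_gbs(transfer_rate_mts, bus_width_bits):
            return transfer_rate_mts * 1e6 * (bus_width_bits / 8) / 1e9

        # Dual-channel DDR3-1866 (128-bit effective bus), shared by CPU and iGPU.
        ddr3 = peak_bandwidth_gbs(1866, 128)   # ~29.9 GB/s
        # A typical 128-bit GDDR5 card at 4.5 GT/s, dedicated to the GPU.
        gddr5 = peak_bandwidth_gbs(4500, 128)  # ~72.0 GB/s

        print(f"DDR3-1866 dual-channel: {ddr3:.1f} GB/s (shared)")
        print(f"GDDR5 128-bit @ 4.5 GT/s: {gddr5:.1f} GB/s (dedicated)")

    And on an APU the CPU is competing for that same ~30 GB/s, so the share left for the iGPU is lower still.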
