AMD APU On Linux: Gallium3D Can Be 80%+ As Fast As Catalyst


  • #46
    Originally posted by Sonadow View Post
    There's also the point of power consumption. Where, traditionally, Intel doesn't just handily beat AMD in this department; it knocks it to the ground and rubs its face in the dirt.

    Which, to some people, is a high enough priority to justify shelling out that small fortune.
    It's not going to use $650 less power. Even the 100W-rated AMD APUs are only just over the 65W bracket; AMD uses the 100W rating so that motherboard manufacturers don't build boards that can't handle something beefier should AMD release it. They already had that problem with 140W CPUs when most motherboards were only designed for 95W, or 125W if you were lucky.

    It just makes it easier for everyone if the motherboard is over-spec, and it adds almost nothing to the cost. Replacing the CPU with a different model doesn't require a reinstall of the OS, but replacing the motherboard with a different model can.
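
    For scale, here's a quick back-of-the-envelope sketch (the usage hours and electricity rate are assumptions, not measurements) of what even the full rated TDP gap would cost per year:

    # Rough yearly electricity cost of a 35 W TDP gap, under assumed usage.
    POWER_DELTA_W = 100 - 65      # rated gap between the two APU brackets
    HOURS_PER_DAY = 8             # assumed desktop usage
    RATE_PER_KWH = 0.12           # assumed $/kWh

    kwh_per_year = POWER_DELTA_W / 1000 * HOURS_PER_DAY * 365
    print(f"{kwh_per_year:.1f} kWh/year -> ${kwh_per_year * RATE_PER_KWH:.2f}/year")
    # ~102.2 kWh/year -> ~$12.26/year, nowhere near a $650 price gap.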

    Comment


    • #47
      Originally posted by Ericg View Post
      Using eDRAM is probably a fairly 'balanced' thing for Intel since they have to rely solely on DDR3 system RAM, meanwhile AMD's cards have 1-2 GB of GDDR5 to play with. And while Intel's "mainstream" GPU isn't the greatest (we'll see how things are with Broadwell vs Kaveri; Broadwell is supposed to do for the GPU what Haswell did for power efficiency), it is definitely very usable for the average user, even a light-gaming user or a movie-watcher. Hell, I'm using -Sandy Bridge- to run 1080p Flash video and it's perfectly fine. I would LOVE to have Broadwell or Skylake in my next laptop...
      I thought we were talking about APUs? In which case, they are also using DDR3 system RAM.


      And not directed at you:
      I like that Intel is trying to improve their graphics, but the top-end Iris stuff seems to be made for only one purpose: winning benchmarks. If the price were magically the same as the Intel i3s, which is what the AMD APUs are really competing against, then Intel would never be able to build these in quantity; they couldn't get enough of the eDRAM.

      To me, it's the same thing as when car manufacturers send their cars to reviewers with the entire option package installed (which usually costs almost as much as the car itself). It looks good in reviews, but it's not what most people will drive off the lot.

      Comment


      • #48
        Originally posted by Ericg View Post
        Using eDRAM is probably a fairly 'balanced' thing for Intel since they have to rely solely on DDR3 system RAM, meanwhile AMD's cards have 1-2 GB of GDDR5 to play with.
        Wrong. All AMD APUs use system RAM as VRAM, so the GPU gets bottlenecked by the slower DDR3 as well as having to share that bandwidth with the CPU. Intel is essentially comparing a dedicated GPU in the Iris Pro 5200 to an iGPU in the AMD APUs.
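
        To put rough numbers on that (the memory speeds below are illustrative assumptions for parts of that era, not figures from this thread):

        # Peak memory bandwidth: transfer rate x bus width x channels.
        def peak_gbs(mts, bus_bits, channels=1):
            return mts * 1e6 * (bus_bits / 8) * channels / 1e9

        # Dual-channel DDR3-1866 (64-bit channels), shared between CPU and iGPU:
        ddr3 = peak_gbs(1866, 64, channels=2)
        # GDDR5 at 4.5 GT/s effective on a 128-bit bus, dedicated to the GPU:
        gddr5 = peak_gbs(4500, 128)

        print(f"DDR3 (shared): {ddr3:.1f} GB/s, GDDR5 (dedicated): {gddr5:.1f} GB/s")
        # ~29.9 GB/s shared vs ~72.0 GB/s dedicated, before the CPU takes its cut.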

        Comment


        • #49
          Originally posted by smitty3268 View Post
          I disagree. "Brute forcing" the problem is exactly what they needed to do. It's what their competitors have done: if you buy an AMD or NVidia part, you're buying billions of transistors. Intel always tried to go cheap, and they were never going to get better performance until they spent the necessary die space.
          Wrong. Intel was intentionally held back by the US FTC to prevent them from gaining a total monopoly on all consumer and office computer hardware back in the late '90s. Intel had plans to move into the dedicated GPU market, but they were stopped after releasing only one card, the i740 AGP card. The i740, while not taking any performance crowns, was a solid lower-midrange GPU for the era. If Intel had not been stopped, it wouldn't have been long until they started leveraging the game makers and the Dells of the world to use only their GPUs, forcing the competition out just as they had been doing with their CPUs.

          All said, the eDRAM is just an expensive brute-force method to make a subpar iGPU actually stand a chance by giving it its own memory bandwidth. Give the 8670D its own GDDR5 and watch it thoroughly kick the Iris Pro 5200 up and down the block.

          In any case, wait until Kaveri is released if you are looking to get an iGPU system.

          Comment


          • #50
            Originally posted by benmoran View Post
            I thought we were talking about APUs? In which case, they are also using DDR3 system ram.
            Originally posted by Kivada View Post
            Wrong. All AMD APUs are using the system ram as vRAM and as such the GPU gets bottlenecked by the slower DDR3 ram as well as having to share that bandwidth with the CPU. Intel is essentially comparing a dedicated GPU in the Iris Pro 5200 to an iGPU in the AMD APUs.
            Just gonna reply to both of you at once. I misread what Luke said originally; I thought he said AMD's "mainstream" GPUs, which would be their dedicated cards, not Intel's "mainstream" GPUs. Sorry for the confusion.

            Comment


            • #51
              Hmm, it's good to see the progress. But apparently you need really recent code. On the SuSE installation I am supervising (just moved to openSuSE 13.1) there are still problems with the free driver stack and an A6-5400K: glitches, graphic deformations, and garbage on the screen from time to time. And the software components aren't that old (kernel 3.11.something, Mesa 9.2.2 and so on). (I should have installed Gentoo on it anyway, but then installation takes much longer and I can't come over to that box so often for regular syncing and updating.) Fglrx worked "fine" apart from giving me a black screen on any real console (Ctrl-Alt-Fx). It is a bit sad, since I had hoped for the 8 months since the last SuSE release that I could use the free driver stack now. But no, it is still haunted by bugs.
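
              When chasing glitches like this, the first thing worth confirming is which driver and Mesa build are actually in use. A minimal sketch (it assumes glxinfo from mesa-utils is installed; Python is just used to wrap the calls):

              import subprocess

              def run(cmd):
                  """Run a command and return its stdout as text."""
                  return subprocess.run(cmd, capture_output=True, text=True).stdout

              print("Kernel:", run(["uname", "-r"]).strip())
              for line in run(["glxinfo"]).splitlines():
                  # The renderer string reveals whether Gallium3D or fglrx is active;
                  # the version string shows the exact Mesa release.
                  if "OpenGL renderer" in line or "OpenGL version" in line:
                      print(line.strip())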

              Comment


              • #52
                Just in case you didn't know, there's a bug that has been open for a year now, only on Black Editions, and only on China-assembled ones:
                https://bugs.freedesktop.org/show_bug.cgi?id=60389

                Comment


                • #53
                  Mine was diffused in Germany and assembled in Malaysia, but the bug looks interesting. These desktop artifacts are quite a nuisance.
                  Thanks for the link.

                  Comment


                  • #54
                    I would have preferred no overclocking.

                    Comment


                    • #55
                      Originally posted by stqn View Post
                      I would have preferred no overclocking.
                      Why? Overclocking is pretty trivial these days if you don't have crippled Dell/HP/Acer/Lenovo etc. hardware.

                      The top-end GPUs now overclock themselves based on thermals: the cooler you can keep the GPU, the higher it'll push itself.
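
                      You can watch this happen yourself. A minimal sketch (the sysfs/debugfs paths are assumptions; they vary by driver and kernel version, and the debugfs file needs root):

                      import glob, time

                      # hwmon temperature exposed by the radeon KMS driver (millidegrees C)
                      temp_paths = glob.glob("/sys/class/drm/card0/device/hwmon/hwmon*/temp1_input")
                      PM_INFO = "/sys/kernel/debug/dri/0/radeon_pm_info"  # reports current sclk/mclk

                      for _ in range(5):
                          if temp_paths:
                              with open(temp_paths[0]) as f:
                                  print("GPU temp:", int(f.read()) / 1000, "C")
                          try:
                              with open(PM_INFO) as f:
                                  print(f.read().strip())
                          except OSError:
                              print("radeon_pm_info not available on this driver")
                          time.sleep(1)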

                      Comment


                      • #56
                        Originally posted by Kivada View Post
                        Why? Overclocking is pretty trivial these days if you don't have crippled Dell/HP/Acer/Lenovo etc. hardware.

                        The top-end GPUs now overclock themselves based on thermals: the cooler you can keep the GPU, the higher it'll push itself.
                        - As you said, it may not be possible on all hardware.
                        - It's not representative of the out-of-the-box performance that most users will experience.
                        - It may require a different cooling solution than the one provided by AMD in order to keep the noise down (though arguably the same can be said of my non-overclocked i3…).
                        - Different processors will (AFAIK) have different overclocking headroom, so it's not certain that everyone will be able to overclock this processor to 4.7 GHz.
                        - Overclocking makes the processor consume more power and the fan spin faster, which I don't like; I prefer to undervolt my processors and keep them at their normal frequency.

                        Now if GPUs are overclocking themselves out of the box, then maybe that's what should be benchmarked… But the results will change according to the case ventilation, the ambient temperature, and the length of the test… fun.

                        Comment


                        • #57
                          Originally posted by stqn View Post
                          Now if GPUs are overclocking themselves out of the box, then maybe that's what should be benchmarked… But the results will change according to the case ventilation, the ambient temperature, and the length of the test… fun.
                          Both current CPUs and GPUs do it (the "turbo" mode). You're also correct that it affects benchmark results, and many sites have wondered what to do about that.

                          Some planned to benchmark only with turbo disabled, but that has two problems: 1) not all cards let you disable it, and 2) then it's not what most users will see.
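
                          On the radeon/amdgpu KMS drivers there is at least a sysfs knob for pinning clocks during a run. A sketch (the card path is an assumption, and writing to it needs root):

                          LEVEL = "/sys/class/drm/card0/device/power_dpm_force_performance_level"

                          def set_level(level):
                              """level: 'auto' (normal boost), 'low', or 'high' (pinned)."""
                              with open(LEVEL, "w") as f:
                                  f.write(level)

                          set_level("high")   # pin clocks for the benchmark run
                          # ... run the benchmark here ...
                          set_level("auto")   # restore normal boost behaviour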

                          Comment
