AMD APU On Linux: Gallium3D Can Be 80%+ As Fast As Catalyst

  • #41
    Originally posted by mmstick View Post
    Not much; most of the energy consumed in a laptop is actually from the display.
    That's not necessarily true, especially when idling (but with the screen on). CPUs have gotten so much better about getting to lower and lower energy states. Looking forward to the, hopefully not too far off, day when CPUs can reliably call upon near-threshold voltages.

    • #42
      Originally posted by Sonadow View Post
      off-topic: How the heck Intel suddenly caught up so quickly in the graphics department still remains a mystery.
      It wasn't quick at all. They've been trying for 5 years now and still haven't really caught up; they're just in the competitive neighborhood now.

      • #43
        Originally posted by smitty3268 View Post
        It wasn't quick at all. They've been trying for 5 years now and still haven't really caught up; they're just in the competitive neighborhood now.
        I wouldn't really call "We brute-forced the problem so that we could finally claim we beat AMD, with extremely high-bin, limited-run chips designed solely for that purpose, while our mainstream GPU is still actually far behind", and then basically failing at that, being "in the competitive neighborhood", even.

        • #44
          Originally posted by Luke_Wolf View Post
          I wouldn't really call "We brute-forced the problem so that we could finally claim we beat AMD, with extremely high-bin, limited-run chips designed solely for that purpose, while our mainstream GPU is still actually far behind", and then basically failing at that, being "in the competitive neighborhood", even.
          Using eDRAM is probably a fairly 'balanced' thing for Intel, since they have to rely solely on DDR3 system RAM, while AMD's cards have 1-2 GB of GDDR5 to play with. And while Intel's "mainstream" GPU isn't the greatest (we'll see how things are with Broadwell vs Kaveri; Broadwell is supposed to do for the GPU what Haswell did for power efficiency), it is definitely very usable for the average user, even a light-gaming user or a movie-watcher. Hell, I'm using -Sandy Bridge- to run 1080p Flash video and it's perfectly fine. I would LOVE to have Broadwell or Skylake in my next laptop...
          All opinions are my own, not those of my employer, if you know who they are.

          • #45
            Originally posted by Luke_Wolf View Post
            I wouldn't really call "We brute-forced the problem so that we could finally claim we beat AMD, with extremely high-bin, limited-run chips designed solely for that purpose, while our mainstream GPU is still actually far behind", and then basically failing at that, being "in the competitive neighborhood", even.
            I disagree. "Brute forcing" the problem is exactly what they needed to do. It's what their competitors have done: if you buy an AMD or Nvidia part, you're buying billions of transistors. Intel always tried to go on the cheap, and they were never going to get better performance until they spent the die space required.

            And their 4600 GPU is still in the neighborhood of being competitive with the extreme low-end parts it's up against. No gamer would ever want it, but someone who plays a bit of The Sims occasionally would be OK. We'll see where they go from here, but over the last 5 years they've gone from embarrassing, to just terrible, to very bad, to bad, to nearly not bad. Hopefully that trajectory continues, as I'd like to see 3 competitors instead of just 2.

            • #46
              Originally posted by Sonadow View Post
              There's also the point of power consumption. Where, traditionally, Intel doesn't just handily beat AMD in this department; it knocks it to the ground and rubs its face in the dirt.

              Which, to some people, is a high enough priority to justify shelling out that small fortune.
              It's not going to use $650 less power. Even the 100W-rated AMD APUs are just barely over the 65W bracket; AMD just rates them at 100W so that the mobo manufacturers don't build boards that can't handle something beefier should AMD release it. They already had that problem with 140W CPUs when most mobos were only designed for 95W, or 125W if you were lucky.

              It just makes it easier for everyone if the mobo is over-spec, and it adds almost nothing to the cost. Replacing the CPU with a different model doesn't require a reinstall of the OS, but replacing the mobo with a different model can.
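
              Just to put rough numbers on that point, here's a back-of-envelope sketch; the wattage gap, daily usage, and electricity price below are all assumptions pulled out of the air for illustration, not measured figures:

              # Back-of-envelope: how long would a lower-power chip take to pay back a
              # price premium through electricity savings? Every input is an assumption.
              premium_usd = 650.0      # hypothetical price premium being discussed
              watts_saved = 35.0       # assumed average difference in power draw
              hours_per_day = 8.0      # assumed daily usage
              usd_per_kwh = 0.12       # assumed electricity price

              kwh_saved_per_year = watts_saved / 1000 * hours_per_day * 365
              usd_saved_per_year = kwh_saved_per_year * usd_per_kwh
              years_to_break_even = premium_usd / usd_saved_per_year

              print(f"~${usd_saved_per_year:.2f} saved per year, "
                    f"~{years_to_break_even:.0f} years to break even")

              Even with generous assumptions you're saving on the order of ten dollars a year, so a big price premium never comes back as electricity savings.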

              • #47
                Originally posted by Ericg View Post
                Using eDRAM is probably a fairly 'balanced' thing for Intel, since they have to rely solely on DDR3 system RAM, while AMD's cards have 1-2 GB of GDDR5 to play with. And while Intel's "mainstream" GPU isn't the greatest (we'll see how things are with Broadwell vs Kaveri; Broadwell is supposed to do for the GPU what Haswell did for power efficiency), it is definitely very usable for the average user, even a light-gaming user or a movie-watcher. Hell, I'm using -Sandy Bridge- to run 1080p Flash video and it's perfectly fine. I would LOVE to have Broadwell or Skylake in my next laptop...
                I thought we were talking about APUs? In which case, they are also using DDR3 system RAM.


                And not directed at you:
                I like that Intel is trying to improve their graphics, but the top-end Iris stuff seems to be made for only one purpose: winning benchmarks. If the price were magically the same as the Intel i3s, which is what the AMD APUs are really competing against, then Intel would never be able to build these in quantity. They couldn't get enough of the RAM.

                To me, it's the same thing as when car manufacturers send their cars to reviewers complete with the entire option package installed (which usually costs almost as much as the car itself). It looks good in reviews, but it's not what most people will drive off the lot.

                • #48
                  Originally posted by Ericg View Post
                  Using eDRAM is probably a fairly 'balanced' thing for Intel, since they have to rely solely on DDR3 system RAM, while AMD's cards have 1-2 GB of GDDR5 to play with.
                  Wrong. All AMD APUs use system RAM as VRAM, so the GPU gets bottlenecked by the slower DDR3 and also has to share that bandwidth with the CPU. Intel is essentially comparing a dedicated GPU in the Iris Pro 5200 to an iGPU in the AMD APUs.
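
                  To put rough numbers on the bandwidth gap (peak figures computed from assumed, typical specs, not benchmarks):

                  def peak_bw_gbs(mega_transfers_per_s, bus_width_bits):
                      # theoretical peak bandwidth in GB/s = transfer rate x bus width in bytes
                      return mega_transfers_per_s * 1e6 * (bus_width_bits / 8) / 1e9

                  # dual-channel DDR3-1866 (128-bit combined), shared between CPU and iGPU
                  ddr3_shared = peak_bw_gbs(1866, 128)       # ~29.9 GB/s, minus whatever the CPU eats
                  # a modest GDDR5 card: 128-bit bus at an assumed 4.5 GT/s effective
                  gddr5_dedicated = peak_bw_gbs(4500, 128)   # ~72 GB/s, all for the GPU

                  print(f"shared DDR3-1866: {ddr3_shared:.1f} GB/s")
                  print(f"dedicated GDDR5:  {gddr5_dedicated:.1f} GB/s")

                  So even before the CPU contends for it, the iGPU starts with well under half the raw bandwidth of a cheap GDDR5 card.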

                  • #49
                    Originally posted by smitty3268 View Post
                    I disagree. "Brute forcing" the problem is exactly what they needed to do. It's what their competitors have done: if you buy an AMD or Nvidia part, you're buying billions of transistors. Intel always tried to go on the cheap, and they were never going to get better performance until they spent the die space required.
                    Wrong. Intel was intentionally held back by the US FTC to prevent them from gaining a total monopoly on all consumer and office computer hardware back in the late '90s. Intel had plans to move into the dedicated GPU market, but they were stopped after releasing only one card, the i740 AGP card. The i740, while not taking any performance crowns, was a solid lower-midrange GPU for the era. If Intel had not been stopped, it wouldn't have been long until they started leveraging the game makers and the Dells of the world to use only their GPUs, forcing the competition out just as they had been doing with their CPUs.

                    All said, the eDRAM is just an expensive brute-force method to make a subpar iGPU actually stand a chance by giving it its own memory bandwidth. Give the 8670D its own GDDR5 and watch it thoroughly kick the Iris Pro 5200 up and down the block.

                    In any case, wait until Kaveri gets released if you are looking to get an iGPU system.

                    • #50
                      Originally posted by benmoran View Post
                      I thought we were talking about APUs? In which case, they are also using DDR3 system RAM.
                      Originally posted by Kivada View Post
                      Wrong. All AMD APUs use system RAM as VRAM, so the GPU gets bottlenecked by the slower DDR3 and also has to share that bandwidth with the CPU. Intel is essentially comparing a dedicated GPU in the Iris Pro 5200 to an iGPU in the AMD APUs.
                      Just gonna reply to both of you at once. I misread what Luke said: originally I thought he said AMD's "mainstream" GPUs, which would be their dedicated cards, not Intel's "mainstream" GPUs. Sorry for the confusion.
                      All opinions are my own, not those of my employer, if you know who they are.
