15-Way Open-Source Intel/AMD/NVIDIA GPU Comparison


  • #16
    Originally posted by chrisb View Post
    A lot of people care about money too. AMD looks much better if the metric is performance per dollar: http://www.cpubenchmark.net/cpu_value_available.html
    Yes, if you go for the mid-range (completely reasonable for a gaming/general-purpose PC), then AMD CPUs give you more bang for your buck.

    And as far as the data you quoted: don't forget that Intel mobos are also more expensive (or much, much more expensive if you want multiple x16 PCIe slots and 6+ SATA ports), so the gap is even wider.
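
    For anyone who wants to run the numbers themselves, here is a rough sketch of the metric being discussed; the scores and prices are placeholder values, not figures from the linked page:

    # Performance-per-dollar sketch in Python. The benchmark scores and
    # prices below are placeholders for illustration only; plug in real
    # figures from cpubenchmark.net and your local retailer.
    platforms = {
        # name: (benchmark_score, cpu_price_usd, motherboard_price_usd)
        "amd_mid_range":   (6500, 120, 70),
        "intel_mid_range": (7200, 200, 110),
    }

    for name, (score, cpu_price, mobo_price) in platforms.items():
        total = cpu_price + mobo_price
        print(f"{name}: {score / total:.1f} points per dollar "
              f"(score {score}, platform cost ${total})")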



    • #17
      Originally posted by curaga View Post
      Intel is very stable if you stick to the versions Intel recommends (eg, use Ubuntu).
      https://01.org/linuxgraphics/downloads

      Or use Fedora.



      • #18
        Originally posted by curaga View Post
        Intel is very stable if you stick to the versions Intel recommends (eg, use Ubuntu). If you deviate, intel starts to be far more unstable than radeon.
        I'm an Arch Linux user, and both GNOME and KDE (and games such as Xonotic and HoN) work without a hitch on my Ivy Bridge 3570K with stable Mesa.



        • #19
          What's up with the 6870 versus the 6950?

          The 6950 should have better performance from a hardware perspective, i.e. double the RAM and more stream processors, but in almost every benchmark in the article the 6870 seems faster. Any idea why?



          • #20
            Originally posted by Tillin9 View Post
            The 6950 should have better performance from a hardware perspective, i.e. double the RAM and more stream processors, but in almost every benchmark in the article the 6870 seems faster. Any idea why?
            Don't know for sure, but the two most likely possibilities seem to be (a) the 6950 has a lower clock and the same number of ROPs, so lower raw fill rate, and (b) the shader translator/compiler might not be as well optimized for VLIW4 as for VLIW5.

            EDIT -- didn't see an indication in the article of whether Vadim's shader backend compiler was used; I guess it's whatever the default is in F19 (and I don't know what the default is).
            Last edited by bridgman; 07-02-2013, 12:12 AM.
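
            To make point (a) concrete, peak pixel fill rate is roughly ROP count times core clock. The numbers below are the reference specs as I remember them (32 ROPs at ~900 MHz for the 6870, 32 ROPs at ~800 MHz for the 6950), not values taken from the article, so treat this as a sketch:

            # Peak pixel fill rate ~= ROPs * core clock (one pixel per ROP per clock).
            # Clock and ROP counts are assumed reference specs, not measured values.
            cards = {
                "HD 6870": {"rops": 32, "clock_mhz": 900},  # Barts XT, VLIW5
                "HD 6950": {"rops": 32, "clock_mhz": 800},  # Cayman Pro, VLIW4
            }

            for name, c in cards.items():
                gpix_per_s = c["rops"] * c["clock_mhz"] / 1000
                print(f"{name}: ~{gpix_per_s:.1f} GPixel/s peak fill rate")
            # Same ROP count but a lower clock leaves the 6950 with less raw fill rate.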



            • #21
              Originally posted by curaga View Post
              Intel is very stable if you stick to the versions Intel recommends (eg, use Ubuntu). If you deviate, intel starts to be far more unstable than radeon.

              I can only guess that radeon gets testing from a wider base, whereas everyone at the Intel OSTC is forced to upgrade in lockstep :P
              I have been playing GW2 through Wine on Arch Linux with a nearly-latest compiled kernel and driver source for a while now and have never had any problems. I love Intel. Might have to upgrade to the new CPU, though, to get more than 20 FPS.



              • #22
                Arch people, "eg" means "for example". It does not mean Intel only works on Ubuntu.

                It all depends on how different the versions you use are from those Intel has tested. Saying you run Arch is not useful info there; useful info would be "Intel says use these versions, but my X server is foo instead, and my kernel is bar instead".

                Still, I'm far from the only one with these experiences; search this forum if you need more examples.



                • #23
                  Originally posted by Ericg View Post
                  If the only thing you care about is GPU performance...yes. But Intel smacks AMD around on CPU performance.
                  You appear to be on some seriously bad drugs, so I'll correct things for you:

                  Intel: Great CPU performance, better power consumption, perfectly workable and acceptable GPU performance (you won't play games on high, but low and maybe even medium should be okay), video decode support.
                  Intel: Very, very, very overpriced CPUs with no to negligible benefit.

                  AMD: Okay-ish CPU performance, worse power consumption (REALLY bad until you start running DPM kernels if your GPU is integrated), good and acceptable GPU performance, no video decode support YET -- you need kernel 3.10 and Mesa 9.2/10.0.
                  AMD: Amazing CPUs with 99.99% of Intel's performance for 25% of the price. No-brainer.
                  Best GPU performance of all.
                  Video decode acceleration supported by the open-source drivers -- aka "out of the box".

                  Personally, I'm sticking to Intel integrated graphics unless I really need a discrete card, and at that point I'll get an AMD discrete card, not an AMD integrated one.
                  Your loss....



                  • #24
                    Originally posted by Ibidem View Post
                    What on earth is up with the Radeon HD 6450?
                    It's all about memory bandwidth. The 6450 probably has DDR3 memory rather than GDDR5.



                    • #25
                      Yeah, but so does the Intel integrated GPU: it uses system memory, yet gets many times the FPS.



                      • #26
                        Originally posted by curaga View Post
                        Yeah, but so does the Intel integrated, using system memory, yet having many times higher fps.
                        Single channel vs. dual channel.
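
                        A rough back-of-the-envelope for why this matters so much. The memory types and transfer rates below are typical values I'm assuming, not specs pulled from the cards in the article:

                        # Peak memory bandwidth ~= (bus width in bytes) * (effective transfer rate).
                        # Transfer rates below are assumed typical values, not measured specs.
                        def bandwidth_gb_s(bus_bits, mt_per_s):
                            return bus_bits / 8 * mt_per_s / 1000

                        configs = {
                            "6450, 64-bit DDR3-1600":         bandwidth_gb_s(64, 1600),
                            "6450, 64-bit GDDR5 @ 3200 MT/s": bandwidth_gb_s(64, 3200),
                            "IGP, dual-channel DDR3-1600":    bandwidth_gb_s(128, 1600),
                        }

                        for name, bw in configs.items():
                            print(f"{name}: ~{bw:.1f} GB/s")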



                        • #27
                          The 6450 very likely has more than one DDR chip in there. How many pennies does it save to wire them single-channel vs. dual?

                          If it were one memory chip, the savings would be clear, but with multiple chips I don't see who thought it was a good idea to pinch pennies there.



                          • #28
                            Originally posted by curaga View Post
                            The 6450 very likely has more than one DDR chip in there. How many pennies does it save to wire them in single channel vs dual?

                            If it were one mem chip, then the savings would be clear, but with multiple chips I don't see who thought it was a good idea to pinch pennies there.
                            You also need additional logic in the memory controller, and don't forget about the interconnects between die and packaging, which need a lot of space. The PCB becomes more complex as well. It's a fact that this GPU only has a 64-bit memory interface; it almost sounds like you are doubting that?
                            Last edited by brent; 07-02-2013, 04:53 PM.



                            • #29
                              Originally posted by brent View Post
                              You also need additional logic in the memory controller and don't forget about the interconnects between die and packaging, which need a lot of space. And the PCB becomes more complex. It's a fact this GPU only has a 64 bit memory interface; it almost sounds like you are doubting that?
                              Also, I'm pretty sure each individual chip has only a 16-bit interface, so it would take at least 4 chips to get a 64-bit bus.



                              • #30
                                The other thing to remember is that integrated GPUs are getting more powerful every year -- you can't automatically assume that older dGPUs are more powerful than integrated GPUs any more. AFAIK the Intel HD 4600 has the same number of shader ALUs as the HD 6450 dGPU and a higher engine clock.

                                I *think* the ROPs on the HD 4600 are 4 pixels wide (and there are 2 ROPs), so 2x the 6450 there. Combine that with wider/faster memory as well (128-bit vs 64-bit), and it seems to me that the HD 4600 *should* be faster than the HD 6450.

                                The GPU in Trinity/Richland (>2x the ALU count, wider ROPs, dual-channel memory) is a better comparison.
                                Last edited by bridgman; 07-02-2013, 10:36 PM.
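
                                A rough sketch of how those differences stack up on paper. The ALU counts, ROP widths, clocks, and memory speeds are assumptions taken from the post above and from reference specs as I remember them, not verified numbers:

                                # Theoretical-throughput sketch; all figures below are assumed, not verified.
                                gpus = {
                                    "HD 6450": dict(alus=160, rop_px_per_clk=4, clock_mhz=625, bus_bits=64, mem_mt_s=1600),
                                    "HD 4600": dict(alus=160, rop_px_per_clk=8, clock_mhz=1200, bus_bits=128, mem_mt_s=1600),
                                }

                                for name, g in gpus.items():
                                    gflops = g["alus"] * 2 * g["clock_mhz"] / 1000       # MAD = 2 flops/clock
                                    fill = g["rop_px_per_clk"] * g["clock_mhz"] / 1000   # GPixel/s
                                    bw = g["bus_bits"] / 8 * g["mem_mt_s"] / 1000        # GB/s
                                    print(f"{name}: ~{gflops:.0f} GFLOPS, ~{fill:.1f} GPixel/s, ~{bw:.1f} GB/s")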

