15-Way Open-Source Intel/AMD/NVIDIA GPU Comparison


  • #21
    Originally posted by curaga View Post
    Intel is very stable if you stick to the versions Intel recommends (e.g., use Ubuntu). If you deviate, Intel starts to be far more unstable than Radeon.

    I can only guess that Radeon gets testing from a wider base, whereas everyone at the Intel OSTC is forced to upgrade in lockstep :P
    I have been playing GW2 through Wine on Arch Linux with a near-latest kernel and driver compiled from source for a while now, and I have never had any problems. I love Intel. I might have to upgrade to a newer CPU, though, to get more than 20 FPS.



    • #22
      Arch people, "e.g." means "for example". It does not mean Intel only works on Ubuntu.

      It all depends on how different the versions you use are from those Intel has tested. Saying you run Arch is not useful info there; useful info would be "Intel says use these versions, but my X server is foo instead, and my kernel is bar instead".

      Still, I'm far from the only one with these experiences; search this forum if you need more examples.



      • #23
        Originally posted by Ericg View Post
        If the only thing you care about is GPU performance...yes. But Intel smacks AMD around on CPU performance.
        You appear to be on some seriously bad drugs, so I'll correct things for you:

        Intel: Great CPU performance, better power consumption, perfectly workable and acceptable GPU performance (you won't play games on high, but low and maybe even medium should be okay), video decode support.
        Intel: Very, very, very overpriced CPUs with no or negligible benefit.

        AMD: Okayish CPU performance, worse power consumption (REALLY bad until you start running DPM kernels if your GPU is integrated), good and acceptable GPU performance, no video decode support YET -- you need kernel 3.10 and Mesa 9.2/10.0.
        AMD: Amazing CPUs with 99.99% of Intel's performance for 25% of the price. No-brainer.
        Best GPU performance of all.
        Video decode acceleration supported by the open-source drivers -- aka "out of the box".

        Personally, I'm sticking with Intel integrated graphics unless I really need a discrete card, and at that point I'll get an AMD discrete card, not an AMD integrated one.
        Your loss....



        • #24
          Originally posted by Ibidem View Post
          What on earth is up with the Radeon HD 6450?
            It's all about the memory bandwidth. The 6450 probably has DDR3 memory rather than GDDR5 memory.
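
            Peak theoretical bandwidth is just bus width times effective transfer rate, so the gap is easy to sanity-check. A minimal sketch in Python; the transfer rates below are typical values for the two 6450 variants, not confirmed board specs:

                # Peak theoretical bandwidth: (bus_width / 8) bytes per transfer
                # times the effective transfer rate in MT/s, giving GB/s.
                def bandwidth_gb_s(bus_width_bits, effective_mt_s):
                    return bus_width_bits / 8 * effective_mt_s / 1000

                # Both 6450 variants have a 64-bit bus; only the memory type differs.
                print(bandwidth_gb_s(64, 1600))  # DDR3-1600 variant: 12.8 GB/s
                print(bandwidth_gb_s(64, 3200))  # GDDR5 at 800 MHz (quad-pumped): 25.6 GB/s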



          • #25
            Yeah, but the Intel integrated GPU also uses plain system memory, yet gets many times the FPS.



            • #26
              Originally posted by curaga View Post
              Yeah, but the Intel integrated GPU also uses plain system memory, yet gets many times the FPS.
              Single channel vs. dual channel.
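
              The same arithmetic shows why channel count matters so much for an IGP. A rough sketch, assuming DDR3-1600 system memory (each DDR3 channel is 64 bits wide):

                  # Dual-channel memory doubles the effective bus width.
                  def igp_bandwidth_gb_s(channels, effective_mt_s, channel_bits=64):
                      return channels * channel_bits / 8 * effective_mt_s / 1000

                  print(igp_bandwidth_gb_s(1, 1600))  # single channel: 12.8 GB/s
                  print(igp_bandwidth_gb_s(2, 1600))  # dual channel:   25.6 GB/s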



              • #27
                The 6450 very likely has more than one DDR chip in there. How many pennies does it save to wire them in single channel vs dual?

                If it were one memory chip, the savings would be clear, but with multiple chips I don't see why anyone thought it was a good idea to pinch pennies there.



                • #28
                  Originally posted by curaga View Post
                  The 6450 very likely has more than one DDR chip in there. How many pennies does it save to wire them in single channel vs dual?

                  If it were one memory chip, the savings would be clear, but with multiple chips I don't see why anyone thought it was a good idea to pinch pennies there.
                  You also need additional logic in the memory controller, and don't forget the interconnects between die and package, which take a lot of space. The PCB becomes more complex, too. It's a fact that this GPU only has a 64-bit memory interface; it almost sounds like you're doubting that?
                  Last edited by brent; 02 July 2013, 04:53 PM.



                  • #29
                    Originally posted by brent View Post
                    You also need additional logic in the memory controller, and don't forget the interconnects between die and package, which take a lot of space. The PCB becomes more complex, too. It's a fact that this GPU only has a 64-bit memory interface; it almost sounds like you're doubting that?
                    Also, I'm pretty sure each individual chip has only a 16-bit interface, so it would take at least four chips to get a 64-bit bus.
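
                    The arithmetic there is just chip count times per-chip I/O width. A tiny sketch; the 16-bit-per-chip figure is my assumption from above (commodity DDR3 parts also come in x8):

                        # Total bus width = number of memory chips x each chip's I/O width.
                        def bus_width_bits(num_chips, bits_per_chip):
                            return num_chips * bits_per_chip

                        print(bus_width_bits(4, 16))  # four x16 chips -> 64-bit bus, as on the 6450
                        print(bus_width_bits(8, 16))  # eight chips would be needed for 128-bit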



                    • #30
                      The other thing to remember is that integrated GPUs are getting more powerful every year -- you can't automatically assume that older dGPUs are more powerful than integrated GPUs any more. AFAIK the Intel HD 4600 has the same number of shader ALUs as the HD 6450 dGPU and a higher engine clock.

                      I *think* the ROPs on the HD 4600 are 4 pixels wide (and there are 2 ROPs) so 2x the 6450 there. Combine that with wider / faster memory as well (128-bit vs 64-bit) and it seems to me that the HD 4600 *should* be faster than the HD 6450.
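
                      If you want rough numbers on that, here's a back-of-the-envelope sketch using the figures above; the engine clocks are my ballpark assumptions (Haswell GT2 boost around 1200 MHz, 6450 around 750 MHz), not official specs:

                          # Theoretical peak pixel fill rate: pixels per clock times engine clock.
                          def fill_rate_gpix_s(pixels_per_clock, clock_mhz):
                              return pixels_per_clock * clock_mhz / 1000

                          # HD 4600: 2 ROPs x 4 pixels wide = 8 px/clk (if I have that right)
                          print(fill_rate_gpix_s(8, 1200))  # ~9.6 Gpix/s
                          # HD 6450: 4 px/clk
                          print(fill_rate_gpix_s(4, 750))   # ~3.0 Gpix/s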

                      The GPU in Trinity/Richland (>2x the ALU count, wider ROPs, 2 channel memory) is a better comparison.
                      Last edited by bridgman; 02 July 2013, 10:36 PM.

