New AMD Radeon "Polaris" GPU Details Revealed

  • #41
    Originally posted by bug77 View Post

    Not so. The CPU in more recent Intel chips really does use less power. It's just that Intel has been working on pumping up the iGPU, hence the TDP for the whole package stays about the same.
    Even considering the drop from 22nm (Haswell) to 14nm (Skylake), power consumption is still considerably higher than such a shrink would suggest.
    http://techreport.com/review/28751/i...sor-reviewed/5

    At the end of the review, he measures something I rarely see other benchmarkers measure, which he calls "task energy" (roughly, average power multiplied by the time taken to complete a fixed workload). It feels very much like an attempt to protect the Intel brand name. Regardless, even with that endnote trying to save the i7-6700's reputation, the review still shows that the drop from 22nm to 14nm did nothing for power consumption and TDP (if anything, it made things worse).

    I'm not pulling my statement out of thin air. Intel hasn't made its 14nm fabrication efficient enough considering the die-size drop, and my point is that the same can happen with AMD's Polaris.

    Please don't hype yourselves into expecting beautiful numbers from the Linux driver when it does release. I hope for the best, but always prepare for the worst.

    • #42
      As I understand it, Fiji Nano gives 45-50 GFLOPS per watt. A Polaris Nano must then be around 120, or roughly 8 watts per TFLOP. So in my opinion they should target a single 3-4 TFLOP GPU for laptops, without dual-GPU jokes (just use the dGPU as on desktops, with a Radeon mini-board and internal HDMI), without many models, and with easy GPU upgrades. Then two for desktops, one at 5-8 TFLOPs and one at 15-20, as in the old days, with one exception: no low-end GPUs any more. All of this at the right prices, like $150 for the small ones and $350 for the big one. There was a time when the best X1970 cost just $200. Also, if they have a new CPU, they'd better combine it with one of the above.
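      A quick back-of-the-envelope check of that arithmetic (a rough sketch only; the ~2.5x efficiency factor is simply what the 45-50 → ~120 GFLOPS/W figures above imply, not an announced spec):

```python
# Rough perf-per-watt arithmetic for the Fiji Nano -> Polaris Nano comparison above.
# Assumption: Polaris lands at roughly 2.5x Fiji's efficiency, as the post's own numbers imply.
fiji_gflops_per_watt = 47.5                            # midpoint of the 45-50 estimate
polaris_gflops_per_watt = fiji_gflops_per_watt * 2.5   # ~119 GFLOPS/W
watts_per_tflop = 1000.0 / polaris_gflops_per_watt     # ~8.4 W per TFLOP

# Board power implied for the hypothetical parts mentioned in the post
for tflops in (4, 8, 20):
    print(f"{tflops:>2} TFLOPS -> ~{tflops * watts_per_tflop:.0f} W")
```

      Note this counts GPU compute only; memory and board overhead would push the real TDP somewhat higher.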
      Last edited by artivision; 04 January 2016, 05:17 PM.

      • #43
        Originally posted by sabun View Post
        To my understanding, the part about 4K HEVC/H.265 encode/decode support relates directly to VCE. We don't have any support for this on Linux currently (I believe there are parts of VCE support in the Mesa driver, but seemingly not enough to be usable, as FFmpeg still doesn't support it). Feel free to correct me if I'm wrong here.
        We have supported VCE-based video encode on Linux for a while now via OpenMAX on GStreamer.
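        For anyone who wants to try it, here is a minimal sketch of driving that OpenMAX encode path from GStreamer's Python bindings. It assumes the gst-omx plugin is installed and exposes an omxh264enc element backed by Mesa's OpenMAX/VCE support; the exact element name can vary between distributions.

```python
# Minimal sketch: H.264 encode through GStreamer's OpenMAX element (VCE on AMD hardware).
# Assumes gst-omx provides 'omxh264enc' on this system; adjust the element name if needed.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch(
    "videotestsrc num-buffers=300 ! video/x-raw,width=1280,height=720 "
    "! videoconvert ! omxh264enc ! h264parse ! mp4mux ! filesink location=vce-test.mp4"
)
pipeline.set_state(Gst.State.PLAYING)

# Block until the stream finishes or an error is reported, then tear the pipeline down.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```

        The same pipeline description can also be passed to gst-launch-1.0 for a quick command-line test.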

        • #44
          Originally posted by kaprikawn View Post
          But that's out-of-the-box performance when nothing supports DX12. DirectX 11 is what's current, so Nvidia has the best DX11 driver. When DX12 is actually being used, Nvidia will develop a driver geared towards that technology.
          People keep saying that since no DX12 game exists we should dismiss DX12, even though we've had DX12 benchmarks for a while now and so far Nvidia is losing badly. And we know why: Nvidia doesn't have async compute. Also, Vulkan isn't that far off, and once it's released I'm sure Valve will update their games to support it, given that anyone with fairly modern PC hardware can run Valve's games.

          As for Nvidia developing a better DX12 driver, don't lose sleep over it. Their async compute is just a software implementation of what should be hardware. But yes, they do have the best DX11 driver, and the best OpenGL driver, and that translates very well to Linux, unlike AMD's. What I'm hoping is that Vulkan takes away the driver advantage Nvidia has, since Vulkan, like DX12, is less about driver optimizations.
          Also, I was talking about performance per watt, not straight-up performance.
          I know, and I don't care. Unless you run your graphics card at max 24/7, the wattage doesn't matter, and at that point you're either playing too much Fallout 4 or you're a Bitcoin farmer.
          I've got a fairly strong gaming card in my Windows PC (GTX 770) and it's barely audible. I have no interest in raw performance if the card sounds like a Harrier jump jet taking off, which is the case with the 7970 in my Linux box (not directly comparable, I know, but they're fairly close in performance). If I'm right and Nvidia can deliver similar performance with a quieter card, then I'll be using them (even if it comes at a premium, PC master race and all that).
          Cooling on a card is something you take up with the card manufacturer, like Asus, XFX, or whoever. If the cooler sounds like a Harrier jet, that isn't AMD's fault, just like Nvidia wasn't at fault when EVGA used that mess of a cooler on their 970s. A lot more goes into a graphics card than just AMD and Nvidia.

          • #45
            I wonder if Intel's decision not to offer Iris Pro on consumer 6th-gen desktop CPUs will help AMD's cause... or is there some kind of major flaw in Iris Pro I haven't heard of?

            • #46
              Originally posted by blackout23 View Post


              In the benchmarks I have seen, a Fury X was still around 10% slower than a 980 Ti at both 1080p and 4K with DirectX 12. It only made up for AMD's terrible DirectX 11 drivers.
              That all depends on whose benchmarks you're looking at. TechReport used a 980 Ti that was already factory-overclocked by the card manufacturer, while ExtremeTech showed the Fury X actually coming out faster; both are based on Fable Legends. Regardless, overall the Fury cards are not worth buying because the 980 Ti is just a better buy. Also, the 980 Ti and the Fury X are $650 cards, which most of us wouldn't buy anyway. Cards like the 970 and the 390 are what matter, and in those tests the 390 is much faster. After all, the 980 is only about 10% faster than the 970, and the 980 Ti is only about 10% faster than the 980. Not exactly the best bang for the buck.
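              To make the compounding explicit (using only the rough 10% steps above, which are ballpark figures rather than benchmark data):

```python
# Two ~10% steps (970 -> 980 -> 980 Ti) compound to only ~21% overall.
step = 1.10
overall = step * step - 1
print(f"980 Ti vs 970: ~{overall * 100:.0f}% faster")
```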

              The only reason to buy a Fury is the Fury Nano, and that's because you're building a very small PC. But that's still not cheap at $650.

              • #47
                AMD, please give me Freesync for Linux. I want to buy a new monitor and one of your 2016 cards.

                Thanks,
                mibo

                • #48
                  Originally posted by sabun View Post

                  Even considering the drop from 22nm (Haswell) to 14nm (Skylake), power consumption is still considerably higher than such a shrink would suggest.
                  http://techreport.com/review/28751/i...sor-reviewed/5

                  At the end of the review, he measures something I rarely see other benchmarkers measure, which he calls "task energy" (roughly, average power multiplied by the time taken to complete a fixed workload). It feels very much like an attempt to protect the Intel brand name. Regardless, even with that endnote trying to save the i7-6700's reputation, the review still shows that the drop from 22nm to 14nm did nothing for power consumption and TDP (if anything, it made things worse).

                  I'm not pulling my statement out of thin air. Intel hasn't made its 14nm fabrication efficient enough considering the die-size drop, and my point is that the same can happen with AMD's Polaris.

                  Please don't hype yourselves into expecting beautiful numbers from the Linux driver when it does release. I hope for the best, but always prepare for the worst.
                  Quad-core TDPs: Haswell (non-K) is 84 watts, Broadwell is 65 watts, Skylake (non-K) is 65 watts.
                  Last edited by atomsymbol; 04 January 2016, 07:01 PM.

                  • #49
                    Originally posted by middy
                    Going to be interesting to see how much of a GPU performance increase they achieve. They keep talking about performance per watt, but nothing about raw GPU performance.

                    Cool, they lowered their top dog from 325 watts down to 150 watts, but if it only offers a 20% increase in performance, that's not really enough of a boost to drop another $600 on a new video card. Or they can offer a 90-watt card with the performance of a previous-gen 150-watt card in the same price range.

                    The same goes for Nvidia. So far Nvidia has mostly talked about the performance increase in the memory department and in the "professional graphics / computing" department with the aid of HBM and NVLink, but nothing about pure GPU performance.

                    Which is interesting, because this is the first time I can recall that released and leaked information hasn't really been about pure GPU performance.
                    Oh well, it's not like we need more raw performance, with the PS4 and Xbox One doing an excellent job of holding game development at a 2013 mid-range level.
                    And everyone who does real pro work can probably distribute the workload and add two more GPUs that need less power.
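                    For context, a quick sketch of the perf-per-watt improvements the two quoted scenarios would imply (both are hypothetical numbers from the quote, not announced specs):

```python
# Implied perf/W gains for the two hypothetical cards described in the quote.
# Scenario A: a 325 W card replaced by a 150 W card that is 20% faster.
# Scenario B: a 150 W card replaced by a 90 W card with equal performance.
scenario_a = 1.20 * (325 / 150)   # ~2.6x perf per watt
scenario_b = 1.00 * (150 / 90)    # ~1.7x perf per watt
print(f"Scenario A: ~{scenario_a:.1f}x perf/W")
print(f"Scenario B: ~{scenario_b:.1f}x perf/W")
```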

                    • #50
                      Originally posted by blackout23 View Post
                      Unfortunately OpenGL driver quality will still matter for a long time even with Vulkan.

                      The "so-called" new revolutionary AMD drivers (the semi-closed ones yet to be released), which we're supposed to get once the Vulkan API comes out, will supposedly wrap OpenGL through the Vulkan driver for better performance. I'll believe it when it happens, though!
