
September 2018 Drivers: The Current Linux Performance & Perf-Per-Watt From NVIDIA Kepler To Pascal vs. AMD


  • #11
    Originally posted by theriddick View Post
    Vega cards are certainly capable of performing, even faster than the 1080 Ti in some places, but they are being let down by either drivers or an absolute lack of optimization in games.

    It's a shame. Maybe one day, say in two years' time, the Vega 64 will outpace the 1080 Ti in more tests, but we really won't care by then!
    Some older AMD cards have seen improvements and driver optimizations long after their introduction, that is true. One could be cynical and argue that the original driver just wasn't finished and it took them a long time to enable all the hardware's capabilities. Whatever your take on it, a purchase is made "as is", so any improvements later on are welcome.

    In Vega's case, however, I recall someone saying that it was not necessarily about lacking optimization. Rather, when designing a new GPU, it is always a gamble to decide on the ratio between fixed-function hardware and compute performance. This might explain why Vega, with all its raw compute performance, is slower than a GTX 1080 in many applications. If fixed-function hardware is the bottleneck, no driver will significantly change that. Also, keep in mind that, unlike Nvidia, AMD designed only one GPU for high-end gaming, HPC, machine learning, etc. It might be a compromise.
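
    To illustrate that last point with a toy bottleneck model (illustrative numbers only, nothing measured): the frame rate is set by the slowest pipeline stage, so surplus compute throughput cannot buy back a fixed-function ceiling, and no driver update changes that.

    ```python
    # Toy bottleneck model: frame rate is capped by the slowest stage.
    # The caps below are made-up illustrative numbers, not measurements.
    def fps(geometry_cap_fps, compute_cap_fps):
        return min(geometry_cap_fps, compute_cap_fps)

    print(fps(geometry_cap_fps=90, compute_cap_fps=140))  # 90: geometry-bound
    print(fps(geometry_cap_fps=90, compute_cap_fps=200))  # still 90: extra compute sits idle
    ```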

    Comment


    • #12
      Originally posted by GruenSein View Post

      Some older AMD cards have seen improvements and driver optimizations long after their introduction, that is true. One could be cynical and argue that the original driver just wasn't finished and it took them a long time to enable all the hardware's capabilities. Whatever your take on it, a purchase is made "as is", so any improvements later on are welcome.

      In Vega's case, however, I recall someone saying that it was not necessarily about lacking optimization. Rather, when designing a new GPU, it is always a gamble to decide on the ratio between fixed-function hardware and compute performance. This might explain why Vega, with all its raw compute performance, is slower than a GTX 1080 in many applications. If fixed-function hardware is the bottleneck, no driver will significantly change that. Also, keep in mind that, unlike Nvidia, AMD designed only one GPU for high-end gaming, HPC, machine learning, etc. It might be a compromise.
      1) The good thing about "AMD FineWine(tm)" is that when you buy a GPU, the price tends to be based on current performance, not on potential performance. People who bought HD 7000 series AMD cards in 2012-2013 paid money for X level of performance, and improved drivers later brought that to roughly 1.4X, for free... Granted, AMD later rebranded the 7000 series cards in order to sell them for more money with the improved drivers... But still, incomplete driver or not, people only paid for performance at the time of purchase; they didn't pay more for potential improvements. That was a win for the consumer and, financially, a loss for AMD.

      2) Vega is a very good architecture from what I have seen. A very forward-looking architecture, typical of AMD. For as long as I can remember, AMD has always designed hardware that is ahead of its time, and because they have no real market-share power, they can't exploit said hardware until Nvidia catches up and developers begin to exploit it... The main issue we face in PC gaming is that Nvidia is holding all the cards with its vast market share, and thus they can force developers to use whatever API they want, whatever framework they want, whatever graphical technologies they want. The competition, aside from the occasional TressFX etc., can rarely exploit its capabilities until Nvidia decides it is time to "innovate" and adds support for the same features under a different brand name...

      Want to remember why D3D 10.1 failed, despite being better? Because Nvidia didn't support it, and by the time it did, D3D11 was arriving. Remember how Assassin's Creed on D3D 10.1 vastly outperformed Nvidia's D3D 10.0, and how the 10.1 path was then removed because it was supposedly "buggy"? I never encountered any bugs on my HD 3870, but I suppose an "Nvidia: The Way It's Meant to Be Played" game wasn't supposed to run better on AMD, so...

      Perhaps I need to remind you how long Nvidia snubbed tessellation, despite AMD having had a hardware tessellator on their chips since the early 2000s? Even when D3D11 was first released, games didn't really support it until Nvidia implemented tessellation on shaders, and suddenly every game on the planet, even 2D games (OK, I exaggerate a little), had 64x tessellation on everything, including underground non-visible rivers, in order to choke AMD's fixed-function tessellator...
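
      For a rough sense of what forcing 64x everywhere means (a sketch assuming a uniformly tessellated triangle patch produces on the order of n^2 triangles; real hardware tessellation patterns differ in the details):

      ```python
      # Approximate triangle amplification per patch at different factors.
      # Assumes ~n^2 triangles per uniformly tessellated triangle patch,
      # which is only a rough model of real hardware behavior.
      for factor in (4, 16, 64):
          print(f"tess factor {factor:2d}: ~{factor ** 2:4d} triangles per patch")
      ```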

      D3D12 and Vulkan have been around for years now, and instead of developers embracing them to bring the cost of PC gaming down, they stick to D3D11 because that is what Nvidia is good at.

      Vega released with many capabilities that went unused by games, and lo and behold, more than a year later Nvidia introduces a similar architecture coupled with their CUDA-based framework, and suddenly it is the best thing since sliced bread. Would you like to bet that eventually, as things like FP16, hardware async, and ray tracing become more prevalent, Vega will become vastly more competitive with the 1080 Ti?
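
      As a back-of-the-envelope sketch of the FP16 point (nominal boost clocks; real sustained clocks vary): Vega's packed math runs FP16 at twice its FP32 rate, while consumer Pascal parts such as the 1080 Ti run FP16 at roughly 1/64 of their FP32 rate.

      ```python
      # Theoretical peak throughput in TFLOPS from shader count and boost
      # clock; 2 FLOPs per clock per shader assumes fused multiply-add.
      def tflops(shaders, clock_ghz):
          return shaders * clock_ghz * 2 / 1000.0

      vega64_fp32 = tflops(4096, 1.546)     # ~12.7 TFLOPS
      vega64_fp16 = vega64_fp32 * 2         # packed math: 2x the FP32 rate
      gtx1080ti_fp32 = tflops(3584, 1.582)  # ~11.3 TFLOPS
      gtx1080ti_fp16 = gtx1080ti_fp32 / 64  # consumer Pascal: ~1/64 FP32 rate

      print(f"Vega 64:     {vega64_fp32:.1f} FP32 / {vega64_fp16:.1f} FP16 TFLOPS")
      print(f"GTX 1080 Ti: {gtx1080ti_fp32:.1f} FP32 / {gtx1080ti_fp16:.2f} FP16 TFLOPS")
      ```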

      Comment


      • #13
        Originally posted by TemplarGR View Post
        2) Vega is a very good architecture from what I have seen. A very forward-looking architecture, typical of AMD.
        Yet "Dead on arrival" for many consumers due to bad price/performance ratio. And what does Vega brings that Polaris or Fiji Fury does not? There was some hype about new technologies but most if not all got cut short on launch. Right now I have R9 Fury and if I want to keep FreeSync support I can upgrade to Vega 64 for at best double R9 Fury price (used Vega 64) or almost 3x R9 Fury price (new Vega 64) for very little gains (not to mention that used GTX 1080Ti prices are similar to Vega 64)... and with Poiaris launch Fury cards were discounted hard so I got new R9 Fury at RX480 price.

        Originally posted by TemplarGR View Post
        until Nvidia decides it is time to "innovate" and adds support for the same features under a different brand name...
        Right now the innovation ball is on Nvidia's side with the RTX features. AMD is looking pretty poor, and it will be some time until we see Navi, and even longer before the post-GCN architecture after that. Focusing their effort on console GPUs may have been a good business decision, but for PC consumers it ended in not-so-competitive products.

        Originally posted by TemplarGR View Post
        Vega released with many capabilities that went unused by games, and lo and behold, more than a year later Nvidia introduces a similar architecture coupled with their CUDA-based framework, and suddenly it is the best thing since sliced bread. Would you like to bet that eventually, as things like FP16, hardware async, and ray tracing become more prevalent, Vega will become vastly more competitive with the 1080 Ti?
        Vega is screwed hard. It draws power like crazy, runs hot, and has no OC potential. That is likely because they used a low-clock, low-power node for it, and any attempt at running above that node's optimal frequencies ends up with what Vega is today.

        Sorry, but I don't see a bright future for Vega. More likely it will happen for Polaris.

        Comment


        • #14
          The issue AMD have with their designs is that they are never wide enough, so they end up pushing the clocks beyond the ideal point on the power curve, because Nvidia doesn't stand still either. Polaris 10 was one such example.

          If AMD had added even 256 more shaders to Polaris 10, they could have been far more competitive on power consumption (at the same performance) and avoided that issue with PCIe power draw. Still, time has passed and AMD has had plenty of time to fix this and create a wider mid-range GPU, be it a Polaris 30 or a mid-high-range Vega. But no: nothing this year apart from an HPC Vega 20 card.
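
          A toy model of that wider-but-slower trade-off (assumptions: performance scales with shaders x clock, voltage rises roughly linearly with clock above the sweet spot, and dynamic power scales with shaders x clock x voltage^2; real DVFS curves are messier):

          ```python
          # Relative dynamic power under the stated assumptions.
          def rel_power(shaders, clock_ghz):
              voltage = clock_ghz  # crude proxy: V tracks f past the sweet spot
              return shaders * clock_ghz * voltage ** 2

          narrow = (2304, 1.266)              # RX 480-like Polaris 10
          wide = (2560, 1.266 * 2304 / 2560)  # +256 shaders, clock cut for equal perf

          # ~0.81: roughly 19% less power at the same shaders*clock throughput
          print(rel_power(*wide) / rel_power(*narrow))
          ```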

          The same goes for Vega, although there is also the potential issue that GCN cannot address more than 4096 shaders in a single GPU. Again, AMD would have known about this since Fiji, which launched three years ago and was designed five years ago. Five years to fix this issue, but no: Vega still maxes out at 4096 shaders.

          The 2000 series from Nvidia is going to destroy whatever AMD has left to offer. A disaster for consumers, as competition is key to a healthy market.

          I truly hope that AMD have simply switched to silent mode and actually have something to release in the next couple of months besides Vega 20. A 'Vega 21', a consumer 12nm 3072-shader card at 1.5GHz, could do fairly well at the $299 price point IMO, as the 2000 series is priced to the max due to the lack of competition. But... yeah, right, I'm not holding my breath.

          Comment


          • #15
            Originally posted by TemplarGR View Post
            2) Vega is a very good architecture from what I have seen. A very forward-looking architecture, typical of AMD. For as long as I can remember, AMD has always designed hardware that is ahead of its time, and because they have no real market-share power, they can't exploit said hardware until Nvidia catches up and developers begin to exploit it... The main issue we face in PC gaming is that Nvidia is holding all the cards with its vast market share, and thus they can force developers to use whatever API they want, whatever framework they want, whatever graphical technologies they want. The competition, aside from the occasional TressFX etc., can rarely exploit its capabilities until Nvidia decides it is time to "innovate" and adds support for the same features under a different brand name...
            Typically you are correct about AMD GPUs.
            But for gaming, Vega turned out to be too little, too late. It launched so much later than Nvidia's parts and yet had no GTX 1080 Ti competitor. In fact, AMD were clocking their cards to the limit, sacrificing perf/watt, just to make the Vega 64 a GTX 1080 competitor, when they should have been leapfrogging Nvidia (considering the launch timeframes). It was a letdown three years after the Fury X; this was supposed to be the architecture that let AMD shine again in the high end.

            Post-release, even on Windows, people are disappointed that performance hasn't improved over the past year via drivers (unlike with previous AMD graphics architectures). Execution was also poor, with a lack of supply and cards not available at MSRP. From a money-making standpoint, AMD is lucky that the mining craze hit, allowing them to sell Vega in good quantities.

            Raja Koduri too made his exit with the release of Vega.

            https://www.forbes.com/sites/jasonev.../#7dd1bc3824fd

            There is a lot riding on Navi.
            Last edited by humbug; 12 September 2018, 09:28 AM.

            Comment


            • #16
              Originally posted by humbug View Post
              Execution was also poor with a lack of supply and cards not available at MSRP. From a money making stand point AMD is lucky that the mining craze hit allowing them to sell Vega in good quantities.
              Those are not two separate statements. The lack of supply and lack of availability at MSRP was a direct consequence of the mining craze.

              We were not the ones hiking prices or reducing supply... but unfortunately fab time is booked a long time in advance so we had relatively little wiggle room for increasing supply.

              Comment


              • #17
                Originally posted by theriddick View Post
                Vega cards are certainly capable of performing, even faster than the 1080 Ti in some places, but they are being let down by either drivers or an absolute lack of optimization in games.
                One point no one has made yet that I think is really quite remarkable: this test was using the open-source AMDGPU drivers against Nvidia's proprietary closed-source driver. It wasn't so long ago that the idea of competitive open-source GPU drivers seemed like a pipe dream. From a casual gaming perspective, does it really matter if AMD on open-source drivers gets 130 fps while Nvidia's closed-source driver gets 150 fps? Your eyes can barely perceive the difference. There really is no reason a self-respecting Linux geek should choose Nvidia for a new build here in 2018.
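
                Put in frame-time terms (simple arithmetic, not benchmark data), that gap is about a millisecond per frame:

                ```python
                # Frame-time view of 130 vs 150 fps: the per-frame gap is ~1 ms.
                for fps in (130, 150):
                    print(f"{fps} fps = {1000 / fps:.2f} ms per frame")
                print(f"difference: {1000 / 130 - 1000 / 150:.2f} ms per frame")
                ```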

                Comment


                • #18
                  That post makes very good points. Mesa has matured thanks to many open-source contributors, including Feral Interactive and Valve.
                  What is left is optimization.

                  Comment


                  • #19
                    Originally posted by sykobee View Post
                    The issue AMD have with their designs is that they are never wide enough, so they end up pushing the clocks beyond the ideal point on the power curve, because Nvidia doesn't stand still either. Polaris 10 was one such example.

                    ...
                    Not sure if that is entirely true. For some reason, Nvidia manages to outperform Vega with a much narrower design (a GTX 1080 has 2560 shaders vs. 4096 on Vega, correct?). They make up for it with higher frequencies at significantly lower power, so that approach actually seems to work. But whenever clock rates and shader counts are discussed, I feel like this cannot be the entire story. If width were actually the main factor, we'd simply look at the theoretical compute throughput, which AMD dominates. I can't shake the feeling that many of these shaders sit unused a large fraction of the time. Maybe there are other issues that make it hard to keep them fed. That would be an architectural issue.
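
                    To put rough numbers on that (nominal boost clocks; a sketch, not a measurement): if a GTX 1080 matches a Vega 64 in a game, the ratio of their theoretical peaks suggests how much of Vega's compute sits idle at parity.

                    ```python
                    # Theoretical peak FP32: shaders * boost clock * 2 FLOPs/clock.
                    gtx1080 = 2560 * 1.733 * 2 / 1000  # ~8.9 TFLOPS
                    vega64 = 4096 * 1.546 * 2 / 1000   # ~12.7 TFLOPS

                    # At parity, Vega needs only ~70% of its peak to keep up,
                    # i.e. ~30% of its theoretical throughput goes unused.
                    print(f"implied utilization at parity: {gtx1080 / vega64:.0%}")
                    ```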

                    The other issue is power. I think we can all agree that Vega was released at frequencies beyond reason in order to reach GTX 1080 performance levels. IMHO, that was a decision that had nothing to do with Vega itself but rather with the market situation. If you underclock Vega, its power usage becomes a lot more modest and the lost performance is relatively small. It is simply run way beyond its sweet spot.
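
                    A crude sweet-spot illustration (assuming performance scales with clock and power roughly with clock cubed once voltage has to rise with frequency; real DVFS behavior is messier):

                    ```python
                    # Perf ~ f, power ~ f^3 under the stated assumption: small
                    # clock cuts trade little performance for big power savings.
                    for cut in (0.00, 0.05, 0.10, 0.15):
                        f = 1 - cut
                        print(f"-{cut:.0%} clock: {f:.0%} perf, {f ** 3:.0%} power")
                    ```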

                    Comment
