AMDVLK vs. RADV vs. AMDGPU-PRO 17.50 Vulkan Performance

  • #71
    Originally posted by bridgman View Post

    Are you talking about a vague sense of dissatisfaction (like wishing we had Intel's R&D budget), or are there specific things you think we should have done differently ?

    https://www.reddit.com/r/Amd/comment...ia_rd_budgets/
    I'm not comparing you to Intel, but rather to Nvidia. They are closed source and horribly broken on some fronts, but they have been delivering consistent, performant drivers cross-platform for years now.

    I'd expect something like that from AMD. It was never the case, closed or open source. You seem to be getting there now though, and via the open-source path to boot, which is great. If it's any consolation (with regards to my continuous rants) I bit the bullet and bought a 100% AMD gaming rig to support this, even if I'm getting some performance/compatibility issues in Linux with it.



    • #72
      Originally posted by Almindor View Post
      If it's any consolation (with regards to my continuous rants ) I bit the bullet and bought a 100% AMD gaming rig to support this even if I'm getting some performance/compatibility issues in linux with it.
      Ahh, OK... in that case you can rant all you like and we'll keep listening
      Last edited by bridgman; 12-26-2017, 09:38 PM.



      • #73
        I also want to chime into this discussion.

        First of all thanks to all the Devs pushing the tech forward. Any kind of rant coming from my side shall be taken from the perspective of making things better. With that out of the way:

        AMD's cards have a massive amount of horsepower lying open to be utilized. That horsepower (taking the Vega 64) beats the 1080 Ti on paper. The Vega 64 can be pushed from 12.5 TFLOPS to something like ~14 if you tweak enough (undervolting + overclocking). A lot of chips hit 1700 MHz core and ~1150+ MHz HBM2, which puts the raw compute power of the card slightly above the 1080 Ti.
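        As a back-of-the-envelope check of those TFLOPS figures (a sketch, assuming Vega 64's 4096 stream processors and counting one FMA, i.e. 2 FLOPs, per shader per cycle):

        ```python
        # Rough peak-FP32 estimate: 2 FLOPs (one FMA) per shader per cycle.
        # Shader count (4096) and clocks are assumptions taken from the figures above.

        def peak_tflops(shaders: int, clock_mhz: float) -> float:
            """Peak single-precision TFLOPS = 2 * shaders * clock."""
            return 2 * shaders * clock_mhz * 1e6 / 1e12

        stock = peak_tflops(4096, 1536)   # roughly 12.6 TFLOPS at a ~1536 MHz boost clock
        tuned = peak_tflops(4096, 1700)   # roughly 13.9 TFLOPS at the quoted 1700 MHz
        print(f"stock: {stock:.1f} TFLOPS, tuned: {tuned:.1f} TFLOPS")
        ```

        So the quoted 12.5 → ~14 TFLOPS jump lines up with a 1536 → 1700 MHz core clock bump.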

        Vulkan is the API through which we try to utilize that horsepower. Knowing that most engines are not optimized for this API yet (and probably won't be until 2020), we can't expect most games to profit from Vulkan. But that's okay, IF Vulkan reaches at least Windows DX11 performance. Sadly, this is not yet the case.

        Another point is the comparison to Nvidia. When both the RX Vega 64 and the 1080 Ti run a Vulkan-based game, I'd expect similar performance from both, even in unoptimized games. Of course, most engines are still written against a DX9/11 base with a Vulkan wrapper around it, meaning that most engines and rendering paths are, in terms of resources, optimized for Nvidia pipelines. But that is still no excuse for a performance gap of up to 50%, especially when AMD wants to win customers by advertising Linux + Vulkan.

        I guess I speak for the whole open-source community when I say: thank you, AMD, for opening up those drivers. This helps you as well as us to build better drivers and will in the end benefit everyone. But there is still quite a long road ahead for Vulkan. I just hope the AMD Vulkan drivers (whichever we pick) are in a competitive state when the first optimized engines surface. The hype for Linux gaming is real, and a lot of people are just waiting to change their OS.

        Last but not least: please, AMD, when you release the next-gen Navi GPUs, get rid of the fixed, un-flashable VBIOS. This (and other) shenanigans hurt you and the community, and to this day I have yet to see the benefit of locking the VBIOS, when it gave you a real selling point while it was still unlocked/flashable. I can imagine that developing a brand-new GPU arch in such a short time is extremely exhausting and a couple of things may be overlooked here and there. But locking the VBIOS was a bad idea, just saying.



        • #74
          Originally posted by Shevchen View Post
          Another point is the comparison to Nvidia. When both the RX Vega 64 and the 1080 Ti run a Vulkan-based game, I'd expect similar performance from both, even in unoptimized games. Of course, most engines are still written against a DX9/11 base with a Vulkan wrapper around it, meaning that most engines and rendering paths are, in terms of resources, optimized for Nvidia pipelines. But that is still no excuse for a performance gap of up to 50%, especially when AMD wants to win customers by advertising Linux + Vulkan.
          I am actually responding to most of your post, but only quoting one section because it best reflects a common theme in your post. If you look at one synthetic benchmark (compute performance in this case) and extrapolate FPS based only on that metric you are not going to get useful results. Going one step further, if you then assume drivers are responsible for the performance gap between your extrapolated FPS and real observations you may be setting unrealistic expectations.

          As a starting point, please look at the results of other synthetic benchmarks (one example below). The general pattern you will see over generations is that AMD tends to lean a bit more towards compute performance while NVidia leans a bit more towards fixed-function performance, specifically fill-rate and tessellation. You need to consider all of the synthetics at a minimum, along with their expected contribution to the performance of real-world applications, both now and in the future (the industry is evolving from fixed function to compute shaders over time, but only as newer apps replace older apps in benchmarking suites).

          We do have more work to do in order to close the gap between AMD and NVidia in Vulkan performance, but unless the game has been written almost totally around compute shaders it is IMO unrealistic to expect FPS to eventually track only compute performance. Each game's performance will be a function of compute performance plus the performance of a number of fixed-function blocks. I don't know if the Anandtech synthetics give you a complete representation of those fixed-function blocks but they are the best I could think of at the moment.
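          The point above can be illustrated with a toy model (invented numbers purely for illustration, not a real performance model): a frame passes through several hardware stages, and the slowest stage bounds the frame rate, so a card that wins on compute can still lose overall if a fixed-function stage lags behind.

          ```python
          # Toy bottleneck model: time per stage = work / throughput,
          # and the frame rate is limited by the slowest stage.

          def fps(workload: dict, throughput: dict) -> float:
              """Frame rate bounded by the slowest pipeline stage."""
              frame_time = max(workload[s] / throughput[s] for s in workload)
              return 1.0 / frame_time

          # Hypothetical work units per frame and units per second per stage.
          workload = {"compute": 8.0, "fill": 6.0, "tessellation": 1.0}
          card_a = {"compute": 1200.0, "fill": 450.0, "tessellation": 90.0}   # compute-heavy card
          card_b = {"compute": 900.0,  "fill": 700.0, "tessellation": 140.0}  # fixed-function-heavy card

          # card_a has 33% more compute throughput, yet card_b renders more FPS
          # because card_a is fill-rate limited in this workload.
          print(fps(workload, card_a), fps(workload, card_b))
          ```

          In this sketch the compute-heavy card extrapolates to higher FPS from its compute numbers alone, but the fill-rate stage caps it first, which is exactly why FPS does not track a single synthetic benchmark.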

          Relative performance under Windows is probably a better guide for driver expectations than raw compute performance, and the results will be very dependent on design decisions made for each game engine & game. Porting framework will also be a factor for Linux, unfortunately, and we are seeing Linux games hit the CPU-bound point relatively sooner than they do under Windows.

          I hear you re: VBIOS locking. As I understand it, the reason for locking is that with each new generation of HW expectations for power efficiency go up and more aspects of power control inside the chip are made programmable. As a result it becomes easier to "let the smoke out" if the power settings are not coordinated correctly and warranty claims go up as a result - without there being any actual fault in the hardware. I don't know if anyone has a good solution for that yet.
          Last edited by bridgman; 12-27-2017, 03:23 PM.



          • #75

            Thank you for your reply bridgman. Some bits and pieces:
            Originally posted by bridgman View Post
            if you then assume drivers are responsible for the performance gap between your extrapolated FPS and real observations you may be setting unrealistic expectations.
            Seeing Star Citizen's engine running butter-smooth on my Vega 64 (while still DX11), I'm mostly cured of the "it's only the driver" view of performance issues. By now, from my point of view, it's the engine, with the driver acting as a gap multiplier. This is also why I'm surprised to see such huge variance between certain scenes/levels of games. AdoredTV made a very nice analysis of the differences between scenes in Tomb Raider, so yeah, it seems we fix one problem only to replace it with another.

            The general pattern you will see over generations is that AMD tends to lean a bit more towards compute performance while NVidia leans a bit more towards fixed-function performance, specifically fill-rate and tessellation.
            Hmm... my impression of the Vega arch was that it closes this gap a little, looking at the arch overviews from several tech channels. Is this a matter of Vega features that are not yet implemented in the driver, or do we just have to wait until engines "adapt"?

            We do have more work to do in order to close the gap between AMD and NVidia in Vulkan performance, but unless the game has been written almost totally around compute shaders it is IMO unrealistic to expect FPS to eventually track only compute performance. Each game's performance will be a function of compute performance plus the performance of a number of fixed-function blocks. I don't know if the Anandtech synthetics give you a complete representation of those fixed-function blocks but they are the best I could think of at the moment.
            I see. I guess I was blinded a little by the raw compute numbers then. My naive little brain says "convert the fixed functions to compute functions, then", but I guess that's not how it works.

            Relative performance under Windows is probably a better guide for driver expectations than raw compute performance, and the results will be very dependent on design decisions made for each game engine & game. Porting framework will also be a factor for Linux, unfortunately, and we are seeing Linux games hit the CPU-bound point relatively sooner than they do under Windows.
            Hmm, Windows vs. Windows is kind of fuzzy for me. If I took the open-source vs. closed-source driver results, compared them, and flat-out applied the difference as a multiplier to AMD vs. Nvidia, I'd expect AMD to beat Nvidia in several cases. I simply don't know what to expect.

            As for the CPU bottleneck: well, at least from my point of view, the CPU bottleneck isn't such a problem for enthusiasts. A nice Ryzen with fast RAM comes extremely close to Intel's IPC and can even overtake it, and for Ryzen II I'm expecting even better metrics. (Side note: I plan to upgrade from my old Intel to Ryzen II once DDR5 and PCIe 4 hit the shelves.) That aside, not everyone is a gaming enthusiast, especially on Linux, where people tend to try doing more with less. Only very few people have 32+ GB RAM, 8+ cores and a fat GPU under their roof.

            Hopefully, the GPU profiler you released will help close some gaps and bottlenecks in game dev. I can only hope devs will use it.

            I hear you re: VBIOS locking. As I understand it, the reason for locking is that with each new generation of HW expectations for power efficiency go up and more aspects of power control inside the chip are made programmable. As a result it becomes easier to "let the smoke out" if the power settings are not coordinated correctly and warranty claims go up as a result - without there being any actual fault in the hardware. I don't know if anyone has a good solution for that yet.
            "Do it at your own risk, flashing VBIOS with settings beyond recommended values will void warranty"
            I'd be fine with that. One VBIOS setting is a soft BIOS with software limits for people with a nervous heart (overclocking/undervolting still possible within a certain margin); the second switch is the unlocked mode, where you can do whatever you want with the GPU. Once this mode is active, everything you do is at your own risk. Like a jumper.

            Edit: another source of performance difference is lower-quality rendering on team green, which "cheats" some performance for free. While only DX11 examples are present, I wouldn't be surprised if it's true on Linux too. Example: https://puu.sh/yPhdo/e9029f3cc5.jpg With this in mind, I want to take the performance difference "with a grain of salt" for now.
            Last edited by Shevchen; 12-28-2017, 06:02 AM.

