
AMDGPU/RadeonSI Linux 4.10 + Mesa 17.1-dev vs. NVIDIA 378.09 Performance


  • #51
    Originally posted by oooverclocker View Post
    But it's not easy to determine yet how many devs will jump on the Vulkan train and how much effort they will put in the optimization.
    Well, the most used desktop OS is still Win7, which has no DX12.

    Comment


    • #52
      Originally posted by efikkan View Post
      What?
      No, the driver uses a shader cache to store compiled shaders. OpenGL is designed to compile GLSL every time. Using a shader cache only improves loading time (either in startup or loading of new levels ingame), no game loads and compiles shader programs per frame! (that would be the most stupid developer ever)
      Hmmm, what about on-demand shader compilation?
      That seemed somewhat common with UE4.

      Here's a quote from Marek:

      The support of the old-style shaders compiled on demand
      [...]
      The main part is compiled first. At draw time, the prolog and epilog, if they are needed, are compiled and all pieces of bytecode are combined.



      To me this seems to mean that not everything has to happen on loading.
      Last edited by geearf; 27 January 2017, 08:45 PM.
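
      As an aside on the shader-cache point in efikkan's quote above: a cache doesn't remove GLSL compilation, it only moves the cost to the first run's load time. Below is a minimal, hypothetical C sketch (not from any post in this thread) of an application-level cache built on ARB_get_program_binary (core in OpenGL 4.1); the file layout and helper names are invented for illustration, and drivers can do the same thing transparently with their own on-disk caches.

        /* Hypothetical sketch: cache a linked GL program on disk with
         * ARB_get_program_binary (core in OpenGL 4.1). Assumes a current GL
         * context and loaded function pointers (e.g. via GLEW or glad). */
        #include <GL/glew.h>
        #include <stdio.h>
        #include <stdlib.h>

        static GLuint compile_and_link(const char *vs_src, const char *fs_src)
        {
            GLuint vs = glCreateShader(GL_VERTEX_SHADER);
            glShaderSource(vs, 1, &vs_src, NULL); glCompileShader(vs);
            GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
            glShaderSource(fs, 1, &fs_src, NULL); glCompileShader(fs);
            GLuint prog = glCreateProgram();
            glAttachShader(prog, vs); glAttachShader(prog, fs);
            /* Ask the driver to keep a retrievable binary around. */
            glProgramParameteri(prog, GL_PROGRAM_BINARY_RETRIEVABLE_HINT, GL_TRUE);
            glLinkProgram(prog);
            glDeleteShader(vs); glDeleteShader(fs);
            return prog;
        }

        GLuint load_program_cached(const char *cache_path,
                                   const char *vs_src, const char *fs_src)
        {
            FILE *f = fopen(cache_path, "rb");
            if (f) {
                /* Cache hit: reload the driver-compiled binary, no GLSL compile. */
                GLenum format;
                fread(&format, sizeof format, 1, f);
                fseek(f, 0, SEEK_END);
                long size = ftell(f) - (long)sizeof format;
                fseek(f, (long)sizeof format, SEEK_SET);
                void *blob = malloc(size);
                fread(blob, 1, size, f);
                fclose(f);
                GLuint prog = glCreateProgram();
                glProgramBinary(prog, format, blob, (GLsizei)size);
                free(blob);
                GLint ok = 0;
                glGetProgramiv(prog, GL_LINK_STATUS, &ok);
                if (ok) return prog;      /* binary still valid for this driver */
                glDeleteProgram(prog);    /* driver or GPU changed: recompile */
            }
            /* Cache miss: compile as usual, then store the binary for next time. */
            GLuint prog = compile_and_link(vs_src, fs_src);
            GLint len = 0;
            glGetProgramiv(prog, GL_PROGRAM_BINARY_LENGTH, &len);
            void *blob = malloc(len);
            GLenum format; GLsizei written;
            glGetProgramBinary(prog, len, &written, &format, blob);
            f = fopen(cache_path, "wb");
            if (f) {
                fwrite(&format, sizeof format, 1, f);
                fwrite(blob, 1, written, f);
                fclose(f);
            }
            free(blob);
            return prog;
        }

      On a cache miss this still compiles the GLSL, which is why a cache helps startup and level loads but does nothing about mid-game compiles.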

      Comment


      • #53
        Originally posted by pal666 View Post
        Well, the most used desktop OS is still Win7, which has no DX12.
        I don't know if porting Vulkan to Win7 is an option, but in any case, for both AMD and NVidia, Vulkan appears to be the most effective way to get good performance on Linux.

        Comment


        • #54
          It's pretty simple: if Sony offered full Vulkan support on the PS4, there would be a whole coalition against all the proprietary stuff from MS and Apple. Apple would have to give up on Metal, and MS would eventually see notably worse support for the Xbox, because it would be the only system that couldn't run Vulkan games.

          If this doesn't happen, I am still very optimistic that we will see many well-performing titles on Linux with nice graphics, and continued growth in Linux gaming, but in that case I wouldn't say that Vulkan will break DX12 into pieces.
          On its merits it's clearly superior. But it's foreign to devs who have only developed DX games in the past, and they might not become familiar with Vulkan very quickly.

          Edit: And on the other hand, Valve surely isn't showing this great support just because they love Linux - I'm sure they are planning something long-term with AMD's new APUs, so we shouldn't forget them on the list of those who could remarkably influence the whole market.
          Last edited by oooverclocker; 27 January 2017, 10:49 PM.

          Comment


          • #55
            After purging mesa-opencl-icd on Debian Sid for 17.0-rc2, Blender now runs much more respectably. It's clearly using Ellesmere from the AMDGPU-Pro 16.60 stack. Blender benchmark results on the RX 480 for the GPU-only renders:

            BMW (3 runs): 5:48.70
            Pavilion Barcelona (3 runs): 20:19.25
            Classroom (3 runs): 14:52.85
            Fishy Cat (3 runs): 9:18.14
            Koro (3 runs): 39:02.31

            Comment


            • #56
              Originally posted by geearf View Post

              Hmmm, what about on-demand shader compilation?
              That seemed somewhat common with UE4.

              Here's a quote from Marek:

              The support of the old-style shaders compiled on demand
              [...]
              The main part is compiled first. At draw time, the prolog and epilog, if they are needed, are compiled and all pieces of bytecode are combined.

              To me this seems to mean that not everything has to happen on loading.
              It's somewhat common to see games compile shaders on demand. That doesn't mean they get recompiled every single frame, like efikkan was trying to argue, but rather that they happen at random times and cause game stutters.

              Usually it's tied to a new element entering the game. Walk around a corner and spot a shiny new gun = shaders associated with the gun are compiled. See a big explosion blow up a building = new shaders for the explosion are compiled on the fly. Playing Dota and a unit spawns = new shaders associated with that unit are compiled. Etc...

              Of course, the best recommended practice has always been to pre-compile everything on level load, so that you don't have those random stutters. But major (UE) game engines are notorious for completely ignoring that practice and doing tons of shader compiles during the game.

              And that's just at the engine level - the driver itself may need to recompile certain shaders just due to the way the hardware works. Changing GL state may invalidate certain shaders and force a recompile. That was actually the main purpose of the prolog/epilog work Marek did with radeonsi - the main shader can remain the same and only little bits need to change based on changing GL state. That's pretty hardware-specific, I believe, and other hardware might be better aligned with the OpenGL spec and not need so many recompiles.
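
              To make that distinction concrete, here is a minimal, hypothetical C sketch (not from any post in the thread; the Material struct and compile_program() helper are invented for illustration) contrasting on-demand compilation at draw time with pre-compiling everything at level load:

                /* Hypothetical sketch: lazy (on-demand) vs. up-front shader builds.
                 * Assumes a current GL context with loaded function pointers. */
                #include <GL/glew.h>

                typedef struct {
                    GLuint      program;   /* 0 until first compiled */
                    const char *vs_src;
                    const char *fs_src;
                } Material;

                /* Stand-in for the usual compile/link sequence. */
                static GLuint compile_program(const char *vs_src, const char *fs_src)
                {
                    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
                    glShaderSource(vs, 1, &vs_src, NULL); glCompileShader(vs);
                    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
                    glShaderSource(fs, 1, &fs_src, NULL); glCompileShader(fs);
                    GLuint prog = glCreateProgram();
                    glAttachShader(prog, vs); glAttachShader(prog, fs);
                    glLinkProgram(prog);
                    glDeleteShader(vs); glDeleteShader(fs);
                    return prog;
                }

                /* On demand: the compile happens whenever this material is first
                 * drawn - say, a new gun or explosion appearing mid-game - so the
                 * frame that triggers it stutters. */
                void draw_on_demand(Material *m)
                {
                    if (m->program == 0)
                        m->program = compile_program(m->vs_src, m->fs_src);
                    glUseProgram(m->program);
                    /* ... bind the rest of the state and issue draw calls ... */
                }

                /* Pre-compiled: pay the whole cost once, behind the loading screen. */
                void precompile_at_level_load(Material *materials, int count)
                {
                    for (int i = 0; i < count; i++)
                        materials[i].program = compile_program(materials[i].vs_src,
                                                               materials[i].fs_src);
                }

              The second variant is the recommended practice described above: the stutter is hidden behind the loading screen, at the cost of compiling (and keeping around) shaders the player may never actually hit.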

              Comment


              • #57
                Originally posted by artivision View Post

                This is a very old assumption that you make. Vendors have answered this before: all those things that you are talking about have their numbers fixed exactly as needed by shader power. My point is "don't buy the GTX 1060"; it's nuked because of cheating.
                While it's true that the vendors tend to adjust the cards to have a balanced performance profile, you're missing something rather obvious.

                That kind of balance is different for each workload. Tune it for one game and you get completely different behaviour in another, which means this will always be a balancing act. The best possible outcome is that in some games you are limited by shader power, while others are limited by VRAM, ROPs, on-chip cache, etc.

                Further, that kind of balancing isn't always done on all chips. Often the cut-down chips are cut down in certain areas precisely because it makes them cheaper to produce, not necessarily to keep the perfect performance balance. Perhaps the 1060 has its FLOPS cut in half because that's the most expensive part of the card, but other pieces were left more powerful precisely to get that >50% performance while still being cheap, or they just wanted to reuse a higher-performance part of the card without having to redesign and revalidate it all.
                Last edited by smitty3268; 28 January 2017, 02:44 AM.

                Comment


                • #58
                  Originally posted by smitty3268 View Post

                  It's somewhat common to see games compile shaders on demand. That doesn't mean they get recompiled every single frame, like efikkan was trying to argue, but rather that they happen at random times and cause game stutters.
                  I don't think anyone suggested that they were compiled every frame; I'm not sure where (s)he got that.

                  As for the rest, we agree.

                  Originally posted by smitty3268 View Post
                  Of course, the best recommended practice has always been to pre-compile everything on level load, so that you don't have those random stutters.
                  Is there any downside to that other than loading time? Maybe hogging resources or something like that?
                  Last edited by geearf; 28 January 2017, 04:12 AM.

                  Comment


                  • #59
                    Easy, folks, take a nip of Picard's Earl Grey tea and be rational for a moment:
                    First, @oooverclocker made some pretty good points regarding stability and support, and he is right. If you want an out-of-the-box working driver that supports most games without major issues, AMDGPU is pretty good. It also gets a lot of bugs fixed, and the current state of Wine combined with Gallium Nine yields very good performance. People who are interested in DX9 games that won't get major updates anymore (because they are no longer actively supported) have to rely on that.

                    The next thing, and probably the one people care more about, is performance in current and future games. But there is a bridge to be built here: most games are NOT optimized to work correctly. Nvidia gets away with their "cheats" because they work and a lot of people are okay with that - and to be honest, I don't see a major issue here if people are fine with it. Corrupted rendering that isn't noticeable is like JPEG: it works. Now, if we look towards Vulkan, many devs suddenly notice that the "short and simple path" works on Nvidia but fails on AMD GPUs, and it becomes a pain in the rear, as they now have to code the game "correctly" - and suddenly the difference between AMD and Nvidia turns the other way around. Coding for DX gave many devs a convenient tool for decades, but this convenience also created a lot of overhead that now has to be solved the hard and painful way.

                    And lo and behold, if done correctly, GPU performance scales with raw power and AMD takes the lead. And this is where the dealbreaker for Nvidia lies: if a game is coded correctly, with current tech and equivalent optimizations (I know, only a few studios have the resources to do so), horsepower counts, and that is simply higher for AMD cards than for Nvidia at the same price. But right now there are only a few games that do things correctly - so the "mah FPS are higher than yours" argument still stands, and will stand for quite some time, until Vulkan becomes the standard API.

                    Comment


                    • #60
                      What kind of guy insults other people because they are using NVIDIA cards?! O.o – I mean, they are just graphics cards!

                      Comment
