Windows 10 vs. Ubuntu Linux Gaming Performance With NVIDIA GeForce GTX 1060/1080

  • #71
    The list could have used some more native ports, but I guess there aren't all that many apart from Valve and indie games, neither of which tend to have built-in benchmark modes. Maybe something similar to Eurogamer's Digital Foundry comparisons is the answer?

    Also, similar OpenGL performance isn't all that surprising when Nvidia's binary blobs share much of the important code base between Linux and Windows. Because of this, it's pretty much the same code running on both OSes.

    Originally posted by indepe View Post
    You say that, kind of, as if you prefer it that way. What other solution do you see? Or do you think native OGL on Windows will become more popular?

    This is how I see it: GPU performance will continue to increase, so games will become fancier and require more CPU support. However, single-core performance increases in new CPUs are slowing down a lot, so at some point game(-engine) producers will develop multi-core strategies, meaning that OGL will make way for something else. And especially from the Linux point of view, what would be better than Vulkan?
    The way I see it, games are going to keep getting more and more expensive to make, and as a result developers are going to look much harder at reusing everything they build, including technical components. DX11 shader code can obviously only be used on Windows and the XB1, while Vulkan and especially OpenGL code can run on a wide array of platforms, including mobile ones.

    With the cost of game development continuing to spiral out of control, the use of Vulkan and OpenGL may well become a sensible choice purely from a financial perspective.
    Last edited by L_A_G; 14 February 2017, 07:35 AM.

    Comment


    • #72
      And I forgot to comment on the second point there:

      Originally posted by bridgman View Post
      2. There is a decent chance that DX12 games will get ported to Linux using Vulkan as target rather than OGL, this won't happen immediately though, since any game supporting DX11 and DX12 will probably get ported onto OGL simply because the DX11/Windows->OGL/Linux frameworks & expertise already exist
      I think both the same and the opposite, but let's clarify things: given the current situation, a "decent chance" might also be zero... so OK, I would say there is currently a 2/3 chance rather than just a decent one, because about 1/3 of DX12 games are Microsoft Studios, DX12-exclusive titles

      Also, all other non-Microsoft Studios games have a DX12 renderer only as an alternative... so there is at least an equal chance that porters will just pick the DX11 renderer and do it in OpenGL instead.

      Basically, the claim is only clear-cut for Microsoft Studios titles, but I don't expect Microsoft to do, or to allow, any porting of their own titles to Linux
      Last edited by dungeon; 14 February 2017, 07:41 AM.

      Comment


      • #73
        Originally posted by gamerk2 View Post

        Absolutely.

        The Linux scheduler is designed around low latency; within a certain timeframe, every thread in the system is guaranteed to run. Great for making sure your system clock is up to date to the ns, not so great for your application when its main rendering thread gets bumped. That in turn causes the main application thread to grind to a halt, costing you performance that can be measured in ms.

        Throw in the design choice that threads will rarely bump cores, even if another core is doing little to no work, and I can easily see the scheduler costing 25%+ in performance for applications that need a disproportionate amount of CPU time.

        I'll keep repeating myself: The primary problem here is the scheduler, not poor porting, and not the DX-to-OGL translation layer. The Linux scheduler is designed to handle lots of little threads [light desktop/server workloads], but is woefully inefficient for gaming.
        That's an interesting theory, but it needs to be backed by real metrics, for example on the threads that own draw calls.

        Comment


        • #74
          Mankind Divided is horribly CPU-bound, just like Shadow of Mordor. I see only a ~3-4 FPS difference between the highest and lowest graphics settings.

          At the current rate, they will probably run better in WINE than natively in 6-8 months.

          Comment


          • #75
            Originally posted by gamerk2 View Post

            Absolutely.

            The Linux scheduler is designed around low latency; within a certain timeframe, every thread in the system is guaranteed to run. Great for making sure your system clock is up to date to the ns, not so great for your application when its main rendering thread gets bumped. That in turn causes the main application thread to grind to a halt, costing you performance that can be measured in ms.

            Throw in the design choice that threads will rarely bump cores, even if another core is doing little to no work, and I can easily see the scheduler costing 25%+ in performance for applications that need a disproportionate amount of CPU time.

            I'll keep repeating myself: The primary problem here is the scheduler, not poor porting, and not the DX-to-OGL translation layer. The Linux scheduler is designed to handle lots of little threads [light desktop/server workloads], but is woefully inefficient for gaming.
            But that's wrong; our applications at my work (albeit visual simulation rather than games) run better on Linux than on Windows. We only use OpenGL, FWIW.

            Bad ports.

            Additionally, under this theory you'd see major FPS changes by switching your CPU scheduler to something like BFS, and you don't.
            Last edited by peppercats; 14 February 2017, 10:08 AM.

            Comment


            • #76
              So, I think it's highly telling that 8 pages of crap about nVidia's driver just go to prove, once again and beyond any doubt, that game-specific optimizations in graphics drivers are exactly the problem. Take Civ6 as a perfect example of an incomplete port that got released because the game devs knew full well nVidia would profile it and release a driver which implements various things the port doesn't even attempt to try.

              Which basically leaves the game screwed on everything else. I wish nVidia would just die already so that game developers would -finally- learn how to debug and complete their own work ffs.

              Comment


              • #77
                Originally posted by peppercats View Post

                But that's wrong, our applications at my work(albeit not games but visual simulation) run better on Linux than Windows. We only use OpenGL FWIW.

                Bad ports.

                Additionally, under this theory you'd see major FPS changes by switching your CPU scheduler to something like BFS and you don't.
                Unigine benchmarks in OpenGL mode score about the same on both Linux and Windows. Michael's test on the GTX 1080 is actually the biggest difference I've ever seen between these two OSes.
                So yes, the evidence is pretty strong that the Linux scheduler has no (negative) influence on 3D performance.

                Comment


                • #78
                  davidbepo

                  I've accidentally blacklisted you. There are enough D3D12 titles where NVIDIA wins by a huge margin. And I've never been an NVIDIA fanboy.

                  You may also go fuck yourself with such an attitude.

                  Originally posted by liam View Post

                  I understand that poster as saying "developers spend more time optimizing for the direct3d target".
                  All the more reason to take the burden off driver developers and put it in the hands of the game engine developers. Having to deal with two moving targets (thick, buggy drivers and buggy games) is pretty ridiculous.
                  He never mentioned that. His exact quote was "directx games are more optimized". Nothing else. What a load of BS.
                  Last edited by birdie; 14 February 2017, 11:02 AM.

                  Comment


                  • #79
                    Originally posted by smitty3268 View Post

                    It's nothing to do with the driver, it's the app being ported.

                    As an example, one thing recently mentioned on the mesa-dev list was the threading models in these games. DX11 allows any thread to submit API calls, while OpenGL requires each thread to bind to the current context first - a very slow operation. So part of Feral's process of porting a game is to create a separate thread that's just in charge of calling into GL, and the rest of the game calls into that - but that means there is necessarily going to be added overhead and synchronization locking that the game doesn't have on DX11, and that can easily show up as a "CPU cap" in the frame rate. I'm sure it could all be handled better if they had years to re-architect the engine to work better with OpenGL, but since there's a limited budget and they have to get these ports out quickly, it is what it is.

                    These kinds of issues, where the APIs don't map cleanly to one another, are the main problem with the performance in these games right now. One of the main reasons to hope for Vulkan is that it really should map much more cleanly to the DX12 API, so presumably a lot of these porting issues (and therefore performance issues) should go away once those DX12->Vulkan ports start happening. A DX11->Vulkan port, however, is still likely to have many of the same issues.
                    Why would a DX11->Vulkan port have the same issues if a DX12->Vulkan port doesn't?

                    Especially if the issues (DX11->OGL) are caused by DX11's multi-threaded submits, I'd think a DX11->Vulkan port should not have that problem. But in any case, why should a DX12->Vulkan port be able to fix issues that a DX11->Vulkan port isn't able to?

                    Comment


                    • #80
                      Found this on the Reddit thread about this post; someone might be interested.

                      I'm not sure what's going on with the Mankind Divided benchmark. I did the benchmark with my GTX 970 and compared it to this Windows one I found, and got almost the same results.
                      Michael's Windows results are far beyond what's seen in the benchmark I linked, FWIW. His Linux results are in line with the Windows results linked above.
                      Off the top of my head: the Windows benchmark I linked is a bit old. Maybe the Linux version is based on an older build?

                      From a quick look, that appears to be true. Their 1080p high benchmark ran a lot slower on the same graphics card despite using what I assume is probably a faster CPU (5960X vs. 7700K)
                      Last edited by peppercats; 14 February 2017, 05:04 PM.

                      Comment
