
NVIDIA's Linux Driver Can Deliver Better OpenGL Performance Than Windows 8.1


  • #31
    Originally posted by sireangelus View Post
    Lol... Of course you can say that from your illuminated knowledge as an OGL and DX dev, or as someone like myself who not only has a laurea (what you would call a PhD, since I'm Italian) in computer science but has also kept tabs on engine and API evolution for many years. How someone can read what is written in plain English and ignore it completely because "OGL is better", when there are DOCUMENTED FACTS, is beyond my ability to comprehend human behaviour.

    Do you believe that there has been no evolution and that the Earth is 5k years old too? It's the kind of reaction I would expect from a creationist.

    We might get there with OGL. It's just that we are not there yet, and getting there will take OpenGL 5 and many more years of developer effort.
    We don't know enough about the implementation of Unigine to conclude that it's a valid comparison of the two APIs. Does it even use bindless graphics in OGL?
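
    (For context, "bindless graphics" refers to extensions such as GL_ARB_bindless_texture / GL_NV_bindless_texture, where shaders receive 64-bit texture handles instead of sampling through a limited set of bound texture units. A minimal sketch of the GL side, assuming an initialized loader, a driver that advertises the ARB extension, a program currently in use, and an existing texture object "tex":)

    // Minimal sketch of GL_ARB_bindless_texture (assumptions: extension is
    // advertised, a program is bound, "tex" is a complete texture object).
    #include <GL/glew.h>

    GLuint64 makeBindless(GLuint tex, GLint samplerUniformLocation)
    {
        // A 64-bit handle identifies the texture without binding it to a unit.
        GLuint64 handle = glGetTextureHandleARB(tex);

        // The handle must be made resident before shaders may sample through it.
        glMakeTextureHandleResidentARB(handle);

        // Hand the handle to the shader (here via a uniform), removing the
        // per-draw glBindTexture/validation work from the hot path.
        glUniformHandleui64ARB(samplerUniformLocation, handle);
        return handle;
    }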



    • #32
      Originally posted by johnc View Post
      We don't know enough about the implementation of Unigine to conclude that it's a valid comparison of the two APIs. Does it even use bindless graphics in OGL?
      All we know, and all we need to know, is that it's a DX11 engine that was heavily advertised as being cross-platform, with emphasis on running on Linux. You should expect the OpenGL pathway to be as good as the DX11 one. Most true cross-platform engines have all their code written at an abstraction level that gets translated for the specific platform. Since OpenGL and EGL are such a large and important part of Unigine, you would think that it's the most optimized part. Also, are you still not understanding that most DX vs OpenGL comparisons are made with DX9? That bloated piece of old crap code survived up until now because of consoles, stupid people who would not upgrade from XP, and lazy developers.
      Do you realize that when DX12 gets out, OGL will be blown away on many titles for years to come? I'm not advertising DX. I'm complaining about how slow, crappy and completely out of the market the OGL guys are, and about the API's lack of competitiveness.

      Even Carmack switched over to DX. Why? Better tools, less overhead, more modern features, easier to develop for. Easier debugging and better tools make it possible to optimize DX codepaths in ways that are just much harder on OGL.
      Also, it's way easier to implement effects on DX10/11, with less code and less processing power.
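
      (To make the cross-platform point concrete, here is an illustrative sketch, with purely hypothetical names, of the abstraction layer such an engine puts between game code and the native APIs; each backend translates the same calls into D3D11 or OpenGL:)

      // Illustrative sketch only (hypothetical names): engine code talks to an
      // abstract backend; a D3D11 or OpenGL implementation is chosen per platform.
      #include <memory>

      struct DrawCall { unsigned vertexCount = 0; unsigned firstVertex = 0; };

      class RenderBackend {
      public:
          virtual ~RenderBackend() = default;
          virtual void submit(const DrawCall& dc) = 0;  // translated to the native API
      };

      class GLBackend : public RenderBackend {
      public:
          void submit(const DrawCall& dc) override {
              // would end up as glDrawArrays(GL_TRIANGLES, dc.firstVertex, dc.vertexCount);
          }
      };

      class D3D11Backend : public RenderBackend {
      public:
          void submit(const DrawCall& dc) override {
              // would end up as context->Draw(dc.vertexCount, dc.firstVertex);
          }
      };

      // Code above this layer never mentions DX or GL directly.
      std::unique_ptr<RenderBackend> makeBackend(bool preferD3D11)
      {
          if (preferD3D11)
              return std::make_unique<D3D11Backend>();
          return std::make_unique<GLBackend>();
      }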



      • #33
        Originally posted by sireangelus View Post
        Lol... and you say that because you don't know that for a long time OpenGL played catch-up to DX11 to reach feature parity, which it finally got with 4.4.
        And there is a lot more effort put into optimizing DX11 drivers than OGL ones, for obvious reasons. Plus, the overhead, if you're not using the very latest OGL features (batching and such), is lower on DX11 than it is on OGL 4.4. You need to realize that up to now most of the ports were made from DX9 games. DX9, compared to DX10/11, has a gigantic overhead, has outdated texture/normal-map compression techniques, no tessellation and no geometry shaders. In short, it's a lot slower. What made the transition terrible was that software houses made straight code ports from DX9 to DX10, just layering DX10 effects on top of already bloated code, making the engines heavier. There are many DOCUMENTED cases where a well-made straight port from DX9 to DX10 showed gains of 10% or more. And you can do a lot of effects in DX10/11 that would take many workarounds and ten to fifty times the lines of code with DX9, as was shown with the Frostbite engine, written from the ground up for DX10/11.

        Sorry Mr. PhD, but you are comparing apples with oranges. D3D11 isn't faster than D3D9, because both are programming languages. Shader Model 3, which you can use with D3D9, is slower than Models 4 and 5, which you can use with D3D11. Also, the real equation is FPS*quality, and there the D3D front-end API drivers are cheating on quality for some vendors. Here are some results from 2 years ago: http://www.tomshardware.com/reviews/...w,3121-20.html From 1 year ago: http://www.g-truc.net/post-0547.html (Intel is the same); NVIDIA loses, and 2014 is probably a lot better.
        Last edited by artivision; 01 November 2014, 05:34 PM.



        • #34
          Originally posted by artivision View Post
          Sorry Mr. PhD, but you are comparing apples with oranges. D3D11 isn't faster than D3D9, because both are programming languages. Shader Model 3, which you can use with D3D9, is slower than Models 4 and 5, which you can use with D3D11. Also, the real equation is FPS*quality, and there the D3D front-end API drivers are cheating on quality for some vendors. Here are some results from 2 years ago: http://www.tomshardware.com/reviews/...w,3121-20.html From 1 year ago: http://www.g-truc.net/post-0547.html (Intel is the same); NVIDIA loses, and 2014 is probably a lot better.
          D3D9 and D3D10 are a lot more than programming languages.
          They are the whole abstraction layer that defines how things are rendered, from the code down to the graphics card; they are intrinsically tied to the driver model used, they define the GPU architecture, they define the threading model and the ability to use many cores for rendering, and so on.
          The abstraction layer of DX10/11 is less than a tenth of DX9's, which also cuts the draw-call overhead for each frame by at least that much, and with DX10/11 shaders you can use a lot of effects that on DX9 require many more processing passes instead of one. You can use more threads on the CPU for rendering instead of the single one you get on DX9 (the other threads are always used for the rest of the engine: sound, AI, etc.), you can use the same normal maps but with better compression algorithms so you get better performance out of them, and so on. You can say that with OGL you can do all that too, but that's exactly my point. I'm saying that DX11 is faster than DX9, and up until now we have compared DX9 to OpenGL pathways, because there are no true DX10 engines (up until now, besides Unigine) that have been ported over to OGL 4.4 and are available for testing. This is what caused Valve's "wow" effect when tested against DX9: OGL is faster than DX9 because it has less overhead, even with a straight code port. That's the same "wow" effect that happened with World of Warcraft, Assassin's Creed (with 10.1), STO and many other games that received a straight port from DX9 to DX10, taking only the optimizations without changing the quality.
          But at the same time, DX11 kills OGL for the moment. Even the link you gave me showed that on AMD and Intel hardware, hardware that does not benefit from the heavy optimization that NVIDIA does on its DX11 pathways (go look up the shader cache). We can argue all we like, but we all know that NVIDIA drivers are the best when it comes to DX optimizations. It just means that the other vendors do not optimize their DX codepaths as hard as NVIDIA does, but maybe have a more balanced approach. (I own a 280X.) And the point I was trying to make is that we should test DX11 vs OGL on Windows vs OGL on Linux, because we should demand not just the same speed across OpenGL codepaths; we should demand the same speed between the DX11 and OGL codepaths.

          And OGL is falling behind again, and fast. They realize their APIs are a mess, but they can't move fast enough to catch up to DX12. It will be a slaughter for gaming on Linux. The performance gap will keep widening until we have OGL 5, or unless AMD pulls Mantle out of its hat (the only abstraction I believe can be truly multi-platform, as it can be as simple as copy-pasting HLSL DX11 code). And that will take years still.
          Last edited by sireangelus; 01 November 2014, 06:12 PM.
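
          (For what it's worth, the multi-threaded-rendering point corresponds to D3D11 deferred contexts: worker threads record command lists that the render thread replays on the immediate context. A minimal sketch, assuming "device" and "immediateContext" already exist and with error handling omitted:)

          // Minimal sketch of D3D11 multi-threaded command recording with a deferred
          // context (assumes existing "device"/"immediateContext"; errors ignored).
          #include <windows.h>
          #include <d3d11.h>

          void recordOnWorkerThread(ID3D11Device* device,
                                    ID3D11DeviceContext* immediateContext)
          {
              ID3D11DeviceContext* deferred = nullptr;
              ID3D11CommandList*   commands = nullptr;

              // One deferred context per worker thread; recording here does not
              // touch the GPU and does not block the other threads.
              device->CreateDeferredContext(0, &deferred);

              // ... set state and issue Draw() calls on "deferred" ...

              // Bake the recorded work into a command list.
              deferred->FinishCommandList(FALSE, &commands);

              // The render thread replays it on the immediate context.
              immediateContext->ExecuteCommandList(commands, TRUE);

              commands->Release();
              deferred->Release();
          }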



          • #35
            Originally posted by sireangelus View Post
            All we know, and all we need to know, is that it's a DX11 engine that was heavily advertised as being cross-platform, with emphasis on running on Linux. You should expect the OpenGL pathway to be as good as the DX11 one. Most true cross-platform engines have all their code written at an abstraction level that gets translated for the specific platform. Since OpenGL and EGL are such a large and important part of Unigine, you would think that it's the most optimized part. Also, are you still not understanding that most DX vs OpenGL comparisons are made with DX9? That bloated piece of old crap code survived up until now because of consoles, stupid people who would not upgrade from XP, and lazy developers.
            Do you realize that when DX12 gets out, OGL will be blown away on many titles for years to come? I'm not advertising DX. I'm complaining about how slow, crappy and completely out of the market the OGL guys are, and about the API's lack of competitiveness.
            You keep saying all kinds of crap, but you're not giving any facts.

            Even Carmack switched over to DX. Why?
            To get on Xbox? What engine did Carmack make that was DX-only?



            • #36
              Originally posted by sireangelus View Post
              D3D9 and D3D10 are a lot more than programming languages.
              They are the whole abstraction layer that defines how things are rendered, from the code down to the graphics card; they are intrinsically tied to the driver model used, they define the GPU architecture, they define the threading model and the ability to use many cores for rendering, and so on.
              The abstraction layer of DX10/11 is less than a tenth of DX9's, which also cuts the draw-call overhead for each frame by at least that much, and with DX10/11 shaders you can use a lot of effects that on DX9 require many more processing passes instead of one. You can use more threads on the CPU for rendering instead of the single one you get on DX9 (the other threads are always used for the rest of the engine: sound, AI, etc.), you can use the same normal maps but with better compression algorithms so you get better performance out of them, and so on. You can say that with OGL you can do all that too, but that's exactly my point. I'm saying that DX11 is faster than DX9, and up until now we have compared DX9 to OpenGL pathways, because there are no true DX10 engines (up until now, besides Unigine) that have been ported over to OGL 4.4 and are available for testing. This is what caused Valve's "wow" effect when tested against DX9: OGL is faster than DX9 because it has less overhead, even with a straight code port. That's the same "wow" effect that happened with World of Warcraft, Assassin's Creed (with 10.1), STO and many other games that received a straight port from DX9 to DX10, taking only the optimizations without changing the quality.
              But at the same time, DX11 kills OGL for the moment. Even the link you gave me showed that on AMD and Intel hardware, hardware that does not benefit from the heavy optimization that NVIDIA does on its DX11 pathways (go look up the shader cache). We can argue all we like, but we all know that NVIDIA drivers are the best when it comes to DX optimizations. It just means that the other vendors do not optimize their DX codepaths as hard as NVIDIA does, but maybe have a more balanced approach. (I own a 280X.) And the point I was trying to make is that we should test DX11 vs OGL on Windows vs OGL on Linux, because we should demand not just the same speed across OpenGL codepaths; we should demand the same speed between the DX11 and OGL codepaths.

              And OGL is falling behind again, and fast. They realize their APIs are a mess, but they can't move fast enough to catch up to DX12. It will be a slaughter for gaming on Linux. The performance gap will keep widening until we have OGL 5, or unless AMD pulls Mantle out of its hat (the only abstraction I believe can be truly multi-platform, as it can be as simple as copy-pasting HLSL DX11 code). And that will take years still.

              Again, as I said before, I agree that less API usage at runtime equals less overhead and more FPS. BUT an API doesn't define how a GPU will render the graphics; that is the job of the low-level hardware driver, which is unified. The path is D3D_compiler + D3D_state_tracker => IL/IR assembly => HW_compiler => low_level_driver. D3D doesn't go lower than the IL, doesn't have low-level access, doesn't define the shader model architecture (the opposite is true), and doesn't contribute to the execution and creation of graphics. Even compression is part of the shader model and is implemented deep in hardware. OGL with D3D compatibility extensions has the same speed as D3D, and with bindless graphics OGL is faster. Also, those things you call optimizations at the high (API) level don't exist; at this level you call them hacks, and NVIDIA is a master at this.
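
              (A hedged illustration of the analogous path on the OpenGL side: the GLSL front-end compiles source into the driver's internal IR, linking runs the hardware back-end compiler, and glGetProgramBinary exposes the vendor-specific result. Assumes a current GL 4.1+ context, valid shader sources, and no error checking:)

              // Sketch of the compile path as seen from OpenGL (GL 4.1+ context
              // assumed; compile/link error checks omitted for brevity).
              #include <GL/glew.h>
              #include <vector>

              std::vector<char> buildAndDumpBinary(const char* vsSrc, const char* fsSrc)
              {
                  GLuint vs = glCreateShader(GL_VERTEX_SHADER);
                  glShaderSource(vs, 1, &vsSrc, nullptr);
                  glCompileShader(vs);                       // front-end: GLSL -> driver IR

                  GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
                  glShaderSource(fs, 1, &fsSrc, nullptr);
                  glCompileShader(fs);

                  GLuint prog = glCreateProgram();
                  glAttachShader(prog, vs);
                  glAttachShader(prog, fs);
                  glLinkProgram(prog);                       // back-end: IR -> GPU machine code

                  GLint size = 0;
                  glGetProgramiv(prog, GL_PROGRAM_BINARY_LENGTH, &size);
                  std::vector<char> binary(size > 0 ? size : 0);
                  GLenum format = 0;
                  glGetProgramBinary(prog, size, nullptr, &format, binary.data());
                  return binary;                             // vendor-specific blob
              }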



              • #37
                Originally posted by sireangelus View Post
                D


                Because we need to start comparing pure speed at the same visual quality, and DX11 on Windows has an enormous advantage there.
                On Valley there are at least 200-300 points of difference between OpenGL and DX11 on Windows, for the same visuals and the same engine.
                We believed Valve's story that "Linux/OpenGL is faster" because their engine used an outdated, deprecated technology that won't seem to die on Windows: DX9.

                DX10/11 is actually a lot faster than DX9, if you build an engine from the ground up for it.
                Comparing OpenGL to DX11 would show how much work NVIDIA/AMD still have to do in terms of OpenGL optimizations.
                Citations, please.



                • #38
                  Originally posted by startzz View Post
                  Yes, they broke OpenGL performance for Windows. There aren't many OpenGL games for Windows, they are all very poorly optimized, and they kind of suck because of noob developers, and I don't play them, so it's only what I read on the NVIDIA forums: some people are getting their FPS cut in half, like 50 instead of 100, or 200 instead of 400.
                  One important thing to keep in mind is that games are not the only type of application using OpenGL.

                  Sure, the benchmarks are mostly done using game engines due to the lack of easy access to those other applications, but the numbers might still be valuable input when deciding which platform to deploy on.
                  Though, of course, it might not be likely that anyone needing such information for a professional setup would read Phoronix.

                  Cheers,
                  _



                  • #39
                    Originally posted by sireangelus View Post
                    4.4 of course
                    Valley does NOT run on a GL 4.4 engine. 4.4 didn't even exist when Valley was released.

                    I'm pretty sure it has a 3.2 engine, right? With a few GL4 extensions like tessellation added on?

                    At the very, very least, you can be sure they wanted it to run on Macs, and Macs don't support GL4.4.
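
                    (A rough sketch of how an engine would detect this at startup: query the context version and fall back to the ARB extension for tessellation on an otherwise 3.x context. Assumes a current context and a loader such as GLEW:)

                    // Rough sketch: decide at startup whether GL4-level tessellation can
                    // be enabled (assumes a current context and an initialized loader).
                    #include <GL/glew.h>
                    #include <cstdio>

                    bool tessellationAvailable()
                    {
                        GLint major = 0, minor = 0;
                        glGetIntegerv(GL_MAJOR_VERSION, &major);
                        glGetIntegerv(GL_MINOR_VERSION, &minor);

                        // Core in GL 4.0+; otherwise check the ARB extension, which is how
                        // a 3.2-class engine can still offer tessellation where available.
                        bool hasTess = (major >= 4) ||
                                       glewIsSupported("GL_ARB_tessellation_shader");

                        std::printf("GL %d.%d, tessellation %s\n",
                                    major, minor, hasTess ? "available" : "unavailable");
                        return hasTess;
                    }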



                    • #40
                      Originally posted by smitty3268 View Post
                      Valley does NOT run on a GL 4.4 engine. 4.4 didn't even exist when Valley was released.

                      I'm pretty sure it has a 3.2 engine, right? With a few GL4 extensions like tessellation added on?

                      At the very, very least, you can be sure they wanted it to run on Macs, and Macs don't support GL4.4.
                      Unigine Heaven was already 4.0; they will of course have a fallback to 3.2 on Macs.
                      Sometimes I wonder if people have brains. Or at least Google.

                      Originally posted by artivision
                      Again, as I said before, I agree that less API usage at runtime equals less overhead and more FPS. BUT an API doesn't define how a GPU will render the graphics; that is the job of the low-level hardware driver, which is unified. The path is D3D_compiler + D3D_state_tracker => IL/IR assembly => HW_compiler => low_level_driver. D3D doesn't go lower than the IL, doesn't have low-level access, doesn't define the shader model architecture (the opposite is true), and doesn't contribute to the execution and creation of graphics. Even compression is part of the shader model and is implemented deep in hardware. OGL with D3D compatibility extensions has the same speed as D3D, and with bindless graphics OGL is faster. Also, those things you call optimizations at the high (API) level don't exist; at this level you call them hacks, and NVIDIA is a master at this.
                      You are wrong. DirectX defines so many things about the hardware that I won't even bother explaining how the process of creating a new GPU works. Unified shaders were brought in by DX10. IT'S THE DX SPEC THAT DEFINES THE SHADER MODEL. MICROSOFT TELLS NVIDIA & CO. WHAT THEY NEED TO SUPPORT, NOT THE OPPOSITE. OpenGL features are then just ported over after DX has defined the spec. DX10 also brought WDDM 1.0, and subsequent changes to WDDM allowed changes to the API (features you can't get on a lower WDDM).
                      Optimizations are at the compiler level. And you are talking as if you think drivers on Windows work like those on Linux, graphics stack included.
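
                      (The "what the hardware must support" point corresponds to D3D feature levels; a minimal sketch, with error handling omitted, of asking the runtime for the highest level the installed driver/GPU pair actually provides:)

                      // Minimal sketch: query the highest D3D feature level available; the
                      // feature level fixes the capability set (shader model, tessellation,
                      // compute, ...). Error handling omitted for brevity.
                      #include <windows.h>
                      #include <d3d11.h>

                      D3D_FEATURE_LEVEL queryFeatureLevel()
                      {
                          const D3D_FEATURE_LEVEL wanted[] = {
                              D3D_FEATURE_LEVEL_11_0,   // SM 5.0, tessellation, compute
                              D3D_FEATURE_LEVEL_10_1,
                              D3D_FEATURE_LEVEL_10_0,
                              D3D_FEATURE_LEVEL_9_3,
                          };

                          ID3D11Device*        device  = nullptr;
                          ID3D11DeviceContext* context = nullptr;
                          D3D_FEATURE_LEVEL    got     = D3D_FEATURE_LEVEL_9_3;

                          // Null adapter = default GPU; the runtime picks the first entry in
                          // "wanted" that the installed driver can actually provide.
                          D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                            wanted, ARRAYSIZE(wanted), D3D11_SDK_VERSION,
                                            &device, &got, &context);

                          if (context) context->Release();
                          if (device)  device->Release();
                          return got;
                      }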

