
Thanks To Vulkan, We Should Be Seeing More 64-bit Linux Games


  • #21
    Originally posted by schmidtbag View Post
    @eyedee and bridgman
    But, doesn't the code need some significant changes just to go from Windows to Linux anyway, regardless of Vulkan?
    Yes, but doing the changes twice is more work than doing them once. Also, consoles are 64-bit, so porting to 32-bit would mean even more work.



    • #22
      Yep, I was thinking about existing Linux games that had already been ported over to OpenGL and released as 32-bit binaries.

      Agreed that if a game hasn't been ported yet, Vulkan won't have an impact, other than maybe on any game dev thinking "eww, OpenGL, I'll stick with Windows".



      • #23
        Originally posted by SaucyJack View Post
        64-bit has always performed better. It's mostly a case of ultra lazy game devs and I'm sure Visual Studio has something to do with it too.
        Why do you think 64-bit apps perform better? Unless they're written specifically to target new CPU functionality, recompiling 32-bit code for 64-bit should give you a modest performance drop. Pointers will be twice as large, for example, taking more memory and requiring more memory bandwidth to perform effectively the same task.



        • #24
          AFAIK there's a tradeoff between bigger pointers (slower) and more registers to keep the active pointers in (faster).

          My impression was that on balance 64-bit typically ended up a bit faster.



          • #25
            If your game (or other program) really needs the better data locality of smaller pointers, you can get that even in 64-bit mode. Allocate one memory block and, instead of pointers, use 32-bit offsets into that block. Or even 16-bit offsets with a multiplier: since most pointer targets are aligned to 8 bytes, a 3-bit shift is easy. Thanks to pipelining, these add and shift operations are nearly free.



            • #26
              Originally posted by dkasak View Post

              Why do you think 64-bit apps perform better? Unless they're written specifically to target new CPU functionality, recompiling for 64-bit from 32-bit should give you a modest performance drop. Pointers will be twice as large, for example, taking more memory and requiring more memory bandwith for effectively performing the same task.
              It's not about 'think' or 'should', it's about facts. Check out the 32-bit vs. 64-bit benchmarks Phoronix ran a while back: in every case 64-bit was faster, sometimes by several hundred percent (not only in games, but also in things like compilation, web servers, etc.). So it turns out the size of pointers doesn't matter as much as you'd expect.



              • #27
                Originally posted by SaucyJack View Post
                64-bit has always performed better. It's mostly a case of ultra lazy game devs and I'm sure Visual Studio has something to do with it too.
                Visual Studio doesn't run/compile the code. I couldn't care less whether the IDE was 32-, 64-, or 16-bit.



                • #28
                  Originally posted by kaprikawn View Post
                  Definitely this. It really irks me that I have to enable multilib pretty much just for Steam. If it weren't for Steam I could probably do a pure 64-bit install.

                  Having entire libraries installed just for one use hurts my OCD. Like having to install Java just for Minecraft... urgh!
                  Now imagine this for a Gentoo user, where I have to compile it all (automatically, thanks FSM!).



                  • #29
                    Meh, that won't happen. Standard practice for proprietary software is to release 32-bit builds, and that will remain so for any proprietary software that is already 32-bit, unless 64-bit is really needed (workstation stuff).

                    But I don't care. The solution is sandboxing. I'm still waiting for the day we can truly sandbox stuff, so all that crappy proprietary software will stop filling my system with 32-bit libraries (maybe even non-64-bit-safe ones, so I have to hack them to use their crappy libs from another place, hooray!).

                    I don't give a flying about having 400 copies of the same libraries on my system, nor about wasting dozens of GBs on that. What I want is to be able to install stuff without manually sorting out a %&%£$ dependency hell for every shitty proprietary program I try to install on my non-%&£%£-Ubuntu-LTS Linux system.



                    • #30
                      I asked the Dolphin developers about Vulkan and one of them replied to me!

                      Originally posted by degasus
                      - Do you think Vulkan could evolve to become something better than the best OpenGL implementation?
                      They have different use cases. Vulkan tries to match the hardware as closely as possible, while OpenGL tries to provide an API for graphics development. So very often, OpenGL is just the better match. But for e.g. game engines, Vulkan is by far the better choice. Vulkan requires more knowledge about GPU internals, though, and much more code to write. So the common way will likely be to only use Vulkan if you need to...

                      - If possible: what would be needed for projects like Dolphin to gain as many advantages as possible from a new graphics API like this?
                      Writing a new video backend. There was a pull request a few weeks ago for the new D3D12 video backend, so it would just be the same again for Vulkan.

                      - Do you think Vulkan is a step in the right direction?
                      I think it will help Dolphin a bit. But the biggest improvements have already been made within the OGL backend; AZDO with persistently mapped buffers is a good example. Another good improvement may be the PipelineObject cache. Right now, each GL driver should already cache them. With Vulkan, we have to do this ourselves. As we're at a higher level, this cache may be a bit faster, especially if it's done *very* badly in the driver (looking at those mobile vendors...).

                      In general, I fear Vulkan might not live for the next 20 years. It's at a lower level, so the API will need to be redesigned earlier... Still fine, but everyone should check their requirements...

                      - What do you think are the biggest limitations and issues of graphics APIs?
                      At which level the developer wants to program. Writing shaders is simple; taking care of memory management isn't. There is a reason this has such a high overhead within GL drivers. A lower level allows you to optimize everything, but it requires you to take care of everything. To me, it sounds like Vulkan and D3D12 match the hardware quite well. But most programmers don't want to program the hardware directly.

                      For Dolphin, there is a huge bottleneck which Vulkan tries to fix for game engines. The biggest performance gain of Vulkan comes from cached command buffers. Generating command buffers needs lots of hardware-dependent state checks and validation. In GL, there are no command buffers (there were, called display lists; they matched the hardware at the time, but some years later they were only emulated within the driver). Everything is queued directly on creation. Vulkan allows you to cache and reuse those command buffers, and to generate them on multiple threads. The cost of creating command buffers is also high in Dolphin, *but* Dolphin can't reuse command buffers. The emulated GPU doesn't match the current design of command buffers, so we have to generate a stream of commands and use them once. So the main improvement of Vulkan and D3D12 doesn't apply to emulation :/

                      I fear this might hurt us very badly. Let's hope all of those vendors still try to be as fast as possible at command buffer generation. This *was* a bottleneck in GL, but in Vulkan it is only a bottleneck for badly written games, like every emulator...
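                      For readers less familiar with Vulkan, the record-once/replay-many pattern he's describing looks roughly like the fragment below. This is only a non-runnable sketch (it assumes a VkQueue `queue` and an allocated VkCommandBuffer `cmd` already exist, and omits all error handling); the point is that the validation cost is paid at record time, and an emulator that must re-record every frame pays it every frame.

```c
/* Record once; no ONE_TIME_SUBMIT flag, so the buffer may be resubmitted. */
VkCommandBufferBeginInfo begin = {
    .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO,
    .flags = 0,
};
vkBeginCommandBuffer(cmd, &begin);
/* ... vkCmdBeginRenderPass / vkCmdBindPipeline / vkCmdDraw ... */
vkEndCommandBuffer(cmd);

/* Replay every frame: the expensive validation was done once, above. */
VkSubmitInfo submit = {
    .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
    .commandBufferCount = 1,
    .pCommandBuffers = &cmd,
};
for (int frame = 0; frame < 3; ++frame)
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
```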

                      - What do you think would be the best approach to evolve?
                      Flexibility, low overhead, stable API. I guess we have to choose....

                      - Why is NVIDIA able to provide good graphics drivers while the competition releases such broken drivers? Do they lack the manpower and talent to make it happen? Is there hope this will change?
                      As far as I know, AMD is more conformant with the GL specs than NVidia. But most graphics devs just develop against NVidia, and if other vendors behave differently, they are called broken.
                      But I think that's not the point here. It's really hard to write a conformant GL driver, but it's far harder to write a fast GL driver. GL usually doesn't specify what the hardware shall do, nor how fast (or slow) some calls may be; it just defines the resulting behavior. There are often different ways to get a result, and far more ways for the hardware to do so. There are usually no meaningful hints about how the driver should do it, so it has to guess. It seems like NVidia spends more time guessing what the game wants to do.
                      The other point is GPU utilization vs. CPU overhead. Most vendors care most about GPU utilization, so a game should still run fine at very high resolutions. Driver overhead is more about getting a very shitty game (like emulators...) running fast while the GPU is mostly idle. Only some strange applications will profit if the vendor optimizes here, so it seems like mostly NVidia tries to do so...

                      - Why are OpenGL/Vulkan implementations proprietary most of the time, and why do vendors not care about the Mesa/Gallium3D approach? Do you think companies like NVIDIA hide their "secret sauce" in the drivers, and won't make better drivers until they move most of their secrets into binary blobs aka firmware? What are they so stubbornly hiding?
                      Oh, please ask me technical questions
                      But both the Intel and AMD Mesa drivers are written by paid developers. NVidia is the one trying everything to get their blob working as well as possible.


                      - Would you please consider writing an article about Vulkan? Of course, take your time. It would be amazing if it were even a bit as amazing as your OpenGL article!
                      I'm pretty sure there will be one


                      Hello. I find your project amazing on many levels. Congratulations on the OpenGL article; I really hope AMD and Intel wake up and someday catch NVidia in FOSS drivers. You all aren't only able to kno

