
How Open-Source Allowed Valve To Implement VULKAN Much Faster On The Source 2 Engine


  • dragorth
    replied
    Originally posted by b15hop View Post
    On that note, where do we learn how to code in Vulkan? I feel like it's going to be a gem if they pull it off right. I really hope Valve gets Vulkan up to scratch, because it's about time.

    I would be very disappointed if Nvidia doesn't support Vulkan on at least the 700-series GPUs. My GTX 780 would be a paperweight otherwise. I have a feeling they will try to push their latest GPUs as soon as Vulkan comes out, even though it really should be a huge performance increase for certain tasks.
    You cannot learn Vulkan directly, at least not currently. Vulkan is based on AMD's Mantle, so if you want to use something that should be roughly 80% similar, at a guess, you could test Mantle on Windows if you have a compatible AMD GPU. There will be differences, but we won't know what they are until the Vulkan spec is released. Or you could take a look at DX12 on Windows 10. The model is very similar, and you get the benefit of not learning an obsolete API; the sketch below shows the shape of that shared command-buffer model.
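
    For a feel of what that model looks like in practice, here is a minimal sketch written against the Vulkan 1.0 API as it was eventually published: create a device, record one empty command buffer, and submit it to a queue. Using queue family 0 and dropping all error checking are simplifying assumptions for brevity, not how a real engine would do it.

        /* Minimal Vulkan command-buffer round trip.  Build with: cc demo.c -lvulkan */
        #include <vulkan/vulkan.h>
        #include <stdio.h>

        int main(void)
        {
            /* Create an instance and grab the first physical device. */
            VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
            VkInstance instance;
            vkCreateInstance(&ici, NULL, &instance);

            uint32_t count = 1;
            VkPhysicalDevice phys;
            vkEnumeratePhysicalDevices(instance, &count, &phys);

            /* Create a logical device with one queue from family 0 (a real
             * application would query the queue families first). */
            float priority = 1.0f;
            VkDeviceQueueCreateInfo qci = {
                .sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
                .queueFamilyIndex = 0,
                .queueCount = 1,
                .pQueuePriorities = &priority,
            };
            VkDeviceCreateInfo dci = {
                .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
                .queueCreateInfoCount = 1,
                .pQueueCreateInfos = &qci,
            };
            VkDevice device;
            vkCreateDevice(phys, &dci, NULL, &device);
            VkQueue queue;
            vkGetDeviceQueue(device, 0, 0, &queue);

            /* Allocate a command buffer from a pool: work is recorded up front
             * and handed to the queue, not issued call-by-call as in GL. */
            VkCommandPoolCreateInfo pci = {
                .sType = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO,
                .queueFamilyIndex = 0,
            };
            VkCommandPool pool;
            vkCreateCommandPool(device, &pci, NULL, &pool);

            VkCommandBufferAllocateInfo cbai = {
                .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO,
                .commandPool = pool,
                .level = VK_COMMAND_BUFFER_LEVEL_PRIMARY,
                .commandBufferCount = 1,
            };
            VkCommandBuffer cmd;
            vkAllocateCommandBuffers(device, &cbai, &cmd);

            /* Record and submit: draws, copies and barriers would go between
             * Begin and End, and recording can happen on any thread. */
            VkCommandBufferBeginInfo begin = { .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO };
            vkBeginCommandBuffer(cmd, &begin);
            vkEndCommandBuffer(cmd);

            VkSubmitInfo submit = {
                .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
                .commandBufferCount = 1,
                .pCommandBuffers = &cmd,
            };
            vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
            vkQueueWaitIdle(queue);

            vkDestroyCommandPool(device, pool, NULL);
            vkDestroyDevice(device, NULL);
            vkDestroyInstance(instance, NULL);
            puts("submitted one empty command buffer");
            return 0;
        }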

    It will be up to Nvidia which GPUs they support with Vulkan, though Khronos has said Vulkan requires at least an OpenGL ES 3.1-class GPU. Unless, of course, someone writes an open-source driver for the cards Nvidia does not support.



  • b15hop
    replied
    Originally posted by Kano View Post
    You are completely wrong. Nvidia already outperformed DirectX 9 without Vulkan on a high-end Intel box (definitely not an AMD CPU!) with Source, and Vulkan was tested on an Intel GPU (which is only possible with an Intel CPU). So in what test did you see AMD leading in performance? You should stop dreaming.
    On that note, where do we learn how to code in Vulkan? I feel like it's going to be a gem if they pull it off right. I really hope Valve gets Vulkan up to scratch, because it's about time.

    I would be very disappointed if Nvidia doesn't support Vulkan on at least the 700-series GPUs. My GTX 780 would be a paperweight otherwise. I have a feeling they will try to push their latest GPUs as soon as Vulkan comes out, even though it really should be a huge performance increase for certain tasks.



  • techzilla
    replied
    AMD is the driving, innovating force in the GPU stack.

    Personally I use AMD for all my CPUs and GPUs, as I see them as the most beneficial player. It's on their shoulders that all the reverse-engineered, community-based projects are standing, and their work is paying off in the new ARM Gallium drivers as well. Of course I'd love to see AMD in the ARM space 100%, and I plan on getting one of the new Seattle-based systems as soon as they launch.

    Intel is very Linux-compatible, no question; they ensure their stuff works, and they deserve props for their contributions... but they don't create the foundation that AMD creates. And Nvidia... well, they are just leeches, plain and simple. If they did anything, it would be just to spite AMD, so they can get back to ending open drivers for good. So for my money, and for all systems I influence, I use AMD.

    Plus, my 16-core Opteron ain't beanbag.



  • dragorth
    replied
    Originally posted by Qaridarium
    If you only push integer numbers from the Linux kernel to the Vulkan GPU, then you effectively have an 8 (integer-)core CPU with the 4-module design.

    Also, Linux kernel speed is based only on integer numbers.

    So I am right: if you use Linux+Vulkan, the AMD 8-core (4 modules) is the very best you can get.
    Sorry, but that is incorrect logic.

    First, if you "push" integers to Vulkan, then the GPU is doing the work, and the CPU needs neither integer nor FP units; it simply needs a copy operation.

    Second, Linux kernel speed is not based only on integer numbers, but on whatever is useful for the use case and on the compiler options that are chosen, many of which can optimize ordinary loops into SIMD instructions for the speed increase that offers, as the sketch below illustrates.
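
    As a minimal illustration of that compiler point (assuming GCC or Clang on x86; the file name and the loop itself are purely illustrative), the same integer loop compiles to scalar or SIMD code depending only on the flags:

        /* sum.c -- the same C source, scalar or SIMD depending on flags.
         * Compare:  gcc -O2 -fno-tree-vectorize -S sum.c
         *     vs.:  gcc -O3 -march=native -S sum.c   (look for paddd/vpaddd) */
        #include <stdint.h>
        #include <stdio.h>

        #define N 4096

        int32_t sum(const int32_t *a, const int32_t *b, int32_t *out)
        {
            int32_t total = 0;
            /* A plain element-wise add: GCC and Clang can auto-vectorize this
             * into SSE/AVX at -O3 (or at -O2 with vectorization enabled). */
            for (int i = 0; i < N; i++) {
                out[i] = a[i] + b[i];
                total += out[i];
            }
            return total;
        }

        int main(void)
        {
            static int32_t a[N], b[N], out[N];
            for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }
            printf("checksum: %d\n", sum(a, b, out));
            return 0;
        }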

    Third, AMD's strategy does not work in compute-heavy environments without an add-in card such as a GPU. Since AMD makes GPUs, this could promote selling more of their products, but they don't supply the same level of support as their competition for those high-end use cases.

    Fourth, on the APU side, this strategy does work if the program can take advantage of OpenCL, since the built-in GPU is a better solution than the AVX extensions, but at the cost of developer overhead and buy-in, which they haven't yet won; a sketch of that offload path follows below.
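
    As a rough sketch of that OpenCL path (assuming a runtime that exposes the APU's built-in GPU; the kernel name vadd and the array sizes are purely illustrative, and error checking is omitted for brevity):

        /* vadd.c -- offload a vector add to the first GPU device.  Link with -lOpenCL. */
        #define CL_TARGET_OPENCL_VERSION 120
        #include <CL/cl.h>
        #include <stdio.h>

        static const char *kSrc =
            "__kernel void vadd(__global const float *a, __global const float *b,\n"
            "                   __global float *c) {\n"
            "    size_t i = get_global_id(0);\n"
            "    c[i] = a[i] + b[i];\n"
            "}\n";

        int main(void)
        {
            enum { N = 1024 };
            float a[N], b[N], c[N];
            for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

            /* Pick the first GPU on the first platform (the APU's IGP here). */
            cl_platform_id plat; cl_device_id dev;
            clGetPlatformIDs(1, &plat, NULL);
            clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
            cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
            cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

            /* Build the kernel from source at run time. */
            cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, NULL, NULL);
            clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
            cl_kernel k = clCreateKernel(prog, "vadd", NULL);

            /* Copy inputs into device buffers, run, and read the result back. */
            cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof a, a, NULL);
            cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof b, b, NULL);
            cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);
            clSetKernelArg(k, 0, sizeof da, &da);
            clSetKernelArg(k, 1, sizeof db, &db);
            clSetKernelArg(k, 2, sizeof dc, &dc);

            size_t global = N;
            clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
            clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);
            printf("c[10] = %.1f (expect 30.0)\n", c[10]);

            clReleaseMemObject(da); clReleaseMemObject(db); clReleaseMemObject(dc);
            clReleaseKernel(k); clReleaseProgram(prog);
            clReleaseCommandQueue(q); clReleaseContext(ctx);
            return 0;
        }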

    Don't get me wrong: I think this is the future, as Vulkan, DX12 and OpenCL 2.1 are showing, but AMD has left performance on the tasks that are here and now to their competitors, which is not what they should be doing.

    AMD has always bet big, and this bet may pay off in the long run, but for now I buy their GPUs and leave their CPUs alone.



  • Kano
    replied
    You are completely wrong. Nvidia already outperformed DirectX 9 without Vulkan on a high-end Intel box (definitely not an AMD CPU!) with Source, and Vulkan was tested on an Intel GPU (which is only possible with an Intel CPU). So in what test did you see AMD leading in performance? You should stop dreaming.



  • Kano
    replied
    Your logic is wrong. There are no full 8 cores from AMD; the maximum for desktop users has been 6. The 8-core is a 4-module design with only 4 FPUs and double the number of integer units, and that at the lowest possible speed per core. Better to put it in the dustbin and not talk about it anymore.



  • Kano
    replied
    In other words: you buy AMD GPUs as an excuse to play on Windows.



  • d2kx
    replied
    I own an AMD E-350 APU in my notebook, which to this day still works amazingly well with the open-source stack for everyday usage and H.264 video.

    I owned both an HD 6870 and an R7 260X, which also both worked nicely with the open-source stack. I donated my HD 6870 to Qaridarium and my R7 260X to my parents.

    But I've got an R9 285 in my main desktop right now, and I am very much looking forward to the AMDGPU code drop. The proprietary Catalyst driver has gotten a lot better, but it is still nowhere near where it should be. Stability, video decoding and compatibility are a total disaster compared to the open-source stack:

    - still no Vsync with Xv
    - tearing all over the place unless you activate "Tear Free Desktop", aka triple-buffering-with-input-lag-everywhere
    - game stuttering due to the super slow OpenGL binary shader cache when you play games/maps for the first time (somewhat improved with fglrx 15.200 beta)
    - broken app profiles (the hl2_linux binary detection limits max FPS and disables in-game multithreading for Team Fortress 2; you have to rename the binary to something else to work around the app profile and get better performance)
    - random crashes in basically any application that uses OpenGL (Chrome, Spotify, ...)
    - random corruption in any OpenGL application from time to time
    - the famous render-fullscreen-OpenGL-windows-16px-too-far-to-the-right bug (fixed with fglrx 15.200 beta)
    - 100% CPU usage with VAAPI when Vsync is enabled
    - no VAAPI support for multithreaded video decoders like the one Kodi uses
    - overall slow OpenGL performance due to heavy CPU overhead (GPU usage is barely at 50%, if that, in Counter-Strike: GO and similar titles)

    So yeah, really looking forward to that AMDGPU drop.



  • valeriodean
    replied
    Originally posted by bridgman View Post
    It was never "supposed to be ready to show itself" at any particular time -- I don't know exactly what Alex said at XDC but at best it would have been a casual estimate based on progress so far. Unfortunately the pattern with IP review is always the same -- the first 75% is easy, the next 20% is much harder, and the last 5% seems like it will never happen.

    It's down to maybe the last 1% now, so things are looking promising.



    Right -- SPIR-V is intended as a transfer IR, not an internal IR, so if it replaced anything it would be TGSI. I don't think there has been much discussion of that yet, though.
    Thank you. So... I will wait for decisive news by the end of May.



  • dragorth
    replied
    Originally posted by SystemCrasher View Post
    Oh yeah, and Adolf Hitler was so excited about one ruler for the whole world. "Ein Volk, ein Reich, ein Führer" -- sounds familiar? Or maybe similar? That's what sucks about MS _and_ those Windows-minded people. As a result you turn into a stubborn bastard who is unable to imagine any new ways of doing things and only blindly follows their Führer.
    Um, project much? I only said I wanted them to succeed; I did not say I want them to rule the world, or even just the computer space. I currently use OS X, I use Linux and BSD servers online, and I have Windows VMs, and none of that says I want Windows to be the only OS on the market.

    Secondly, comparing Windows 10's success to Hitler and one of the worst humanitarian atrocities in our history is both an insult to those who died and an insult to the very real tragedies occurring now, as our governments seem to want to go down those same paths again.

    Third, I want HoloLens to succeed, and no one else on the market is close to this type of user interface. Google has Glass, but that interface is not built for the same kind of display. I wear glasses all the time anyway, so something like this will really be living in the future. And only if HoloLens succeeds will competitors come to market, since the hardware industry is so risk-averse.

    Now, if you want to start working on an interface for LinuxLens, go for it; I want that type of device. In the meantime, stop ascribing human characteristics to companies, because they are not people; they aren't even alive. If you feel the need to assign blame, there are people at the top who might be negligent, but the company itself doesn't feel a thing.

