Unigine Engine Looks To Wasteland 2


  • Kano
    replied
    Well, NVIDIA seems to be faster with OpenGL; what's different there, then?


  • elanthis
    replied
    Originally posted by ZedDB
    Here you have a test with DX11 scores and opengl scores: http://www.sweclockers.com/recension...850/7#pagehead
    I think they are quite close in performance. However, I think NVIDIA and AMD spend more time on optimizing Direct3D than they do on OpenGL, because that is what most games currently use on the PC platform.
    The core driver code is likely identical in terms of hardware acceleration.

    The performance differences between GL and D3D stem from GL's state model, the difficulty of threading it properly, and the extra checks that have to be run on many object uses because of the highly mutable object model. The latter at least is being slowly fixed with each successive version of GL (e.g., ARB_texture_storage), but it has a long way to go; a small sketch contrasting the old bind-to-edit pattern with immutable storage follows this post. The threading problem cannot be fixed without literally scrapping and redesigning the API, since the fundamental problem is that the API expects and requires magic global hidden state (which can be thread-local, but that is not free); even a short-term fix would require scrapping and redesigning WGL and GLX (the changes alongside GL3 made them much better, but they are still far from perfect). The GL state model is just utter shit and needs to be shot in the face six times with a high-powered rifle; there's no fixing it, only throwing it away and starting over. The API is simply trash, and even Khronos knows it (hence the Longs Peak fiasco). They just aren't willing to do anything about it: they introduce things like the Core profile that break back-compat in little minor ways that barely affect anything at all, while refusing to introduce a revised API that breaks things in larger but actually useful ways.

    The biggest problem with GL as an app developer is that -- on Windows -- the drivers are simply buggy and unstable. I still run into frequent driver crashes or crazy performance problems that are simply bugs. The problems usually do get fixed eventually (though a few really bad long-term bugs haven't been fixed even after two years in NVIDIA's drivers), but the releases that fix one set of bugs inevitably introduce more.

    Don't even get me started on what a horrifically bad shading language GLSL is, either. It is only just becoming sane with GLSL 4.20, which means you can't actually use any of its newer features, since most of us need to target GL 3.1 hardware (Intel), GL 3.2 operating systems (OS X), or just stick to GLSL ES 2.0 (iOS, Android, Native Client). A sketch of the version-portable way to bind a sampler, without GLSL 4.20's layout qualifiers, also follows below.
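    A minimal sketch of the mutable-object problem mentioned above and of what ARB_texture_storage changes. It assumes a current GL context and an extension loader for the post-1.1 entry points; the function names are illustrative, not from the engine or the post.

    #include <GL/gl.h>
    #include <GL/glext.h>   /* tokens/prototypes; an extension loader is assumed */

    /* Classic mutable path: the texture is edited through whatever happens to be
     * bound to GL_TEXTURE_2D (hidden global state), and any later glTexImage2D
     * call may silently change its size or format, so the driver has to
     * re-validate completeness at draw time. */
    GLuint make_texture_mutable(GLsizei w, GLsizei h, const void *pixels)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);              /* bind-to-edit */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        return tex;
    }

    /* ARB_texture_storage path (core in GL 4.2): storage is allocated once and
     * becomes immutable, so the driver can validate the object up front instead
     * of re-checking it on every use. */
    GLuint make_texture_immutable(GLsizei w, GLsizei h, const void *pixels)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, w, h);   /* immutable format */
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        return tex;
    }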

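    And a sketch of the GLSL targeting point: GLSL 4.20 lets a shader pick its own texture unit with a layout qualifier, but on GL 3.x-class targets and GLSL ES 2.0 the binding has to be done from the application side after linking. 'program' and 'u_diffuse' are illustrative names, not from the post.

    /* GLSL 4.20 (or ARB_shading_language_420pack) allows:
     *     layout(binding = 0) uniform sampler2D u_diffuse;
     * The portable GL 3.x / GLSL ES 2.0 equivalent is to query the uniform
     * location and assign the unit from C. Assumes a current context and a
     * linked 'program'. */
    void bind_diffuse_sampler(GLuint program)
    {
        GLint loc = glGetUniformLocation(program, "u_diffuse");
        if (loc >= 0) {
            glUseProgram(program);
            glUniform1i(loc, 0);        /* sampler reads from texture unit 0 */
        }
        glActiveTexture(GL_TEXTURE0);   /* the texture itself gets bound to unit 0 */
    }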

  • Kano
    replied
    As mainly the tessellation implementation seems to be affected, how about optimizing that a bit?


  • binstream
    replied
    Originally posted by Kano
    What do you attribute the 30% lower speed to?
    The problem is in the OpenGL implementation from AMD.


  • Kano
    replied
    What do you attribute the 30% lower speed to?


  • binstream
    replied
    Originally posted by Kano
    And your tests show no extreme difference in performance with Heaven between DX11 and OpenGL, and on your systems you get better performance using CrossFire with OpenGL?
    There is no extreme difference in performance between DX11 and OpenGL.


  • binstream
    replied
    Originally posted by benmoran
    Binstream,

    I just wanted to say thank you for supporting the open source drivers as well. With Oil Rush the performance is really impressive, all things considered. I finally had time to play past the second chapter, and it really is a lot of fun.

    I'm a Wasteland 2 backer, and really hope they go with Unigine. I think it would be a nice fit for the type of game they're making. Wasteland 2 would also give Unigine a bit of publicity, and hopefully get more people interested in using it.
    Thank you for your kind words!


  • benmoran
    replied
    Well, I agree with that. In the end it's up to the inXile guys to determine whether it's a good match for them. We'll just have to wait and see, I guess. If they don't end up going with Unigine, it would be interesting if they explained exactly why. It's all just speculation for now.


  • yogi_berra
    replied
    Originally posted by benmoran
    I'm a Wasteland 2 backer, and really hope they go with Unigine. I think it would be a nice fit for the type of game they're making. Wasteland 2 would also give Unigine a bit of publicity, and hopefully get more people interested in using it.
    Imagine how bad Unigine will look if they don't use it.

    They aren't making the choice of game engine based on the engine itself, but on the documentation of the engine, the tools that come with it, and how easy it is to integrate their own tools with it. They're getting my money no matter what engine they use, but I would rather have them spend $100,000 on an engine that doesn't take a lot of effort to start scripting than take a free license and have to spend a month rebuilding the tools just to get to the point where they can start scripting.


  • smitty3268
    replied
    Originally posted by Kano
    First of all, I have nothing against Unigine, but some issues should be addressed. It is a bit similar to the Rage issues with AMD drivers. If you want, you can blame AMD for everything. If you want to see cross-platform games using OpenGL 4 / DX11 features, this should not result in such a performance penalty that even their own game does not use them. It is hard to believe that they cannot reproduce the same results at their own office. Maybe Unigine could get some hints from AMD on how to optimize the code in a way that makes those cards work faster. Most likely AMD does that on the fly for DX11; maybe just rename the binary and try to bench again. This could be a bit tricky, however, because the "real" binary is executed by a launcher with the selected settings. I really want to see Linux games that use the latest OpenGL 4 features, but I doubt that a bit, even when a game will use this engine. A first step could be working together with AMD to get an OpenGL profile for CrossFire - NVIDIA has SLI for that as well - sometimes it would be cheaper to add the same card again instead of buying a new one, but that's pretty useless currently for OpenGL, especially on Linux. All you can do is get a fast card directly, even when your board would support more cards.
    I think it's pretty common for engines to favor NVIDIA or AMD over the other. That's why GPU hardware reviews generally include multiple tests (except for Phoronix, which uses 100 variations of Q3).

    Anyway, out of curiosity, what happens if you disable Catalyst AI? I don't know whether it is tuned to work better for D3D11 on Windows than for Linux, or whether it wouldn't make any difference.
