Quake Live Now Available To Linux Gamers


  • BlackStar
    replied
    Originally posted by Dragonlord View Post
    Point denied. You cannot optimize beyond the ability of the language. This is a common misconception about managed languages. Each language has barriers, and spending more time is not going to get you past them. What you can do is push them a bit, which is what Java did with JIT. It can do a nice job there, but there's only so much a machine can figure out without knowing the higher logic of a program.
    Wrong. You are still thinking on the scale of micro-optimizations, where low-level languages indeed shine.

    However, you can get much larger benefits by improving your algorithms, writing hardware specific paths (e.g. different shaders for radeons and geforces) or just plain optimizing your level layout and assets.

    Which brings us back to what I say all the time. The number crunching has to be done in C/C++ code (therefore the game engine itself) and only the logic is in the scripted language.
    An engine contains many more things than number crunching and logic: asset loaders, streaming, tracing, threading, scripting, networking, profiling, debugging. Stuff which hardly benefits from a lower level language.

    In fact, the lower-level you go, the harder your life becomes: bugs, memory leaks, unportable code, slower compile times, ABI issues (try exporting C++ code from a dll - nice, huh?). A higher-level language will help you write more maintainable code with fewer bugs and less effort. You can then spend the time you saved optimizing the parts where it actually matters, and that's why using a managed language can result in better performance.

    Case in point: in my last project, I created a 3D world driven through a brain-computer interface. High-end graphics (VSM, SSAO, displacement mapping, stereoscopic rendering). A purely GLSL-driven engine, utilizing OpenGL 3.0 (with 2.1 fallbacks) and running on Windows, Linux and Mac OS X.

    Technology? C# and OpenTK. Performance? Scales from laptop graphics to CAVE systems. And you know what the best thing is? This was developed on Linux, but deployed on all platforms - without even recompiling the binaries.

    This is what you gain for using a managed language for your engine. It allows you to do things that are otherwise painful or flat out impossible to achieve.

    Game logic needs to sense/understand the world it acts in. This encompasses things like collision detection and various other forms of sensing and decision making. All of these require gathering a subset of information from the game world quickly and efficiently to calculate the result. Obtaining this information is number-crunching work and requires raw firepower.
    Collision detection = math, which you can move into a reusable, hand-optimized, library. You don't need to code the rest of the engine in the same language as your highly optimized math library. Unless you love writing network logic in assembly, that is. :P

    You still haven't given any specific example of memory hacks that you have written in C++ (or whatever language you are using for your engine), but couldn't have implemented in C#.
    Last edited by BlackStar; 28 August 2009, 05:19 PM.



  • Dragonlord
    replied
    Originally posted by BlackStar View Post
    Here is where your fallacy lies. You can use those extra 6 months you gained by using a managed language to optimize your code and potentially improve your performance by more than 5-10%.
    Point denied. You cannot optimize beyond the ability of the language. This is a common misconception about managed languages. Each language has barriers, and spending more time is not going to get you past them. What you can do is push them a bit, which is what Java did with JIT. It can do a nice job there, but there's only so much a machine can figure out without knowing the higher logic of a program.

    Also note that you can abstract the intensive number-crunching parts of an engine into ultra-optimized native dlls. You can then implement the rest of the engine in a managed language with no or minuscule loss in performance.
    Which brings us back to what I say all the time. The number crunching has to be done in C/C++ code (therefore the game engine itself) and only the logic is in the scripted language.

    On the contrary, the more power you have the more you can afford to squander. No game or engine is released 100% optimized anymore - hardware is simply evolving too fast for that. Optimize for current gen cards? Your optimizations are obsolete when your game is released two years later. Optimize for next gen cards? Your game runs suboptimally on current hardware.
    Which is why I state over and over again that the current way of designing game engines is totally obsolete and unsuitable for today's hardware. Wasting processing power on unoptimized code or managed-language housekeeping is not the solution; it's just fighting the symptoms.


    Better optimized programs are good, no one will argue against that. However, this paragraph doesn't make any sense. What the hell are "low level informations"? Why do you need "number crunching" to gather them?
    Game logic needs to sense/understand the world it acts in. This encompasses things like collision detection and various other forms of sensing and decision making. All of these require gathering a subset of information from the game world quickly and efficiently to calculate the result. Obtaining this information is number-crunching work and requires raw firepower.



  • Dragonlord
    replied
    Originally posted by smitty3268 View Post
    That's all true for static compilers, but I was talking about dynamically generated code, like the Java and .NET runtimes. They can compile the code down to a pseudo-assembly and then compile that further down to binary during runtime after checking out what parts of the code are accessed often or any other type of runtime checking.
    JIT simply means compiling code "on demand". This has nothing to do with the compiler knowing how the code is used. It is like having a static compiler and just compiling the code where needed instead of running it through an interpreter. The end result, though, is still the same. Neither compiler can figure out the logic behind the code, and therefore both are unable to do the optimizations a human brain can do by structuring the code properly.



  • BlackStar
    replied
    Originally posted by Dragonlord View Post
    Honest answer: 6 months later and it's fucking optimized and working well.
    Here is where your fallacy lies. You can use those extra 6 months you gained by using a managed language to optimize your code and potentially improve your performance by more than 5-10%.

    Also note that you can abstract the intensive number-crunching parts of an engine into ultra-optimized native dlls. You can then implement the rest of the engine in a managed language with no or minuscule loss in performance.

    Not to mention that languages like Ocaml are faster than C/C++ for a large number of tasks.

    The right tool for the right job? 100% agreed. As time passes, managed languages are becoming suitable for a wider range of tasks. Other than low-level math, there's little that managed languages cannot do right now - and math isn't an intractable problem either (see Mono.Simd, which was added exactly because game developers requested it.)

    Just because you have horse power to spare doesn't mean you have to squander it.
    On the contrary, the more power you have the more you can afford to squander. No game or engine is released 100% optimized anymore - hardware is simply evolving too fast for that. Optimize for current gen cards? Your optimizations are obsolete when your game is released two years later. Optimize for next gen cards? Your game runs suboptimally on current hardware.

    Use it to make it run on systems with less horse power or to implement better game play. Many think this is just a game logic task, but boy are they wrong. To do the right game logic you always need low level informations, and gathering those requires number crunching to get it done fast.
    Better optimized programs are good, no one will argue against that. However, this paragraph doesn't make any sense. What the hell are "low level informations"? Why do you need "number crunching" to gather them?

    Also, you still haven't listed any specific hack you have done in your program that you couldn't do in a managed language like C#.
    Last edited by BlackStar; 28 August 2009, 02:46 PM.



  • smitty3268
    replied
    Originally posted by Dragonlord View Post
    The problem with compilers is that they can only analyze an AST. Such an AST, though, does not contain higher-level information about the structure of the code and how data is accessed. Furthermore, the compiler does not know how often code is executed at runtime.
    That's all true for static compilers, but I was talking about dynamically generated code, like the Java and .NET runtimes. They can compile the code down to a pseudo-assembly and then compile that further down to binary during runtime after checking out what parts of the code are accessed often, or any other type of runtime checking. It's conceivable that a program could even try a couple of different strategies, measure which performs better, and save that profile to be used in future sessions, all automatically. Javascript is the same, in that it is interpreted at runtime before being compiled to its final form (at least for most - Chrome might do it statically).

    Originally posted by Dragonlord View Post
    Assembler though is still used today. For example console emulation relies still on assembler to get performance out of the difficult parts simply because they need to spare the few extra cycles introduced by a generic solution a compiler produces.
    Certainly there are many places where it's almost essential. Codecs and video drivers are prime examples, as well as lots of image/video filters. But it is mostly used when you are trying to optimize small hotspots in your code that get called over and over again, rather than a high-level whole-program optimization type of thing. There just gets to be too much for a human brain to remember at that level.



  • Dragonlord
    replied
    Honest answer: 6 months later and it's fucking optimized and working well. Sorry to be blunt, but I take a well-optimized C/C++ engine over any engine written in a managed language any time. As I said: use the right tool for the right task. Number-crunching, deep-down, hardware-close engine stuff is C/C++, while game logic is a managed language. Just because you have horse power to spare doesn't mean you have to squander it. Use it to make it run on systems with less horse power or to implement better game play. Many think this is just a game logic task, but boy are they wrong. To do the right game logic you always need low level informations, and gathering those requires number crunching to get it done fast.



  • BlackStar
    replied
    Originally posted by Dragonlord View Post
    That's not the problem. It's about juggling memory around in a specific way, as well as going totally beyond calling functions, doing nifty memory hacks. I, for example, need those to get the performance I need to do the high-end stuff I have working right now. These kinds of memory tricks are simply impossible in a JITed or interpreted language. Hence a JITed language cannot (by design) ever do such things.
    Outside of SSE-optimized math code(*), what kind of "memory hacks" are you using that you cannot implement in, say, C# (specific examples, please)? How much performance do these buy you? How long did it take you to implement those hacks?

    I'm not arguing for writing the raw 'engine' in C#, Ocaml or (shudders) Javascript - although this is certainly possible (and could even result in better performance, in the case of Ocaml). C and C++ won't go away any time soon. Modern games, however, rely heavily on scripting: Python, Lua, AngelScript, C# are used extensively, even on consoles. The iPhone is programmed with Obj-C or C#. The Android with Java.

    The writing is on the wall. C/C++ will diminish in use as CPU power and JIT technology improves. It's natural evolution: raw assembly became marginalized as computing power and application complexity increased. Low-level languages will slowly but surely follow the same path.

    A rhetorical question: Which is better, using a higher-level language and shipping 6 months earlier or using a lower-level language and delivering 5 or 10% better performance?


    (*) C# actually supports vector instructions through Mono.Simd.



  • Dragonlord
    replied
    The problem with compilers is that they can only analyze an AST. Such an AST, though, does not contain higher-level information about the structure of the code and how data is accessed. Furthermore, the compiler does not know how often code is executed at runtime. Compilers can only optimize what is statically known (static syntax), but they have no chance on earth of optimizing runtime behavior. It's simply impossible, because determining such behavior is similar to solving the halting problem, which is proven to be unsolvable. The best they can do is guess (or use heuristics), but guessing fails in the hard cases, and game engines (the real deals, not the simple ones) are very hard cases since they push the machine wherever they can. The optimizations I talk about often involve refactoring entire code groups to achieve the desired performance gain. And this information unfortunately is not contained in an AST deducible from source code.

    As for assembler, this is still true but on a different level. Pure C code is more or less assembler, but with better wording and easier-to-comprehend syntax, as well as hiding some tasks from the programmer since messing with them is not required. Assembler, though, is still used today. For example, console emulation still relies on assembler to get performance out of the difficult parts, simply because they need to spare the few extra cycles introduced by the generic solution a compiler produces. Nevertheless, proper code structure is the best optimization out there, although you cannot force a language outside its boundaries. JS, for example, will always be inferior to C when it comes to pure number crunching, since managed memory gets in the way. And as soon as you disable managed memory you get into the devil's kitchen, with C# being the prime example of the kind of mess you can get yourself into (it lets you violate the managed code, allowing you to kill yourself). Hence you will always need a machine language like C/C++ for the raw firepower and a scripted language for highly structured game logic. Trying to swap their places simply calls for trouble and is not worth it (important hacker saying: "never fix what ain't broken"). Hopefully this explained it a bit better.



  • smitty3268
    replied
    Originally posted by Dragonlord View Post
    Nope, I'm 100% sure this is not going to happen since heuristics "guess" and in such cases only very few tricks work. Such memory hacks are applied across large sections of code and reflect in the design of the code itself. As good as compilers are they can not replace a human brain. As somebody once said so well: "The best optimizer is sitting between your ears".
    Well, I'm still not convinced. Can you tell me why, exactly, each of your "memory hacks" works? And if so, why wouldn't a program monitoring allocations and the flow of your program be able to detect that type of condition automatically? The only thing I can think of would be if the necessary conditions are somewhat difficult to detect and it would impose a performance hit on most programs to check for - and so that type of optimization wouldn't ever get created. Ok, I guess I have half-way convinced myself with that argument.

    As far as compilers not replacing the human brain, that's absolutely correct - for now. I still think a computer could get there eventually. People used to think that C was too high level an abstraction and that compilers wouldn't be able to compile it as efficiently as a human in assembly. I think that today the vast majority of programmers wouldn't be able to match a machine at that activity, at least not in a reasonable timeframe. Eventually, we may say the same about higher level languages too.



  • Remco
    replied
    Originally posted by Dragonlord View Post
    Nope, I'm 100% sure this is not going to happen since heuristics "guess" and in such cases only very few tricks work. Such memory hacks are applied across large sections of code and reflect in the design of the code itself. As good as compilers are they can not replace a human brain. As somebody once said so well: "The best optimizer is sitting between your ears".
    I guess you won't find the next Crysis in Javascript any time soon. But you can get decent performance if you don't reach for such great heights. A 3D game is certainly possible, and I am going to make a rough prediction that the web will be bombarded with casual games in 5 years. In time, if successful, JS language development will be prompted to cater to high-performance application developers, which may result in SIMD support, manual memory management, and the nebulous "memory tricks" that you're talking about.

    In other news: I found a Firefox plugin called Canvas 3D, which brings an OpenGL ES 2.0 context to the <canvas> element. I haven't played with it yet, but it looks promising.

