Quake Live Now Available To Linux Gamers


  • #61
    Originally posted by Dragonlord View Post
    JITed languages cannot do such things, since they are limited to a certain memory model which gets in the way. That's the main reason no JITed language can outperform a properly written C++ engine.
    Maybe JS will at some point get a "weak" modifier and a "delete" keyword; you never know.


    • #62
      That's not the problem. It's about juggling memory around in a specific way, and going beyond plain function calls with nifty memory hacks. I, for example, need those to get the performance required for the high-end stuff I have working right now. These kinds of memory tricks are simply impossible in a JITed or interpreted language. Hence a JITed language cannot (by design) ever do such things.
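      To make the kind of trick concrete, here is a minimal C++ sketch of a fixed-size object pool using placement new, the sort of manual memory control a managed runtime does not expose. It is illustrative only; the Particle type and the pool are hypothetical, not code from this thread.

          #include <cstddef>
          #include <new>
          #include <vector>

          struct Particle { float x, y, z, vx, vy, vz; };  // hypothetical example type

          // One big allocation up front; objects are recycled through a
          // free list, so the hot path never touches malloc/free or a GC.
          class ParticlePool {
          public:
              explicit ParticlePool(std::size_t count)
                  : storage(count * sizeof(Particle)) {
                  for (std::size_t i = 0; i < count; ++i)
                      freeList.push_back(storage.data() + i * sizeof(Particle));
              }
              Particle* acquire() {
                  if (freeList.empty()) return nullptr;  // pool exhausted
                  void* slot = freeList.back();
                  freeList.pop_back();
                  return new (slot) Particle{};          // placement new: no heap call
              }
              void release(Particle* p) {
                  p->~Particle();                        // destroy in place
                  freeList.push_back(p);                 // slot goes back on the free list
              }
          private:
              std::vector<unsigned char> storage;
              std::vector<void*> freeList;
          };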


      • #63
        Originally posted by Dragonlord View Post
        That's not the problem. It's about juggling memory around in a specific way, and going beyond plain function calls with nifty memory hacks. I, for example, need those to get the performance required for the high-end stuff I have working right now. These kinds of memory tricks are simply impossible in a JITed or interpreted language. Hence a JITed language cannot (by design) ever do such things.
        I don't think it's completely impossible. Eventually someone might come up with good heuristics about the types of situations those memory tricks are useful in, and add them to the JIT compiler to apply automatically.

        OK, so that kind of thing is probably a long way off, but I wouldn't be surprised to see it happen eventually. Think how far things have progressed in the last 20 years, for comparison.


        • #64
          Nope, I'm 100% sure this is not going to happen, since heuristics "guess", and in such cases only very few tricks work. Such memory hacks are applied across large sections of code and are reflected in the design of the code itself. As good as compilers are, they cannot replace a human brain. As somebody once put it so well: "The best optimizer is sitting between your ears".


          • #65
            Originally posted by Dragonlord View Post
            Nope, I'm 100% sure this is not going to happen, since heuristics "guess", and in such cases only very few tricks work. Such memory hacks are applied across large sections of code and are reflected in the design of the code itself. As good as compilers are, they cannot replace a human brain. As somebody once put it so well: "The best optimizer is sitting between your ears".
            I guess you won't find the next Crysis written in JavaScript any time soon. But you can get decent performance if you don't reach for such great heights. A 3D game is certainly possible, and I'll make a rough prediction that the web will be bombarded with casual games in 5 years. In time, if that succeeds, JS language development will be prompted to cater to high-performance application developers, which may result in SIMD support, manual memory management, and the nebulous 'memory tricks' you're talking about.

            In other news: I found a Firefox plugin called Canvas 3D, which brings an OpenGL ES 2.0 context to the <canvas> element. I haven't played with it yet, but it looks promising.


            • #66
              Originally posted by Dragonlord View Post
              Nope, I'm 100% sure this is not going to happen, since heuristics "guess", and in such cases only very few tricks work. Such memory hacks are applied across large sections of code and are reflected in the design of the code itself. As good as compilers are, they cannot replace a human brain. As somebody once put it so well: "The best optimizer is sitting between your ears".
              Well, I'm still not convinced. Can you tell me why, exactly, each of your "memory hacks" works? And if so, why wouldn't a program monitoring allocations and the flow of your program be able to detect that kind of condition automatically? The only exception I can think of would be if the necessary conditions are hard to detect and checking for them would impose a performance hit on most programs - in which case that type of optimization would never get built. OK, I guess I have halfway convinced myself with that argument.

              As for compilers not replacing the human brain, that's absolutely correct - for now. I still think a computer could get there eventually. People used to think that C was too high-level an abstraction and that compilers wouldn't be able to compile it as efficiently as a human writing assembly. Today, the vast majority of programmers couldn't match a machine at that task, at least not in a reasonable timeframe. Eventually we may say the same about higher-level languages too.


              • #67
                The problem with compilers is that they can only analyze an AST, and an AST does not contain higher-level information about the structure of the code or how data is accessed. Furthermore, the compiler does not know how often code is executed at runtime. Compilers can only optimize what is statically known (static syntax); they have no chance on earth of optimizing runtime behavior. It's simply impossible, because determining that behavior is akin to solving the halting problem, which is proven unsolvable. The best you can do is guess (heuristics), but guessing fails in the hard cases, and game engines (the real deals, not the simple ones) are very hard cases, since they push the machine wherever they can. The optimizations I'm talking about often involve refactoring entire groups of code to achieve the desired performance gain, and that information is unfortunately not contained in any AST derivable from the source code.
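                A concrete example of the kind of whole-design refactoring meant here is rewriting an array-of-structs layout as struct-of-arrays. The sketch below is a hypothetical C++ illustration (the EntityAoS/EntitiesSoA names are invented, not from this thread) of a transformation a typical compiler will not derive from the AST on its own, because it changes the data layout across the whole design.

                    #include <cstddef>
                    #include <vector>

                    // Array-of-structs: the layout the code is naturally written in.
                    struct EntityAoS { float x, y, z; int health; char active; };

                    // Struct-of-arrays: the refactored layout. A loop that only
                    // updates positions now streams through tightly packed floats
                    // instead of skipping over the health/active fields.
                    struct EntitiesSoA {
                        std::vector<float> x, y, z;
                        std::vector<int>   health;
                        std::vector<char>  active;
                    };

                    void moveAllX(EntitiesSoA& e, float dx) {
                        for (std::size_t i = 0; i < e.x.size(); ++i)
                            e.x[i] += dx;  // contiguous, cache-friendly, vectorizable
                    }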

                As for assembler, the same still holds, but on a different level. Pure C code is more or less assembler with better wording and easier-to-comprehend syntax, hiding a few tasks from the programmer since messing with them is not required. Assembler, though, is still used today: console emulation, for example, still relies on assembler to get performance out of the difficult parts, simply to spare the few extra cycles a compiler's generic solution introduces. Nevertheless, proper code structure is the best optimization out there, although you cannot force a language outside its boundaries. JS, for example, will always be inferior to C for pure number crunching, since managed memory gets in the way. And as soon as you disable managed memory you end up in the devil's kitchen, with C# being the prime example of the kind of mess you can get yourself into (it lets you violate the managed model, allowing you to kill yourself). Hence you will always need a machine-level language like C/C++ for the raw firepower and a scripting language for highly structured game logic. Trying to swap their places simply calls for trouble and is not worth it (important hacker saying: "never fix what ain't broken"). Hopefully this explains it a bit better.
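                To illustrate the "spare the few extra cycles" point, here is a small C++ sketch using SSE intrinsics, the portable cousin of hand-written assembler. The function and its alignment assumptions are invented for illustration, not taken from the poster.

                    #include <xmmintrin.h>  // SSE intrinsics

                    // Adds two float arrays four lanes at a time. Assumes n is a
                    // multiple of 4 and the pointers are 16-byte aligned; a real
                    // engine would handle the remainder and alignment explicitly.
                    void addArrays(const float* a, const float* b, float* out, int n) {
                        for (int i = 0; i < n; i += 4) {
                            __m128 va = _mm_load_ps(a + i);
                            __m128 vb = _mm_load_ps(b + i);
                            _mm_store_ps(out + i, _mm_add_ps(va, vb));
                        }
                    }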


                • #68
                  Originally posted by Dragonlord View Post
                  That's not the problem. It's about juggling memory around in a specific way, and going beyond plain function calls with nifty memory hacks. I, for example, need those to get the performance required for the high-end stuff I have working right now. These kinds of memory tricks are simply impossible in a JITed or interpreted language. Hence a JITed language cannot (by design) ever do such things.
                  Outside of SSE-optimized math code(*), what kind of "memory hacks" are you using that you cannot implement in, say, C# (specific examples, please)? How much performance do these buy you? How long did it take you to implement those hacks?

                  I'm not arguing for writing the raw 'engine' in C#, OCaml or (shudders) JavaScript - although this is certainly possible (and could even result in better performance, in the case of OCaml). C and C++ won't go away any time soon. Modern games, however, rely heavily on scripting: Python, Lua, AngelScript and C# are used extensively, even on consoles. The iPhone is programmed in Obj-C or C#, Android in Java.

                  The writing is on the wall. C/C++ will diminish in use as CPU power and JIT technology improve. It's natural evolution: raw assembly became marginalized as computing power and application complexity increased. Low-level languages will slowly but surely follow the same path.

                  A rhetorical question: Which is better, using a higher-level language and shipping 6 months earlier or using a lower-level language and delivering 5 or 10% better performance?


                  (*) C# actually supports vector instructions through Mono.Simd.


                  • #69
                    Honest answer: 6 months later and it's fucking optimized and working well. Sorry to be blunt, but I'll take a well-optimized C/C++ engine over any engine written in a managed language, any time. As I said: use the right tool for the right task. Number crunching and deep-down, hardware-close engine stuff is C/C++, while game logic is a managed language. Just because you have horsepower to spare doesn't mean you have to squander it. Use it to make the game run on systems with less horsepower, or to implement better gameplay. Many think that's just a game-logic task, but boy are they wrong. To do game logic right you always need low-level information, and gathering it requires number crunching to get done fast.


                    • #70
                      Originally posted by Dragonlord View Post
                      The problem with compilers is that they can only analyze an AST, and an AST does not contain higher-level information about the structure of the code or how data is accessed. Furthermore, the compiler does not know how often code is executed at runtime.
                      That's all true for static compilers, but I was talking about dynamically generated code, as in the Java and .NET runtimes. They can compile the code down to a pseudo-assembly and then compile that further down to binary at runtime, after observing which parts of the code are executed often, or doing any other kind of runtime check. It's conceivable that a program could even try a couple of different strategies, measure which performs better, and save that profile for future sessions, all automatically. JavaScript is the same, in that it is interpreted at runtime before being compiled to its final form (at least in most engines - Chrome might do it statically).
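                      As a toy model of that "compile the hot parts further" idea, here is a C++ sketch of the bookkeeping a tiered runtime might do. It only models invocation counting, not actual code generation, and the names and threshold are invented for illustration; real runtimes (HotSpot, .NET, V8) are far more involved.

                          #include <string>
                          #include <unordered_map>

                          // Count invocations per function; once a function crosses the
                          // hotness threshold, a real runtime would recompile it with
                          // heavier optimization. Here we only flag the promotion.
                          class TieredProfiler {
                          public:
                              // Returns true exactly once: when the function becomes hot.
                              bool recordCall(const std::string& fn) {
                                  return ++hitCount[fn] == kHotThreshold;
                              }
                          private:
                              static constexpr int kHotThreshold = 1000;  // invented cutoff
                              std::unordered_map<std::string, int> hitCount;
                          };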

                      Originally posted by Dragonlord View Post
                      Assembler, though, is still used today: console emulation, for example, still relies on assembler to get performance out of the difficult parts, simply to spare the few extra cycles a compiler's generic solution introduces.
                      Certainly there are many places where it's almost essential. Codecs and video drivers are prime examples, as well as lots of image/video filters. But it is mostly used to optimize small hotspots that get called over and over, rather than as a high-level, whole-program optimization. At that level there is simply too much for a human brain to keep track of.
