Miguel de Icaza Calls For More Mono, C# Games


  • Originally posted by XorEaxEax View Post
    Not really following this. You can create a macro that reports any glGetError() codes at runtime, together with the source file and line number where the error occurred. You can also make the checking disableable, so it has no performance impact on final builds. Something like this:

    Code:
    #ifndef _GLERROR_H_
    #define _GLERROR_H_
    
    #include <stdio.h>
    #include <GL/gl.h>
    
    #define _GLERROR_ENABLED_ // comment out to disable glGetError() checking
    
    #ifdef _GLERROR_ENABLED_
    #define GLERROR() do { GLenum err = glGetError(); if (err != GL_NO_ERROR) printf("GLError:%d in file:%s at line:%d\n", (int)err, __FILE__, __LINE__); } while (0)
    #else
    #define GLERROR() do { } while (0)
    #endif
    
    #endif
    This would catch faulty parameters at runtime, giving you the file and line in which they occurred, rather than forcing you to wait for some tester to report a texture bug. Granted, it's nicer if the compiler catches the error, but that's not something I would switch languages for.
    (a) This relies on the programmer to insert GLERROR() calls at proper places.
    (b) This only detects errors at runtime, even though they could be detected by the compiler.

    These issues might not matter for trivial applications. But what if you are developing a non-trivial game and the faulty code is only executed near the end of level 5, when the player tries to enter a non-essential secret area? It might be weeks before a tester encounters and reports the issue!

    Bugs like this do happen and do go unnoticed (ever played any of the Elder Scrolls series?). The larger the application, the higher the chance of obscure bugs, and the higher the value of compile-time error checking. This is precisely why we moved from assembly to C, to C++, and onwards.

    Btw, C#/OpenTK not only detects errors at compile time, it also inserts GL.GetError() calls automatically when running in debug mode. This is a huge safety net that you can't really appreciate before you actually use it. OpenGL is much smoother in C# than in any other language I've ever used.



    • Originally posted by BlackStar View Post
      Not necessarily. You seem to be confusing the .Net BCL with the .Net VM.
      No, I'm not confused by this. I know I have to port everything I use, including the VM and all libraries, which might be a subset of .NET. The VM overhead is just not justifiable for some syntactic sugar that I can get anyway if needed. The .NET platform itself does not have anything valuable to offer here, be it 2 MB or 20 MB. Besides, we have already seen that C# is more verbose than necessary for GL calls, so syntax doesn't even appear to be its strong side. It's a lose-lose situation.



      • Originally posted by Togga View Post
        No, I'm not confused by this. I know I have to port everything I use, including the VM and all libraries, which might be a subset of .NET. The VM overhead is just not justifiable for some syntactic sugar that I can get anyway if needed. The .NET platform itself does not have anything valuable to offer here, be it 2 MB or 20 MB. Besides, we have already seen that C# is more verbose than necessary for GL calls, so syntax doesn't even appear to be its strong side. It's a lose-lose situation.
        The OS overhead just to draw things is not justifiable either. Find an OS today that occupies less than 100 MB of RAM and 1.5 GB of disk (this is XP) or 400 MB of RAM and 10 GB of HDD (Vista/7).
        As for verbosity in the name of safety: you undercut your own point by showing that enums are more verbose for solving the very same problem.
        If you mean verbosity in typical code: in C++ you have to declare a lot of things twice, and even an empty class is longer than in C#:
        class Class {} //C# code
        class Class {}; //C++
        If you mean that using is longer than #include, that doesn't seem to be the case, if my counting helps.
        If you mean type inference from C++11, auto is longer than var.
        So where is C# longer? class A { public: void test(); }; void A::test() { } is longer for me than class A { public void test() { } }.

        Maybe some STL would save us:
        Code:
        int myints[] = {32,71,12,45,26,80,53,33};
        vector<int> myvector(myints, myints + 8);   // 32 71 12 45 26 80 53 33
        sort(myvector.begin(), myvector.end());     // 12 26 32 33 45 53 71 80
        Comparing with what C# offers:

        Code:
        int[] myints = {32,71,12,45,26,80,53,33};
        // default (using generics):
        Array.Sort(myints);
        // LINQ: query for an ascending sort.
        var sortAscendingQuery =
            from item in myints orderby item select item;
        And not to forget: because you have a GC, you don't have to write destructors in most cases, isn't that so? Do you really enjoy writing them?

        In the end you don't appear to be consistent with your values: you seem to accept an inefficient on-disk data format (like INI files or XML) that can cost you many times more in slowdowns, yet you reject something that gives you maybe 50% of assembly speed. And if you do programming for real, you will notice that disk operations are orders of magnitude slower than even Mono could be. Mono may be two times slower, and it may use some MB. How many? Not so many...

        A Hello World in Mono? Some KB, under 10. A Hello World in C++? Give me your numbers. Count them across many applications and you will think twice about the saved space... if you are so conscious about saving disk space, C++ is a great place to start shedding it.

        "I have to port everything I use including the VM and all libraries which might be a subset of .NET" Really? Do you have to port the VM?
        It sounds like you port the compiler every time you change computers. I pity you for the bad life you have... but most people work with mainstream compilers and platforms.
        I warmly recommend Mono/.NET.
        Whether you use Ubuntu, SUSE, or Windows XP/Vista/7, you will have support for a big feature set, at least for files, the console, and the web, to build tools. You don't need to recompile a thing; it's all done by Microsoft and Xamarin engineers and other contributors.
        Last edited by ciplogic; 29 February 2012, 08:27 PM.



        • Ciplogic, why are you even replying to Togga? It's obvious he's trolling, and badly at that, too.

          Here is something interesting that caught my eye today:
          Originally posted by https://code.launchpad.net/~vanvugt/compiz-core/fix-940139/+merge/94715/comments/204303
          The performance implication is huge. Compiz was spending most of its time in malloc/free, which was mostly due to "new PrivateRegion" and "delete PrivateRegion". So PrivateRegion had to go. Although after this fix most of compiz' CPU usage is still due to excessive malloc/free/new/deletes, at least it's no longer mostly due to CompRegion.
          This goes to show that code with inefficient memory management will perform slowly in any language. Not only that, but the relevant optimizations are pretty much identical across languages: remove memory allocations from the fast path.



          • Originally posted by BlackStar View Post
            (a) This relies on the programmer to insert GLERROR() calls at proper places.
            (b) This only detects errors at runtime, even though they could be detected by the compiler.
            Yes, I wasn't arguing against that. This was directed at your statement that people avoid glGetError() due to the performance loss; I just wanted to show that with a macro you can easily enable/disable this check, and thus avoid the performance impact once you know the code is functional.

            Originally posted by BlackStar View Post
            These issues might not matter for trivial applications. But what if you are developing a non-trivial game and the faulty code is only executed near the end of level 5, when the player tries to enter a non-essential secret area? It might be weeks before a tester encounters and reports the issue!
            Well, I can't say I've done much game programming (and even less OpenGL programming); in fact, it has been many years since I did anything remotely like it. That said, I can't imagine a game where you'd write special OpenGL code for a secret area. As I see it, you write an engine which handles all drawing, and that engine is thoroughly tested during development. A secret area, just like any other part of the game, is defined using tools (map editor, event editor, etc.) which in turn access the drawing engine through a higher-level interface, not by anyone hacking together specific OpenGL code for that part of the game.

            Originally posted by BlackStar View Post
            This goes to show that code with inefficient memory management will perform slow in any language. Not only that, but the relevant optimizations are pretty much identical in all languages: remove memory allocations from the fast path.
            Which proves nothing other than that you can write slow code in any language; it's how fast you can make your code that is of interest. Well-written native code beats well-written managed code in speed and resource usage; there's really no point in arguing this.

            In areas where performance is very important, managed code is indeed absent (compression, encoding, rendering, etc.) due to its performance penalties. Obviously this is not news to anyone who understands the difference between managed and unmanaged code; if managed code had its advantages without the corresponding performance/resource penalties, we'd all be using managed code for everything these days. But these performance issues are real, and computing power is not infinite. For a large class of problems the performance penalty makes no difference, and there C#, Java, and lots of scripting languages are popular solutions, as in the enterprise sector where VB once reigned supreme.

            Personally I enjoy both sides of the spectrum, I like the ease/speed of development in languages such as Python and I also like the low level power and flexibility of languages such as C/C++, and in particular I like how I can combine them.



            • In areas where performance is very important, managed code is indeed absent (compression, encoding, rendering, etc.) due to its performance penalties.
              You are correct here, managed code is indeed absent in compression and encoding. However, the story is a little more complex. Code like:
              Code:
              a = b + c;
              or
              Code:
              foreach (var a in items)
                  // do something
              
              for (std::vector<foo>::const_iterator it = items.begin(); it != items.end(); it++)
                  // do something
              perform identically between C# and C++. There's no inherent disadvantage in a statically-typed managed language vs a statically-typed unmanaged language.

              So why don't you see an h.264 compressor in C#? Two reasons:
              (a) compressors and encoders tend to be released as reusable libraries, which means C (because C is the only globally understood ABI; C# and Python can both call C, but not vice versa).
              (b) SIMD optimizations.

              While something like:
              Code:
              a = b + c
              may perform identically between C# and C++, it can often be made 4x faster via 128-bit vector instructions. That is why you will often see hand-rolled assembly at the core of such libraries, rather than C++.

              In other words, the distinction here is not actually managed vs unmanaged, it is high-level vs low-level. The lower the level, the more micro-optimizations you can make - and these are what make the difference in tasks such as compression and encoding.

              Edit:

              Well, I can't say I've done much game programming (and even less OpenGL programming); in fact, it has been many years since I did anything remotely like it. That said, I can't imagine a game where you'd write special OpenGL code for a secret area. As I see it, you write an engine which handles all drawing, and that engine is thoroughly tested during development. A secret area, just like any other part of the game, is defined using tools (map editor, event editor, etc.) which in turn access the drawing engine through a higher-level interface, not by anyone hacking together specific OpenGL code for that part of the game.
              Pretty much. However, it could be that a specific combination of (invalid) OpenGL states only appears at a specific location in the game. Or it could be that that area contains an asset with a problematic shader. It happens, and my point was that the more you trust your compiler the better.

              In fact, that's why the WebGL spec goes to such lengths to allow static verification of the code. It throws away pretty much all OpenGL ES parts that are not statically verifiable - and Google has built a WebGL compiler that checks your code and refuses to execute it when it detects anything unverifiable. (I'm pretty sure Mozilla is using the same compiler, at least on Windows). This is pretty much impossible in desktop OpenGL, where the default headers are full of untyped enums (ints) and void* casts. (Even the "hardened" headers I am using have issues due to the schizophrenic OpenGL design - but that's another discussion entirely.)
              Last edited by BlackStar; 01 March 2012, 08:11 PM.



              • Originally posted by BlackStar View Post
                You are correct here, managed code is indeed absent in compression and encoding. However, the story is a little more complex. Code like:
                Code:
                a = b + c;
                or
                Code:
                foreach (var a in items)
                    // do something
                
                for (std::vector<foo>::const_iterator it = items.begin(); it != items.end(); it++)
                    // do something
                perform identically between C# and C++. There's no inherent disadvantage in a statically-typed managed language vs a statically-typed unmanaged language.
                And in fact sometimes it performs even better.
                Note: thanks for the heads-up about Togga. Regards, and have a great time programming.



                • Originally posted by XorEaxEax View Post
                  In areas where performance is very important, managed code is indeed absent (compression, encoding, rendering, etc.) due to its performance penalties.
                  In fact I want to address just one point: encoding and rendering (even rendering can work on a Java runtime, e.g. SunFlow) are really CPU-bound, but compression/decompression is a different matter.
                  Compression performance depends on multiple interacting factors: I/O (most of the time), branchy code (which depends on the CPU's branch prediction), many live variables (so register pressure), and CPU cache saturation, and it is usually hard to parallelize. If you take DotNetZip (http://dotnetzip.codeplex.com/) and use it to compress a big archive (I tried a Fedora ISO under Windows), you may be surprised that the compression runs faster under Mono (32-bit) than in Total Commander (32-bit). That may just be because Total Commander uses Delphi, which is known not to be *that* efficient at generating code. Also, since JITs typically compile only the hot code paths rather than everything, the generated code is more likely to fit in the CPU cache, so a combination of fewer cache misses plus a slightly better code generator can make JIT code faster than natively compiled code.
                  The difference is even bigger with the same DotNetZip if you use the default settings. Most native applications are compiled as 32-bit (so they run on most versions of Windows), whereas in .NET, if you know you don't need to interoperate with the 32-bit world, you get a 64-bit version with more registers to spare, and thus even faster generated code.
                  In fact most Java tools work with JARs (which are decorated Zip files), which may be faster than accessing the raw .class files, because the CPU decompression compensates for the lower speed of rotating disks. So they may even be faster than plain files, or the impact is too small on SSD drives (like described here).



                  • Originally posted by BlackStar View Post
                    Ciplogic, why are you even replying to Togga? It's obvious he's trolling, and badly at that, too.
                    Definitely not. But it's easier to just call people trolls than to try to understand what they say...



                    • Originally posted by ciplogic View Post
                      The OS overhead just to draw things is not justifiable either. Find an OS today that occupies less than 100 MB of RAM and 1.5 GB of disk (this is XP) or 400 MB of RAM and 10 GB of HDD (Vista/7).
                      Well, there are always busybox/<some-embedded-os> combos out there. Pretty much all component choices mean restrictions in some way. I think C gives me the fewest restrictions; everything outside that I choose carefully, unless I just want to experiment, have limited scope, or want something to test or educate with.

                      Originally posted by ciplogic View Post
                      If you mean that using is longer than #include, that doesn't seem to be the case, if my counting helps.
                      You're correct. But my point about syntactic sugar is that I'd rather add a few bytes of extra code than take on unnecessary dependencies in production software.


                      Originally posted by ciplogic View Post
                      And not to forget: because you have a GC, you don't have to write destructors in most cases, isn't that so? Do you really enjoy writing them?
                      If needed, to get GC-like behavior in C++ you can always do something like
                      Code:
                      gc_ptr<Class> ptr = new Class();
                      You can in fact implement any memory layout this way, and you don't have to give up (the syntactic sugar for) multiple inheritance in the process (which is a big loss for a language that's supposed to be object-oriented). The same thing can be achieved in pure C.


                      Originally posted by ciplogic View Post
                      In the end you don't appear to be consistent with your values
                      I am consistent with my values, which are to stay as flexible as possible. Maybe I use a hard drive today, but I can easily move my code to a pure RAM-based solution, etc. Using Mono for a few lines of code and some syntactic sugar is not even on the table (unless it's for fun or something else outside the professional sphere). As I mentioned earlier, there are plenty of other options if you want neat syntax, but even that is not a main priority for me.


                      Originally posted by ciplogic View Post
                      A Hello World in Mono? Some KB, under 10. A Hello World in C++? Give me your numbers. Count them across many applications and you will think twice about the saved space... if you are so conscious about saving disk space, C++ is a great place to start shedding it.
                      Again, you're missing my point. If disk space is tight, we're probably on an embedded device. Let's say we have an RTOS with a C library. What consumes more space: a hello world in C, or a hello world that requires the Mono stack to come along?

                      For me, Mono and .NET really have nothing to offer. Couldn't you prove me wrong with something other than saving a few characters of code? Nobody would be happier than me if you did, since then I would have learned something.

                      And yes, I have programmed in C# for a few years (for work, besides curiosity projects to learn new things); C# customers are out there too.
                      Last edited by Togga; 02 March 2012, 04:07 PM.

