
Miguel de Icaza Calls For More Mono, C# Games


  • Originally posted by ciplogic View Post
    Would you rather learn C# than C++ for making a game (if the company pays for the licenses)? C# is basically free on three platforms, and if you want better tools, then for roughly the price of the IDE on Windows (the price of Visual Studio) you get the Mono tools for iOS and Android.
    C++ is painful to use with new features (OpenMP, for example, is not supported on Android, as described here): Visual Studio uses the Microsoft compiler, Linux and Android use GCC, and OS X and iOS use Clang/LLVM, each with small incompatibilities.
    You still have Lazarus (mentioned in the previous post) if you like Pascal and don't want a GC-based language. You cannot use it for scripting, but everything else is fine.
    Originally posted by ciplogic View Post
    allocate most objects on stack
    Originally posted by ciplogic View Post
    I am almost certain you will not face GC pauses (don't use strings; they are unlikely to be needed anywhere in a game).
    Stack space is in the kilobytes range, and that includes all recursive subroutine calls. Don't use strings? What you're saying is that if you have a GC you don't know what you're doing anyway, so just don't use memory at all...


    Originally posted by ciplogic View Post
    If you have 5 platforms to support (Windows, Linux, OS X, iOS and Android, not to mention WP7, where you have no choice), most likely the answer is that it makes more sense to use C#
    This sounds good in theory: the VM takes care of platform details and you don't have to worry about anything. But in practice, it's really the other way around.
    1. You have to make sure every library you use is implemented and fully functional in .NET on every platform.
    2. If you must use anything that is not part of .NET (new hardware, new OS features, ...), you're smoked and have to build special solutions on each platform.
    3. For performance problems, you don't have just one setup to work around; you will have different problems on different platforms, even with .NET alone.

    The rule I learned is: the more baggage you drag with you, the more potential problems you face, and .NET is the biggest black sheep of them all out there.

    If you really want to be multi-platform, use C and as many standard components as you can. Every platform I know of has a well-tested and fully functional C library. Of course, on platforms where some components are not available you have to implement additional support anyway. WP7 should be the last thing to target, since WP8 is going to make C++/JS/HTML5 its prime platform anyway and .NET will then fade away. The market for WP7 is pretty marginal to begin with and is most likely only people who are only interested in mail, Facebook, etc., and not 3rd-party applications (if they were, they would have gone for iOS or Android).

    Comment


    • Originally posted by Togga View Post
      Stack space is in the kilobytes range, and that includes all recursive subroutine calls. Don't use strings? What you're saying is that if you have a GC you don't know what you're doing anyway, so just don't use memory at all...
      The reason not to use strings is a practical one. They are slow most of the time and put a lot of pressure on the memory allocator (whether it is a GC or not), as in this C++ tutorial. The STL string operator += is really slow, and if the code is not written carefully it can simply make your game freeze. It also hides heap allocations, and because strings come in many different lengths it can lead to memory fragmentation, the worst part of the C++ allocator world.
      It is also slow in .NET, where the recommendation for building strings quickly is to use a StringBuilder; but since this is in the context of a beginner, they probably wouldn't use it, and the game would run too slowly.
      This is not to say "never display the score as a string", but string handling should be seen as a potential performance issue on some devices/platforms.
      The rest, I think, still remains valid, doesn't it?
      You can iterate over strings (which may be preallocated), but try not to do math with them or lots of conversions; that can really be the bottleneck in real-world applications (and in the context of games it could be really bad). And the point of using the stack is that the heap is slower; the stack expands dynamically in the .NET/Mono world to a few megabytes (it still depends on the platform), so you can do fairly heavy computation with the stack alone.
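      As a rough illustration of the "preallocate and reuse" idea (a minimal C++11 sketch with hypothetical names, not code from the thread): keep one buffer around instead of building a fresh string, and fresh heap allocations, every frame.

      Code:
      #include <string>

      // Hypothetical HUD label that reuses one preallocated buffer.
      struct ScoreLabel {
          std::string text;

          ScoreLabel() { text.reserve(64); }   // assumed upper bound on length

          // Rebuilding into the reserved buffer stays within the existing
          // capacity, so there is no per-frame allocation pressure.
          void Update(int score) {
              text.clear();
              text += "Score: ";
              text += std::to_string(score);
          }
      };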

      Comment


      • Sorry for the late reply, I was suckerpunched by the flu (I was soo close to going through an entire winter without one and then bam!)

        Originally posted by BlackStar View Post
        They are great solutions, nothing against them. C# is yet another great solution.
        All solutions have their drawbacks: native code has its drawbacks, and managed code has its drawbacks. The pros and cons of different languages often make them particularly suited to different aspects of computing. However, there are also large areas of computing where those pros and cons have little impact, and in those areas it's mainly a matter of preference (as in, the language(s) you are comfortable with).

        Originally posted by BlackStar View Post
        In the sense, why would you ever learn any language other than C++?
        You could certainly get by with just C++ (and some assembly) where you need the performance, as well as for non-performance-dependent tasks. However, the latter is an area where scripting has always had a strong presence, often as glue code managing data flow between fast native-code components. This is primarily how I've used Python at work.

        Originally posted by BlackStar View Post
        Every language has its advantages. I find that C# has a great combination of code expressiveness, performance, supporting tools and documentation. Other languages have these features in different combinations but, as a whole, I find C# ranks among the best.
        Well, that's your subjective opinion, and obviously I won't argue against that. I find myself fully comfortable with a combination of C/C++ and Python, and that too is a subjective opinion. I do like to peek outside my comfort zone from time to time, though, as I think it's fun, and of the last four or so languages I've tried (OCaml, C#, Go and D) I'd say Go is the only one I'd see myself spending real time on, mainly due to its concurrency properties, which I find interesting, and to a lesser extent other things in the language's syntax/command repertoire that click with me. That said, I've been holding off on trying something 'serious' with it until the first stable release is out.

        Originally posted by BlackStar View Post
        And not only for XNA. C# has much cleaner OpenGL bindings than Java or even Python.

        Code:
        // C# (OpenTK)
        var data = new float[] { 1, 2, 3 };
        GL.BufferData(BufferTarget.ArrayBuffer, data, BufferUsageHint.StaticDraw);
        GL.CreateShader(ShaderType.FragmentShader);
        GL.Begin(BeginMode.Triangles);
        GL.Vertex(0.0f, 0.0f, 0.0f);
        I'll pretty much chalk this up to preference. Granted, the GL version prefix in Java is pretty ugly to me as well, but personally I find both of these slightly verbose for my taste. Here is how they would look in C/C++ and (I assume, given my very limited testing) Go:

        Code:
        C/C++
        
        float data[] = {1,2,3};
        glBufferData(GL_ARRAY_BUFFER, data, GL_STATIC_DRAW);
        glCreateShader(GL_FRAGMENT_SHADER);
        glBegin(GL_TRIANGLES);
        glVertex(0.0f, 0.0f, 0.0f);
        
        Go
        
        var data = [...]float32 {1,2,3}
        gl.BufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW)
        gl.CreateShader(gl.FRAGMENT_SHADER)
        gl.Begin(gl.TRIANGLES)
        gl.Vertex(0.0, 0.0, 0.0)
        Again, this is all a matter of taste and likely depends on what you are used to; the aforementioned Java example will likely look great to someone who is very used to programming in that language.

        Originally posted by BlackStar View Post
        The problem is that you can't use very valuable features of C99 and C++11 because the compiler support simply isn't there.
        I'd say the most sought-after and thus most-used parts of the C99 standard are supported by GCC (and many of these were available as compiler extensions long before C99 was standardized), and given that GCC is available on just about every platform out there, then yes, you can use these features. Clang/LLVM is also continuously improving its C99 support, as is the Intel compiler; in fact, the only compiler I can think of that isn't is Microsoft's Visual Studio, and that is likely due to them focusing 100% on C++ with their upcoming C++11 support, along with their 'going native' push for Visual Studio 2012. And there's a lot of software out there that requires at least rudimentary C99 support in order to compile, like x264.

        Originally posted by BlackStar View Post
        The journey is the destination, my friend.
        Indeed my friend!

        Comment


        • Hey, an actual language discussion without a flamewar! Who would expect that on Phoronix?

          Well, that's your subjective opinion, and obviously I won't argue against that. I find myself fully comfortable with a combination of C/C++ and Python, and that too is a subjective opinion. I do like to peek outside my comfort zone from time to time, though, as I think it's fun, and of the last four or so languages I've tried (OCaml, C#, Go and D) I'd say Go is the only one I'd see myself spending real time on, mainly due to its concurrency properties, which I find interesting, and to a lesser extent other things in the language's syntax/command repertoire that click with me. That said, I've been holding off on trying something 'serious' with it until the first stable release is out.
          C++ with Python is pretty awesome, indeed.

          Haven't tried Go yet, but I've read its samples and I wasn't too impressed (unlike with OCaml and its F# offspring, for instance). What is its "killer" feature that would convert people?

          Code:
          C/C++
          
          float data[] = {1,2,3};
          glBufferData(GL_ARRAY_BUFFER, data, GL_STATIC_DRAW);
          glCreateShader(GL_FRAGMENT_SHADER);
          glBegin(GL_TRIANGLES);
          glVertex(0.0f, 0.0f, 0.0f);
          
          Go
          
          var data = [...]float32 {1,2,3}
          gl.BufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW)
          gl.CreateShader(gl.FRAGMENT_SHADER)
          gl.Begin(gl.TRIANGLES)
          gl.Vertex(0.0, 0.0, 0.0)
          Again, this is all a matter of taste and likely depends on what you are used to; the aforementioned Java example will likely look great to someone who is very used to programming in that language.
          There's one specific problem in this code (both C and Go): lack of error checking by the compiler.

          Code:
          glBufferData(GL_ARRAY_BUFFER, data, GL_STATIC_DRAW);
          glBufferData(GL_FRAGMENT_SHADER, data, GL_STATIC_DRAW);
          The compiler will happily accept both versions and the error will go unnoticed unless you call glGetError() - which most people don't do, since it kills performance.
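          (As an aside, and not something from the thread: a common compromise is to wrap GL calls in a debug-only check so that glGetError() costs nothing in release builds. A minimal C/C++ sketch:)

          Code:
          #include <cstdio>
          #include <GL/gl.h>

          // Debug-only error check: queries glGetError() after each wrapped
          // call in debug builds, compiles to the bare call in release builds.
          #ifdef NDEBUG
            #define GL_CHECK(call) call
          #else
            #define GL_CHECK(call) \
                do { \
                    call; \
                    GLenum err = glGetError(); \
                    if (err != GL_NO_ERROR) \
                        std::fprintf(stderr, "GL error 0x%x at %s:%d\n", \
                                     err, __FILE__, __LINE__); \
                } while (0)
          #endif

          // Usage: GL_CHECK(glEnable(GL_DEPTH_TEST));
          // The wrong-enum bug above is still only caught at runtime, though.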

          That's not the case in the C# version (OpenTK):
          Code:
          GL.BufferData(BufferTarget.ArrayBuffer, data, BufferUsageHint.StaticDraw);
          GL.BufferData(ShaderType.FragmentShader, data, BufferUsageHint.StaticDraw); // compiler error, cannot convert ShaderType to BufferTarget implicitly
          This is actually caused by design limitations of C and C++ with regard to enums (enums are implicitly convertible to numbers and they litter the parent namespace, so no one uses them).

          A proper C++ version would look like this:
          Code:
          GL::BufferData(BufferTarget::ArrayBuffer, data, BufferUsageHint::StaticDraw);
          GL::BufferData(ShaderType::FragmentShader, data, BufferUsageHint::StaticDraw); // compiler error
          I have tried to do just this in glplusplus, but unfortunately it's impossible without C++11 strong enums (which instantly throw portability out the window) or a runtime performance penalty.
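          (For reference, a minimal sketch of what C++11 scoped enums buy you, assuming a compiler that supports them; the wrapper names are hypothetical:)

          Code:
          // Scoped enums do not convert implicitly to int and do not leak
          // into the parent namespace, so a wrong enum type is a compile error.
          enum class BufferTarget { ArrayBuffer };
          enum class ShaderType   { FragmentShader };

          void BufferData(BufferTarget target) { /* would forward to glBufferData */ }

          int main() {
              BufferData(BufferTarget::ArrayBuffer);      // OK
              // BufferData(ShaderType::FragmentShader);  // compile error: wrong enum type
              // BufferData(0x8892);                      // compile error: no implicit int conversion
          }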

          This is actually a case where C# is strictly superior to C++ in both performance and functionality. OpenGL code is a joy to write in C# (just try it!), which is kind of important for game programming.

          Comment


          • Originally posted by ciplogic View Post
            They are slow most of the time and put a lot of pressure on the memory allocator (whether it is a GC or not),
            Okay. It was not obvious that you were talking about the implementation of specific string helper objects; strings in general are pretty common and hard to avoid. The C++11 STL should be pretty decent now with move semantics, and if you reserve memory at construction time when you know the string will be large.
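            (A minimal C++11 sketch of that idea, with hypothetical names, not code from the thread:)

            Code:
            #include <string>

            // Reserve once when a rough upper bound is known; returning by value
            // is cheap in C++11 because the buffer is moved (or elided), not copied.
            std::string BuildLabel(const std::string& name, int frames)
            {
                std::string out;
                out.reserve(64);               // size hint avoids repeated reallocation
                out += "entity ";
                out += name;
                out += " / frames: ";
                out += std::to_string(frames);
                return out;                    // moved or elided, no deep copy
            }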

            Comment


            • Originally posted by BlackStar View Post
              C++ with Python is pretty awesome, indeed.
              Interesting. Are you talking about boost.python here? Does it work well even across compilers?

              I prefer to interface Python with C, since the C++ binary format varies between compilers (the code behind the C interface could optionally be C++, though).


              Originally posted by BlackStar View Post
              This is actually a case where C# is strictly superior to C++ in both performance and functionality. OpenGL code is a joy to write in C# (just try it!), which is kind of important for game programming.
              Not really; given that you use enums instead of defines, they can be strictly checked at compile time in both C++ and C. This is definitely not a scenario for which you drag a .NET dependency into your project.

              Comment


              • Originally posted by Togga View Post
                Interesting. Are you talking about boost.python here? Does it work well even across compilers?

                I prefer to interface Python with C, since the C++ binary format varies between compilers (the code behind the C interface could optionally be C++, though).
                I haven't tried boost.python because their documentation lists something like Python 2.2 as supported with no mention of modern versions. What I do is extern "C" the public API and consume that instead.
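                (A minimal sketch of that pattern, with hypothetical names, not an actual project:)

                Code:
                #include <string>

                // C++ implementation hidden behind a flat C API; the extern "C"
                // functions are what a Python binding (e.g. ctypes) would load.
                class Engine {
                public:
                    void LoadLevel(const std::string& name) { /* ... */ }
                };

                extern "C" {
                    Engine* engine_create()           { return new Engine(); }
                    void    engine_destroy(Engine* e) { delete e; }
                    void    engine_load_level(Engine* e, const char* name) { e->LoadLevel(name); }
                }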

                Btw, there's a very interesting project that uses runtime code generation to bridge the gap between C++ and Mono applications. Subclassing and instantiating C++ classes directly from IronPython? Win!

                Not really; given that you use enums instead of defines, they can be strictly checked at compile time in both C++ and C.
                If only it were that simple.

                1. C++ enums do not introduce a namespace.
                Code:
                enum Foo { Bar };
                This is accessed as "Bar", not "Foo::Bar", which is a nightmare for large APIs with several thousand enum values, like OpenGL.

                2. The usual workaround for #1 is to embed an anonymous enum into a struct:
                Code:
                struct Foo { enum { Bar }; };
                It is now legal to type "Foo::Bar", but you instantly lose any compile-time type checking - you can only access Foo::Bar through an implicit cast-to-int.

                3. The workaround for #2 is to use a named enum instead:
                Code:
                struct Foo { enum Values { Bar }; };
                This works, with one caveat: it doesn't cover APIs that use enum arithmetic (i.e. most of them). OpenGL, for instance:
                Code:
                // original: void glEnable(int)
                for (int i = 0; i < 8; i++) {
                    glEnable(GL_LIGHT0 + i); // no type-checking
                }
                
                // C++ enums: void Enable(EnableCap::Values)
                for (int i = 0; i < 8; i++) {
                    GL::Enable(EnableCap::Light0 + i); // error: cannot convert int to EnableCap::Values implicitly
                    GL::Enable((EnableCap::Values)(EnableCap::Light0 + i)); // works, but ugly and not discoverable
                }

                // C#: void Enable(EnableCap)
                for (int i = 0; i < 8; i++) {
                    GL.Enable(EnableCap.Light0 + i); // beautiful
                }
                4. The workaround for #3 is to introduce a templated type that supports the operations we need without sacrificing type-safety and readability. The downside? (a) Need to write more code and (b) enum values are not constants, but are allocated on the stack instead. A good optimizing compiler should be able to eliminate the performance penalty, but we are now entering the dark magic zone - and all this just to get a clean, type-safe API design.
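                (A rough sketch of what such a templated wrapper might look like; this is not the actual glplusplus code, and the names are hypothetical:)

                Code:
                // Type-safe wrapper that still allows the "base value + offset"
                // arithmetic that APIs like glEnable(GL_LIGHT0 + i) rely on.
                template <typename Tag>
                struct TypedEnum {
                    int value;
                    explicit TypedEnum(int v) : value(v) {}

                    friend TypedEnum operator+(TypedEnum e, int offset) {
                        return TypedEnum(e.value + offset);
                    }
                };

                struct EnableCapTag {};
                typedef TypedEnum<EnableCapTag> EnableCap;

                // Not a compile-time constant anymore - the downside noted above.
                static const EnableCap Light0(0x4000 /* GL_LIGHT0 */);

                void Enable(EnableCap cap) { /* would forward cap.value to glEnable */ }

                int main() {
                    for (int i = 0; i < 8; ++i)
                        Enable(Light0 + i);   // type-checked, still readable
                    // Enable(7);             // compile error: no implicit int conversion
                }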

                In fact, there was a boost discussion on this very issue, looking for potential solutions. C++11 solves most of these issues, but the compiler support just isn't there yet.

                Comment


                • Originally posted by BlackStar View Post
                  Btw, there's a very interesting project that uses runtime code generation to bridge the gap between C++ and Mono applications. Subclassing and instantiating C++ classes directly from IronPython? Win!
                  That's no win at all. I can't see any purpose .NET would fill in this context. Just lots of unnecessary baggage.


                  Originally posted by BlackStar View Post
                  1. C++ enums do not introduce a namespace.
                  Code:
                  enum Foo { Bar };
                  This is accessed as "Bar", not "Foo::Bar", which is a nightmare for large APIs with several thousand enum values, like OpenGL.
                  Code:
                  namespace xxyy {
                      enum zzz {
                          bbb
                      };
                  }
                  is accessed as xxyy::bbb;
                  Code:
                  class C {
                      enum bbb { ccc };
                  };
                  This is generally a non-issue, since you can always do namespacing with prefixes, and you can make your code checkers enforce this.

                  Code:
                  enum zzz {
                     zzz_bbb = 10
                  };
                  Originally posted by BlackStar View Post
                  // C#: void Enable(EnableCap)
                  for (int i = 0; i < 8; i++) {
                  GL.Enable(EnableCap.Light0 + i); // beautiful
                  }
                  Your C# code isn't particularly beautiful, and dragging in the whole .NET framework for code syntax checks is incompetent at best. This example is easy to achieve with a little class in C++ if you should ever want to.

                  Comment


                  • Originally posted by Togga View Post
                    Your C# code isn't particularly beautiful, and dragging in the whole .NET framework for code syntax checks is incompetent at best. This example is easy to achieve with a little class in C++ if you should ever want to.
                    So your point is that it is easy to... but practice shows that the C++ headers for OpenGL are not that "competent". And in the end you drag a lot of things in either way: the OS libraries are in many cases a lot of "baggage" too. Memory management, virtual memory allocation, dynamic library mapping (sometimes even remapping one library to another, which is why there is a WinSxS folder, the biggest folder in a modern version of Windows). RTTI (which is minimal at best) can be simulated fairly easily with some macros (MFC did this), compiler extensions (ex-Borland, now Embarcadero, did that), or a meta-object compiler (Qt). The same goes for a lot of other things that people take for granted in the VM world and that are fairly lacking on the C++ side: good static analysis tools (Eclipse and NetBeans have primitive yet functional ones for Java, or if you want a better tool you would use IDEA; the C# world has ReSharper, CodeRush and JustCode) and good visual designers (as far as I have used them, there are only two, maybe three, visual designers in the C++ world that are decent: Qt Creator, C++ Builder and wxDesigner!?). In the .NET/Java world they are much more consolidated, and you know that people with a WPF skillset can use it with Silverlight to some extent.
                    In the end, people lived before OOP, so why do we need virtual tables with pointers dressed up as classes? We lived before there were any graphics drivers, writing pixels directly to video memory, or in the "uses BGI;" world, flipping parts of the screen to simulate animation. We even lived before C and its "types", using assembly. Assembly has close to no baggage (http://flatassembler.net/download.php).
                    I think C# is a decent language to use, one that C++11 has matched in some ways. When I go back to C++ I notice features that are missing: concepts (in C# I am used to writing generics with constraints, which are really great to use), full RTTI, (maybe) a GC, dynamic dispatch. It also wouldn't hurt if C++ code compiled faster so big projects could iterate more often, or if there were a more or less standardized UI (maybe on top of Qt). I don't understand the idea that, because a loop in C++ runs (supposedly always) 2 times faster than in C# (or Java), all code should be changed to C++ when most applications would not notice the difference.
                    Angry Birds on Android was (at least originally) written in Java and was good enough for many (at least from the moment a JIT became part of the Android platform).
                    I have used Ruby, and if it were only 20-30% slower than .NET I would use it straight away (there is such a thing, and I plan to use it; it is named Mirah, but I sometimes have quirks compiling it). If we are comparing better languages, Ruby would likely be it, maybe Python. C++ can do some things that C# can do, and most of what Ruby does C# does too, but Ruby does it with compactness and usability in mind. This is where C++ has always suffered: made by engineers, for engineers. Ruby was not made by a programmer, and it shows!
                    Last edited by ciplogic; 27 February 2012, 04:45 AM.

                    Comment


                    • I think C# is a decent language to use, one that C++11 has matched in some ways. When I go back to C++ I notice features that are missing: concepts (in C# I am used to writing generics with constraints, which are really great to use), full RTTI, (maybe) a GC, dynamic dispatch. It also wouldn't hurt if C++ code compiled faster so big projects could iterate more often, or if there were a more or less standardized UI (maybe on top of Qt). I don't understand the idea that, because a loop in C++ runs (supposedly always) 2 times faster than in C# (or Java), all code should be changed to C++ when most applications would not notice the difference.
                      (R)amen!

                      Do note that C++11 paves the way for full GC implementations in later versions. Right now the standard says that orphan objects may be automatically reclaimed by a GC (left to the implementation), and a future revision will specify the exact GC behavior.

                      I would also love to see a proper module system. This was discussed heavily for C++11 but, in the end, they decided to postpone it to a future version - it's a complicated SoB to design. Not only would it reduce the insanity of the compiler, which compiles every single templated class n times (every time you use it), but it would also improve code reuse, interoperability, modularity and compilation times. The lack of a module system is by far the biggest issue with C++ right now.
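                      (A small aside, not from the thread: C++11's extern template is no module system, but it does shave off some of that duplicated instantiation work. The types here are hypothetical.)

                      Code:
                      // widget.h - every other translation unit is told NOT to
                      // instantiate std::vector<Widget> again.
                      #include <vector>
                      struct Widget { int id; };
                      extern template class std::vector<Widget>;

                      // widget.cpp - the one translation unit that provides it.
                      template class std::vector<Widget>;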

                      In the end, people lived before OOP, so why do we need virtual tables with pointers dressed up as classes? We lived before there were any graphics drivers, writing pixels directly to video memory, or in the "uses BGI;" world, flipping parts of the screen to simulate animation. We even lived before C and its "types", using assembly. Assembly has close to no baggage (http://flatassembler.net/download.php).
                      Not only that, but I am old enough to remember the same arguments being used against C++: Who needs C++ when you have C? C++ is bloated crap (500KB for a simple hello-world program!). C produces faster code. C compiles faster. It doesn't have an ABI. You can always write OO with C structs and function pointers, so why drag in the whole C++ garbage just for that? Besides, virtual functions are too slow. Blah blah blah.

                      Funny how these people are always proven wrong in the end.

                      Comment
