Miguel de Icaza Calls For More Mono, C# Games


  • Originally posted by XorEaxEax View Post
    I have no problem with people writing code in C#, and I have no problem with people writing games in C#; I did argue against BlackStar's claim that C# is _particularly_ good for writing games, with performance cited as one of the benefits. Mono, by everything I've seen, is not fast relative to other languages used for writing games, and the garbage collector requires workarounds to assure smooth framerates. In short, as I've said before: if you are a C# programmer, then by all means use Mono to write your games if the performance is adequate for your project; however, I can't see why anyone wanting to write games should choose to learn C# in order to do so, unless they directly target XNA.
    C# has a key advantage over C++: it is the only language that can target all three major mobile platforms: Android, iPhone and WP. If you choose C++, you lose WP, and if you choose any other language you pretty much lose both iPhone and WP.

    I do not particularly like this situation (I'd prefer to use Python on all platforms) but until this changes, C# is the only way to support all platforms from a common codebase.

    As far as performance is concerned... unless you are an AAA studio with tons of funding, you won't be able to take advantage of the extra performance of C++ over C#. Why? Because (a) your most important concern will be getting your product into shipping shape, and (b) micro-optimizing in C++ is the last thing you should be doing while trying to ship a product on time. Unoptimized C++ is really not the most performant thing in the world, and it is very easy to blow performance there (I have seen non-expert programmers do this time and time again).

    Wringing this extra 10%-50% performance out of C++ can take time you simply don't have. Hell, even AAA studios release unoptimized messes to ship in time, nowadays, so it's not as if the choice of language is the deciding performance factor anymore.

    Comment


    • Originally posted by BlackStar View Post
      (a) If the VM runs out of space, it asks the OS for more memory. This is exactly as fast as a plain calloc.
      How is that supposed to work? The VM heap is a contiguous chunk of memory (or else it would certainly never be able to allocate by just moving a pointer, which is the best-case scenario), so in order to get more memory from the host OS it would have to issue a realloc, and if the realloc can't simply extend the memory block (in other words, if something is already allocated there), then the entire VM heap would have to be moved to a place in RAM where there is room for the resized heap.

      Originally posted by BlackStar View Post
      (b) Memory fragmentation does not decrease GC performance, it merely increases memory use. The GC does not scan for empty blocks a la malloc (which loses performance with fragmentation); it really just increases the end pointer (which is O(1)), regardless of the existence of free blocks.
      Which means it will fragment the heap very quickly, which will result in a compaction, or even in having to request a bigger heap from the host OS.

      Originally posted by BlackStar View Post
      (c) The .Net GC does not work like your common C++ GC, it is much more invasive and can do things that are simply not possible in the unmanaged world. For instance, the GC can move stuff around in memory and update all relevant references.
      Which still comes at a cost, and again a cost that only exists with managed memory handling and the problems it tries to solve.

      Originally posted by BlackStar View Post
      Since this is a low-priority thread, it doesn't actually impact performance! If there is enough CPU time, it runs; if not, it doesn't run until the OS scheduler gives it a timeslice - the worst that could happen is that the application uses a little more memory than it could.
      Of course it does; it all depends on the application and how performance-dependent it is. For software in non-constrained environments, where performance is not a major issue, I agree it won't make a huge difference.

      Originally posted by BlackStar View Post
      Contrast this with C++, which will always fragment memory when you new/delete objects. In the extreme case, this will cause the application to run out of memory and die.
      This is a very real issue for long-running applications and apps running in constrained environments (like a console). Solving it either requires lots of code or so-called "low-fragmentation" heaps (which also decrease performance by doing double indirections in order to support compaction).
      No, it will not always fragment memory, just like the GC managing its own heap will not always fragment memory (although that is much more likely, since it generally has a much smaller heap size than what is available from the host OS). And if there is a danger of that (a very constrained memory space), then there are other solutions, like recycling allocations rather than making new ones, or maintaining your own pool better tailored to the data in question. Obviously that is not as simple, but if you are operating in such a constrained setting then you are aware of this. And running managed code under the same circumstances would be just as bad, and likely worse.

      Also, you realize that (at least on Linux, but most likely on many other systems as well) the heap allocator isn't just some dumb function running through a list of available blocks; it allocates similarly sized blocks next to each other, which greatly diminishes fragmentation. And unlike a managed heap, the OS has all the free memory in the system in which to place these memory partitions (called slabs, IIRC), much unlike the constrained managed heap, which needs to do compaction in order to combat fragmentation, or ask the host OS for a heap resize should that not be enough.

      Originally posted by BlackStar View Post
      (d) not only that, but the GC can (and does) keep long-lived objects separate from short-lived objects, which further reduces fragmentation. This is a huge optimization you get for free.
      See above.

      Originally posted by BlackStar View Post
      (You could try to emulate this in an unmanaged language through pool allocators, but this requires deep magic, lots of fragile code and time - that most people would prefer to spend on actual game code. And the end result will be less performant than a proper GC.)
      Why would I? If I were to manage my own memory pool, it would be tailored to my purpose and thus extremely efficient, much more efficient than anything the GC could offer. You would have to do the work yourself, but given that efficiency would be the only reason to manage your own heap/pool in the first place, the resulting performance would make up for the extra work involved.

      Originally posted by BlackStar View Post
      C# has a key advantage over C++: it is the only language that can target all three major mobile platforms: Android, iPhone and WP. If you choose C++, you lose WP, and if you choose any other language you pretty much lose both iPhone and WP.
      Hmm... while WP is the third largest platform, I'd say it is in effect a distant third after the two LARGE platforms, both of which run C++ applications, so I wouldn't call that a 'key advantage' just yet.

      Originally posted by BlackStar View Post
      Unoptimized C++ is really not the most performant thing in the world and it is very easy to blow performance there (and I have seen this time and time again by non-expert programmers).
      Of course not, but the same goes for unoptimized (as in poorly written) code in any language. The point is that a C++ program doing performance-demanding tasks will certainly be much faster than a C# equivalent written by an equally skilled programmer. I know my C code runs circles around my Python code, and I don't think I'm worse at Python (well, at least not THAT much worse) than I am at C.

      Is my memory playing tricks on me or did you and I have pretty much this same discussion here sometime before? If so did we get anywhere in our discussion?

      Comment


      • Originally posted by XorEaxEax View Post
        Last time I checked, Paint.Net relied on the native-code GDI+ library to handle the heavy lifting. Also, are there any benchmarks comparing Paint.Net with, for instance, Gimp? (Not that I think a paint/photo-editing program is in any way a good candidate for benchmarking.)
        (...)
        V8 is a JavaScript engine which is written in C++ and compiles JavaScript into machine code before running it; how this example is supposed to benefit your argument is beyond me.

        I have no problem with people writing code in C#, and I have no problem with people writing games in C#; I did argue against BlackStar's claim that C# is _particularly_ good for writing games, with performance cited as one of the benefits. Mono, by everything I've seen, is not fast relative to other languages used for writing games, and the garbage collector requires workarounds to assure smooth framerates. In short, as I've said before: if you are a C# programmer, then by all means use Mono to write your games if the performance is adequate for your project; however, I can't see why anyone wanting to write games should choose to learn C# in order to do so, unless they directly target XNA.
        Since I did not have the code for Paint.Net available, I cannot say whether its filters are written in .Net (C#) or in C++ (it has been suggested that some parts were rewritten in C++, so there may be some extra performance from the C++ side). I could, however, test Gimp against Pinta, whose codebase I know. I recommend you create a picture of, say, 1024x1024 and apply a radial blur; you may notice that Pinta runs faster than Gimp. It may in fact be slower, depending on the machine, but on a dual-core machine (i5-540M, Win7 64-bit), Gimp (32-bit) finishes applying the filter in about 7 seconds, Pinta in around 4, and Paint.Net in about 3. Note that Gimp uses just one core, while Pinta uses all of them (visible as 4 cores because of HT). I think this is important to notice, because some may argue that performance scaling is not linear, and the .Net runtime here is 32-bit, where 64-bit would perform better (Paint.Net takes about 2.5 seconds, though I don't know whether that is due to the 64-bit JIT or to the parts rewritten in C++; Paint.Net also uses all cores). So even though my testing is dodgy and anecdotal, it certainly disproves the premise that Gimp is the fastest. Even if we assume perfect scaling, we are talking about 4 seconds on 2 cores against 7 seconds on 1 core, so C++ would offer roughly a 15% per-core speedup. I do understand that I am not comparing apples with apples: Gimp may not be optimized for radial blurs, and, conversely, radial blur may be exactly where Pinta performs best.
        In fact, this reflects my experience with the --llvm backend (which I mentioned earlier): I got an even lower speedup from using LLVM's high-quality C++ code generator. Real-life code works really well in Mono, maybe not so well in some cases, but if it were up to me I would use C# for most of the code and C++ only for the last 10%, if that part were performance-critical. And since real life is much more often about the other 90%, I see C++ as less and less relevant; not for Unreal Tournament 4, maybe, but for most real-life programmers. Games also depend on more than the quality of the CPU code generator (a 20% difference is insignificant). Code size matters too, because phone CPUs have small caches, so a JIT that compiles only the small, hot part of the code may run faster on a limited device than a fully compiled version that keeps missing the cache. And it depends on other things as well, such as: does the game use the GPU well? Android 3 and 4 offer a GPU-accelerated UI; just by using it, performance goes up, because the CPU is freed for the actual calculations.
        Writing parallel code in .Net is as simple as finding a loop with no dependencies and turning it into Parallel.ForEach( collection, item => { /* loop body */ } );, so maybe that's why so much code written before the multi-core craze was ready to be parallelized. In fact, with a modern version of Mono/.Net you don't even need to include OpenMP headers, write #pragma directives, and guard them with #ifdef compiler_supports_openmp ..., which is at best funky, to say nothing of how long it will take until Gimp uses it.

        Comment


        • Originally posted by XorEaxEax View Post
          Hmm... while WP is the third largest platform, I'd say it is in effect a distant third after the two LARGE platforms, both of which run C++ applications, so I wouldn't call that a 'key advantage' just yet.
          Both Bada and WinMo beat WP in installed base. So it's the fifth or sixth platform, it's possible even LiMo beats it. Are we speaking of different stats?

          Comment


          • Apparently WP8 is supposed to support C++ (finally!), so scratch that advantage.

            @XorEaxEax, this is a recurring discussion, happening roughly every 6 months. We don't really disagree in essence: C++ is a faster language than C#. What we disagree on is the conclusion "C++ is faster than C#, so C# can't be used for games." Ciplogic and I have tried to argue why we believe this is wrong.

            In short:
            1. the existence of the GC does not preclude smooth performance. Essential memory-management optimizations apply equally well to C# as to C++ (e.g. stack allocations, memory pools). Memory mismanagement will cause problems in both cases (e.g. frequently newing/deleting objects during the game loop).

            2. C++ can achieve better performance than C# given proper optimization, sometimes significantly so. However, it is easier to achieve adequate performance in C# due to many factors, an important one being the ease of parallel processing (Parallel.ForEach and the Task Parallel Library, the new 'async' primitives). Implementing the same optimizations in C++ requires significantly more effort (OpenMP, etc.).

            3. Indie games are usually GPU-limited, not CPU-limited. By far the most CPU-intensive part is physics, and you can always use a C++ library for that. There is no reason to implement your whole game in C++!

            And a final point I haven't raised till now:
            4. Recompiling C# code on a different platform is much easier than porting C++ code. Anyone who has had to port between Linux/GCC, Visual C++ and Mac (either compiler) will readily attest to this. The more compilers you add and the more features you use, the worse the situation becomes (use C++11 and you are SOL). C# is a walk in the park in comparison, which is pretty nice for cross-platform games.

            Comment


            • Not to burst anyone's ideological open-source bubble here, but using C# or Vala (with the Mono implementation for C#, or glib for Vala) means LGPL code in your codebase, which is usually a non-starter: these platforms support only static linking, not dynamic linking, so you'd have to source out exceptions when you port to iOS or embedded game consoles. Most game code is self-contained and better suited to BSD/MIT/zlib-style licenses. You need to purchase MonoTouch for a commercial license; sadly, you can't just statically embed the Mono stack into your app yourself without the LGPL covering the other parts of your codebase.

              I'd like to be proven wrong (can I use Vala compiled in POSIX mode only without falling under the LGPL? That would rock!), but this is the reason why Ogre and Bullet (hell, even SDL 1.3 moved to zlib) are under these more liberal licenses: it's more practical when you want to port your game to consoles/embedded platforms. I'm sticking to my own C++ codebase, with SDL, and Lua for light scripting of C++ game objects, like everyone else. WinMo 8 will have P/Invoke, I'm guessing, so you'd just treat it like Android (JNI + NDK); statically compiled C/C++ still wins.

              -MistaED

              Comment


              • Originally posted by MistaED View Post
                Not to burst anyone's ideological open-source bubble here, but using C# or Vala (with the Mono implementation for C#, or glib for Vala) means LGPL code in your codebase, which is usually a non-starter: these platforms support only static linking, not dynamic linking, so you'd have to source out exceptions when you port to iOS or embedded game consoles. Most game code is self-contained and better suited to BSD/MIT/zlib-style licenses. You need to purchase MonoTouch for a commercial license; sadly, you can't just statically embed the Mono stack into your app yourself without the LGPL covering the other parts of your codebase.

                I'd like to be proven wrong (can I use Vala compiled in POSIX mode only without falling under the LGPL? That would rock!), but this is the reason why Ogre and Bullet (hell, even SDL 1.3 moved to zlib) are under these more liberal licenses: it's more practical when you want to port your game to consoles/embedded platforms. I'm sticking to my own C++ codebase, with SDL, and Lua for light scripting of C++ game objects, like everyone else. WinMo 8 will have P/Invoke, I'm guessing, so you'd just treat it like Android (JNI + NDK); statically compiled C/C++ still wins.

                -MistaED
                I think that no one disputed that C# is not the fastest; the issue is simply: is it fast enough? When Android 2.2 appeared with a JIT, a LOT of indie games did not bother with JNI. The presentation of how the JIT was made (a Google I/O talk: http://www.youtube.com/watch?v=Ls0tM-c4Vfo ) explains why: their JIT was a bit simplistic, but good enough. They also explained how code growth matters more on memory-limited devices, and that Dalvik bytecode is at least 6x smaller in memory than the full ARM code it replaces.
                I think we all agree that C++ is the fastest, given its maturity and its capacity for more optimizations, including a better register allocator (which is expensive to do at runtime in a JIT; that's why LLVM is "too slow" for JIT purposes) and loop unrolling (which is great on paper, but the JITs that do it, such as Java -server, only do it after finding it worthwhile, since it expands the code, sometimes adds register pressure, and also increases code-cache pressure).
                But I am not sure about you: most games I play on my Android device (most of them casual) run just fine with the Dalvik JIT, and at least on that specific device the limitation seems to be the "GPU" (in quotes because it is a bit underpowered). Suppose the game I play spends 60% of its frame time waiting on the GPU and renders 20 frames per second; then even my best CPU-side optimization work, going from 60% GPU + 40% Dalvik JIT to 60% GPU + 20% C++, would give roughly 24 frames per second. Visible, and great to have, altogether. But of course, if a person profiles the code and rewrites just the critical parts in C++, they will capture most of that gain: something like 22 frames per second (90% Java/Dalvik + 10% C++ for the hot code) versus 24 (all C++).
                In most cases Mono performs close to C++ code (even faster with the LLVM backend on mathematical code): in my experience within 10-20% on all integer-based code, and maybe 50-60% on FPU-based code, all without the LLVM backend. That is so close that it doesn't even make sense to talk about a theoretical 100% speedup, which sometimes never materializes, simply because the generated image grows too big for real life and no longer fits in the CPU cache; a cache miss is a bigger performance killer, by orders of magnitude, than whatever instruction arrangement C++ provides.
                What I think is that most people, when they think about Mono, think of a slower Java. And Java used to be slow, so who would want a slow, weak, Microsoft-backed Java on Linux? This is an urban myth, because Mono is sometimes faster than Java, at least because it can use structs (stack allocations instead of heap allocations) and it has free AOT compilation (which makes the application run as fully statically compiled code), so big applications start faster when written for Mono than for Java.
                In the end, I'm not a Mono/.Net apologist; I think people formed their opinions from myths and other people's views, and are not realistic about what to expect. In fact, there has always been a project close to Mono that has a visual designer and is pre-compiled, and no one here talks about it. It supports Android, iOS, Windows and Linux. For those who do not already have the answer: it is FreePascal/Lazarus, a somewhat-clone of the old Borland Delphi. No one brags about its advanced code generator (which is just marginally faster than Mono's but slower than GCC's), or worries that Embarcadero would attack Lazarus and kill it in a patent war.
                It is also amazingly easy to simply run a program, see if it fits your needs, and use it if it does or skip it if it doesn't. I use RubyMine (a Java-based IDE) for Ruby for most small processing tasks (I don't use Rails), with JRuby as the VM. I have never measured how slow Java is in an IDE, because after it has warmed up (say, 5-10 seconds of CPU time), everything really works smoothly. JRuby, and Ruby too, are somewhat slow, certainly slower than Mono, but amazingly good enough.

                Comment


                • Originally posted by BlackStar View Post
                  @XorEaxEax, this is a recurring discussion happening roughly every 6 months.
                  Ahh, so chances are there's a reason I'm starting to feel like Bill Murray in Groundhog Day; I suppose you are too.

                  Originally posted by BlackStar View Post
                  We don't really disagree in essence, C++ is a faster language than C#. What we disagree on is the conclusion: C++ is faster than C#, so C# can't be used for games.
                  Well, I never said (did I?) that C# can't be used for games. My point was that if you are a C# programmer, then using it to write games makes perfect sense; what I disagreed with is the idea that C# has some properties making it particularly appropriate for game writing as opposed to many other languages. Again, the only reason I see for 'choosing' C# for game programming, if you are not already fluent in the language and love coding in it, is if you target the XNA platform, as there are IMO no inherent game-programming advantages in the C# language itself.

                  Originally posted by BlackStar View Post
                  1. the existence of the GC does not preclude smooth performance.
                  No, but in order to ensure it you need to code around the GC, making it a burden rather than something that supposedly allows you to ignore memory management altogether.

                  Originally posted by BlackStar View Post
                  Essential memory-management optimizations apply equally well to C# as to C++ (e.g. stack allocations, memory pools). Memory mismanagement will cause problems in both cases (e.g. frequently newing/deleting objects during the game loop).
                  Sure, but in unmanaged code you can fully control the allocation/deallocation of objects during the game loop and spread them out so as not to impact updates per frame. You don't have this control over the GC, which is why you need to code around it so that it won't slow your game down by triggering unwanted garbage-collection sweeps or compactions.

                  Originally posted by BlackStar View Post
                  3. Indie games are usually GPU-limited not CPU-limited. By far the most CPU-intensive part is physics, and you can always use a C++ library for that. No reason to implement your whole game in C++!
                  Sure, but if you are going to offload everything performance-bound to native-code libraries, then there's a ton of other 'glue' languages you could use just as well as C#, which have just as much (and likely more) platform support and are easier to develop in. I've personally played around with both LÖVE (Lua) and pygame (Python) and find them to be great solutions of this kind, particularly for prototyping.

                  Of course, then there are languages directly targeting indie game development, which makes things easier still when targeting multiple platforms and programming with games in mind. Monkey springs to mind as something new I came across: it lets you write code in a language tailored for game programming and have that code translated to C++, Java, C#, ActionScript or JavaScript, depending on your needs.

                  Again, I fail to see what would make C# a language you would want to _learn_ in order to write games, as opposed to tons of other languages; again, unless you are specifically targeting the XNA platform.

                  Originally posted by BlackStar View Post
                  And a final point I haven't raised till now:
                  4. Recompiling C# code on a different platform is much easier than porting C++ code. Anyone who has had to port between Linux/GCC, Visual C++ and Mac (either compiler) will readily attest to this. The more compilers you add and the more features you use, the worse the situation becomes (use C++11 and you are SOL). C# is a walk in the park in comparison, which is pretty nice for cross-platform games.
                  Well, granted, I'm only compiling code between Linux and Windows at work, but since we use GCC on both platforms it's not a problem, and it's not as if you can't specify a targeted standard (C89/C99/C++98/C++11, -pedantic, etc.) which you know is supported across the compilers you want to use. I don't see that as a general problem at all, which is further backed up by all the C/C++ code flowing between platforms like Linux/Windows/OSX.

                  I guess we have to agree to disagree, as it seems this discussion is going nowhere (again?). We'll simply have to see what happens.

                  Comment


                  • Originally posted by XorEaxEax View Post
                    Sure, but if you are going to offload everything performance-bound to native-code libraries, then there's a ton of other 'glue' languages you could use just as well as C#, which have just as much (and likely more) platform support and are easier to develop in. I've personally played around with both LÖVE (Lua) and pygame (Python) and find them to be great solutions of this kind, particularly for prototyping.
                    They are great solutions, nothing against them. C# is yet another great solution.

                    Of course, then there are languages directly targeting indie game development, which makes things easier still when targeting multiple platforms and programming with games in mind. Monkey springs to mind as something new I came across: it lets you write code in a language tailored for game programming and have that code translated to C++, Java, C#, ActionScript or JavaScript, depending on your needs.

                    Again, I fail to see what would make C# a language you would want to _learn_ in order to write games, as opposed to tons of other languages; again, unless you are specifically targeting the XNA platform.
                    In the same sense, why would you ever learn any language other than C++?

                    Every language has its advantages. I find that C# has a great combination of code expressiveness, performance, supporting tools and documentation. Other languages have these features in different combinations but, as a whole, I find C# ranks among the best.

                    And not only for XNA. C# has much cleaner OpenGL bindings than Java or even Python.

                    Code:
                    // Java (LWJGL)
                    float[] dataArray = { 1, 2, 3 };
                     FloatBuffer data = ByteBuffer.allocateDirect(dataArray.length * 4)
                             .order(ByteOrder.nativeOrder()).asFloatBuffer();
                     data.put(dataArray).flip();
                    GL15.glBufferData(GL15.GL_ARRAY_BUFFER, data, GL15.GL_STATIC_DRAW);
                    GL20.glCreateShader(GL20.GL_FRAGMENT_SHADER);
                    GL11.glBegin(GL11.GL_TRIANGLES);
                    GL11.glVertex3f(0.0f, 0.0f, 0.0f);
                    
                    // C# (OpenTK)
                    var data = new float[] { 1, 2, 3 };
                    GL.BufferData(BufferTarget.ArrayBuffer, data, BufferUsageHint.StaticDraw);
                    GL.CreateShader(ShaderType.FragmentShader);
                    GL.Begin(BeginMode.Triangles);
                    GL.Vertex(0.0f, 0.0f, 0.0f);
                    I don't know about you, but I find the second has significantly less visual noise and is much easier to use (with strongly typed parameters that get autocompleted by the IDE). The Java code is simply painful in comparison: it repeats "gl" and "GL" up to six times in a single line, it destroys autocomplete ("GL11." will bring up a list of ~800 symbols), and it doesn't offer any semblance of type safety (you can pass GL_TRIANGLES to glCreateShader() and the compiler won't object).

                    I simply don't know how people put up with this madness in 2012. I've actually written (and released) my own C++ header to take care of the problem and I'm now working on Java and Python bindings.

                    Well, granted I'm only compiling code between Linux and Windows at work but since we use GCC on both architectures it's not a problem, and it's not as if you can't specify a targeted standard C89/C99/C++98/C++/C++11 -pedantic etc which you know is supported across the compilers you want to use. I don't see that as a general problem at all, which is further backed up by all the C/C++ code flowing between platforms like Linux/Windows/OSX etc.
                    The problem is that you can't use very valuable features of C99 and C++11 because the compiler support simply isn't there. Contrast that with C#/Mono, which had implemented the C# 5.0 feature set (async, etc.) *months* before the official 5.0 release.

                    I guess we have to agree to disagree, as it seems this discussion is going nowhere (again?). We'll simply have to see what happens.
                    The journey is the destination, my friend.

                    Comment


                    • Originally posted by XorEaxEax View Post
                      Well, I never said (did I?) that C# can't be used for games. My point was that if you are a C# programmer, then using it to write games makes perfect sense; what I disagreed with is the idea that C# has some properties making it particularly appropriate for game writing as opposed to many other languages. Again, the only reason I see for 'choosing' C# for game programming, if you are not already fluent in the language and love coding in it, is if you target the XNA platform, as there are IMO no inherent game-programming advantages in the C# language itself.
                      (...)
                      No, but in order to ensure it you need to code around the GC, making it a burden rather than something that supposedly allows you to ignore memory management altogether.
                      (...)
                      Sure, but in unmanaged code you can fully control the allocation/deallocation of objects during the game loop and spread them out so as not to impact updates per frame. You don't have this control over the GC, which is why you need to code around it so that it won't slow your game down by triggering unwanted garbage-collection sweeps or compactions.

                      Of course, there are also languages directly targeting indie game development, which makes things easier still when targeting multiple platforms and programming with games in mind. Monkey springs to mind as something new I came across: it lets you write code in a language tailored for game programming and then have that code translated to C++, Java, C#, ActionScript, or JavaScript depending on your needs.

                      Again, I fail to see what would make C# a language you'd want to _learn_ in order to write games as opposed to tons of other languages, unless, once more, you are specifically targeting the XNA platform.
                      I'm fluent in C++ (in fact I think almost everyone on this forum knows some form of C/C++), and I see C++ as particularly good for a lot of games. But Miguel's point was about defining game logic, which is most often done in a scripting language: writing that logic in Mono/C# makes a lot of sense, both performance-wise (at least compared with any other mainstream game scripting option, which is basically some in-house CompanyTmScript or Lua) and in terms of language quality.
                      As I think anyone who uses both C# and C++ will agree, C# is nicer to work with: it spares you double-deleted pointers, and circular references don't force you into boost::smart_ptr templates.
                      Compared with Lua it gets even better: Lua is the slower performer (at least in interpreted form), has no big advantage apart from a very small runtime, and offers the same memory management via a GC, as described here: "Lua does automatic memory management. A program only creates objects (tables, functions, etc.); there is no function to delete objects. Lua automatically deletes objects that become garbage, using garbage collection."
                      Both Mono (SGen) and Microsoft's .NET let you set memory pressure limits so you can have a predictable GC. Since .NET 3.5 SP1 you can get notifications of approaching full GCs (as described here), so you can do something just before an "unexpected" pause would occur.
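                      A minimal sketch of that notification mechanism, assuming concurrent GC is disabled in the app config (`<gcConcurrent enabled="false"/>`, which `GC.RegisterForFullGCNotification` requires). The thread body and threshold values are illustrative choices, not a definitive pattern:

```csharp
using System;
using System.Threading;

class GcWatcher
{
    // Watch for an approaching full (gen 2) collection on a background
    // thread, so the game can pick a safe moment to take the pause.
    public static void Start()
    {
        // Thresholds (1-99) tune how early the warning fires, based on
        // gen 2 / large-object-heap growth; 10 is an arbitrary example.
        GC.RegisterForFullGCNotification(10, 10);

        var watcher = new Thread(() =>
        {
            while (true)
            {
                // Blocks until the runtime thinks a full GC is near.
                if (GC.WaitForFullGCApproach() == GCNotificationStatus.Succeeded)
                {
                    // Good moment to show "Loading...", stop spawning,
                    // or trigger the collection yourself at a safe point:
                    GC.Collect();
                }
                if (GC.WaitForFullGCComplete() == GCNotificationStatus.Succeeded)
                {
                    // Resume normal per-frame work.
                }
            }
        });
        watcher.IsBackground = true;
        watcher.Start();
    }
}
```

Note the trade-off: you convert an unpredictable mid-frame pause into a pause you schedule yourself.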
                      MovEaxZero, I noticed that you are really strict about the pause times a GC would introduce. Why not also take into account that when you allocate with malloc/new, the OS may swap to disk, which takes about as long as a full GC? If the disk was in standby it can even take seconds (games rarely touch the disk, but if the OS decides it has to swap, your game would freeze for 0.5 seconds and everyone would notice). What would a user do? Delete the game because of this? I don't think so.
                      Instead of thinking about how to handle a full GC, simply avoid it! It's easy: allocate most objects on the stack, and keep heap-allocated objects small and with few dependencies, so they die in the first GC generation instead of filling the old generation and inflating full-GC time, if one ever occurs. If you need to allocate a huge batch of objects, preallocate them in object pools. Want to dodge a full GC at a bad moment? Trigger one just before the user enters a menu: they will notice a small lag loading the menu, but they would wait there anyway, and a "Loading..." text tells them to expect it.
                      These are common-sense pieces of advice, and if you follow them, at least for the game-loop logic, I am almost certain you will not face GC pauses (don't use strings; they are unlikely to be needed anywhere in a game loop).
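                      The preallocation advice above can be sketched as a simple object pool. `Bullet` and `BulletPool` are hypothetical names for illustration, not from any real engine:

```csharp
using System.Collections.Generic;

// A hypothetical pooled game entity: small, no references to other
// heap objects, exactly the shape the advice above recommends.
class Bullet
{
    public float X, Y, VelX, VelY;
    public bool Active;
}

class BulletPool
{
    private readonly Stack<Bullet> free = new Stack<Bullet>();

    public BulletPool(int capacity)
    {
        // All allocation happens up front, so the game loop itself
        // produces no garbage and never provokes a collection.
        for (int i = 0; i < capacity; i++)
            free.Push(new Bullet());
    }

    public Bullet Spawn(float x, float y)
    {
        // Reuse a dead bullet instead of new-ing one mid-frame.
        if (free.Count == 0)
            return null; // pool exhausted; caller decides what to do
        Bullet b = free.Pop();
        b.X = x; b.Y = y; b.Active = true;
        return b;
    }

    public void Despawn(Bullet b)
    {
        b.Active = false;
        free.Push(b); // back to the pool, still reachable: never garbage
    }
}
```

Since every `Bullet` stays reachable from the pool for the program's lifetime, nothing ever becomes garbage, which is the whole point.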
                      Would you want to learn C# for making a game? If you have five platforms to support: Windows, Linux, OS X, iOS and Android (not to mention WP7, where you have no choice), the answer would most likely be that it makes more sense to use C# (if the company pays the licenses) than C++. C# is basically free on three platforms, and for roughly the price of the IDE on Windows (the price of Visual Studio) you get the Mono tools for iOS and Android.
                      C++ is painful to use with new features (OpenMP, for example, is not supported on Android, as described here): Visual Studio uses the Microsoft compiler, Linux and Android use GCC, and OS X and iOS use Clang/LLVM, each with small incompatibilities.
                      There is still Lazarus (mentioned in the previous post) if you like Pascal and don't want a GC-based language. You cannot use it for scripting, but it's fine for everything else.
