Miguel de Icaza Calls For More Mono, C# Games


  • squirrl
    replied
    Originally posted by Togga View Post
    Well, let's summarize .NET:
    * you have to go "all in" on .NET since calling .NET components from the outside is a real pain
    * you are tied to a GC (with a platform-dependent implementation)
    * you are tied to an object-oriented design with no multiple inheritance
    * you have to bring the .NET runtime _AND_ libraries to every platform you target
    * MSIL and the CLR/JIT restrict what you can do and what creative solutions you can invent
    * low-end platforms are not suitable for .NET
    * a reduced number of languages to choose from
    * languages other than the standard C# must be specialized and restricted to .NET (you can't just use upstream compilers and libraries)
    1. All in? As opposed to what? Fundamentally, there is no platform today where you get to pick and choose freely; Gnome and KDE are examples.
    2. Yeah, see #1.
    3. Negative, and it's been proven that a well-thought-out design doesn't need that mess.
    4. #1 again.
    5. So do Java, C++, and everything else. Those are the boundaries.
    6. Malarkey. .Net has been out for more than 10 years. My P3 Celeron with 256 MB of RAM ran it with no problems. Quake 2 was ported to it.
    7. F#, C#, C++, Python, VB, and you can even program in native assembler. That's more than most Linux distributions ship with.
    8. Upstream is a pipe dream; haven't you learned that developers only listen to money stuffed in their ears? Unity, Gnome Shell and KDE...

    When you graduate from university and land that first real job, come back and update your post. Reflect on the lessons learned.
    Let us know how rewriting spaghetti code has changed your mentality. When you don't have 6 months to embrace a multiply-inherited
    monster strewn across 6 levels of directory paths, let us know how many bottles of bismuth you went through before you switched jobs.

    Respectfully,

  • ciplogic
    replied
    Originally posted by Togga View Post
    (...)
    And we only had some syntactic sugar on the positive side?
    (...)
    I do not know why you're pushing .NET/Mono so hardcore; it has no obvious advantages and some obvious disadvantages. The problem space out there is in general so complex that you can't just stick with one solution. Given that, and that it's always tough to predict the future, the rule should be "be humble and choose the least restrictive (for future changes) tools for the problem". If you ask me, I'm starting to smell pure marketing...
    I told you, I don't push .Net/Mono that hard. If you noticed, I give real numbers most of the time, and I don't try to oversimplify the advantages of one language over another. I do see C++'s advantages at times: for example, when I benchmarked Paint.Net, Gimp and Pinta, I noticed that on a single-core machine Gimp would certainly be the fastest. So I said it is too simplistic to judge C#/Mono as slow just because the C# code generator produces code that may be 15% slower, when it is also possible for C# to gain extra performance by using all the cores.
    I also worked for many years in C++, in CAD, aviation and such. CAD, I can say, doesn't need more than .Net's performance, which is why all the CAD packages include .Net plugin support; even where there is a performance hit, it is not much of one. Aviation is very performance-dependent, yet most of its software is developed in C++ (of course), Ada and... Java. And if you think Java was adopted later just because it was too slow: as far as I know from the inside, it was because there was not enough expertise with Java, and once more people knew it, Java became popular (it was the second option in popularity, the first still being C++).
    I never worked in embedded, though I did write some programs for Android phones, one of them a chess engine (not mine, my brother's). It was visibly slow under the interpreter, but it answers fast under the Android JIT, meaning the wait for the next move is in the range of 0.1 - 0.2 seconds. I know this could sometimes be improved further using C++, but to use C++ you have to raise the platform requirement to Android 2.1 (talking about restrictions), as that is the first version that exposes the JNI to end users.
    Also, I think it is naive to dismiss the Mono platform as just syntactic sugar; most people think of C# at least as a language with batteries included (which you simply seem to ignore), and integrating an extra library is usually far easier (adding a reference to an assembly means you don't have to set up headers, environment variables and linker libraries).
    As for me, the most important things I love about C# are that it is closer to the Python world than C++ could be, it gives adequate performance for most tasks, and it has easier interoperation with C++ than Java does (in fact it is somewhat easier than the C++ way, since when you export things from a C++ DLL you often have to set up macros, calling conventions like __stdcall for Windows and __cdecl for unixes, extern "C" and so on).
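    For illustration, here is roughly what the C# side of such interop looks like via P/Invoke; the library name "engine" and the add function are made up for the example:
    Code:
    using System.Runtime.InteropServices;
    
    static class Native
    {
        // Binds to a hypothetical C export: int add(int a, int b);
        // One attribute replaces the headers, import libraries and linker settings.
        [DllImport("engine", CallingConvention = CallingConvention.Cdecl)]
        public static extern int add(int a, int b);
    }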
    The restrictions (if you mean that you can't double-delete a pointer, or run past an array's bounds) are, I would argue, sometimes good to have. You can achieve this with the STL too, but in big applications, if you don't write const vector<...>& or vector<...>& everywhere, as a lot of beginners don't, you will get an application that performs worse than the C# equivalent, regardless of your expectation that C++ is the fastest. I know that string and pointer arithmetic can be a really great thing in the C world (if you work in that world), but don't forget that C# at least gives you unsafe mode, which makes pointer arithmetic possible for image manipulation (this is how Pinta's effects code works: it gets a pointer to every row and processes the pixels of that row).
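    A minimal sketch of that row-pointer pattern, assuming a 32-bit BGRA surface; the method and parameter names are invented for the example:
    Code:
    using System;
    
    static unsafe class Effects
    {
        // Invert every channel of an image in place (alpha too, for brevity).
        // basePtr points at the first pixel; stride is the row size in bytes.
        public static void Invert(IntPtr basePtr, int width, int height, int stride)
        {
            for (int y = 0; y < height; y++)
            {
                byte* row = (byte*)basePtr.ToPointer() + y * stride; // pointer to row y, no bounds checks
                for (int x = 0; x < width * 4; x++)
                    row[x] = (byte)(255 - row[x]);
            }
        }
    }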
    In the end, one thing I agree with you on: most CAD systems were written 20 to 30 years ago and have just added features since. As Java/C# support is fairly new for these codebases, it was added later, and people making plugins for them needed C/C++ regardless of the CAD vendor's preference. The problems you need to solve have to speak a language that most things know. This makes it clear that most tools have to expose C (I think C, not C++, is the "lingua franca" of interoperability, and luckily C# can interact with it directly; Java 7 also made this possible via a decorated interface over libffi). Also, since the maturity of a codebase is the elephant in the room for most companies, working with a very cool platform that has no support can be a really hard way to go. So I see C at least as the frontend for exposing functionality you want to build a future on. In the context of games, if you make a library that you want to have as many customers as possible, you may write it in C/C++ and expose a C frontend, and that part is really great.
    If you're talking about tools, like the level designer, the "sugar" is in the tooling that lets you make UIs quickly, process XML easily and connect to databases; developers may want to write that in, let's say, Java. The game engine can be exported as a lib.so and embedded in a window. This also makes it easier to isolate whether the problems are in the engine or in the tooling. Arguing "why Java and why not Python" makes no sense here: yes, Python is great, and it is just another VM. This is not to "demonize" any solution, but Java gets close to C speed (even with some penalty: if you have a hot loop and you want it to be fast, in Java you simply write it in Java, whereas in Python you would have to write it in C and import it as a Python extension, because Python itself is too slow).
    As for most tasks, at least where I work (I work in C#, and I like it more than the Delphi coding I do at times), most of the time the way to increase performance is simply to put the work on a separate thread. This was discussed earlier, but since machines have multiple cores and they are easy(er) to use from a C#-like language (Go can be another), the performance lost relative to the C++ code generator is more than compensated for by using (more of) the cores the machine has.
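    A minimal sketch of that, assuming .Net 4 or a recent Mono with the Task Parallel Library; ProcessRow is a hypothetical stand-in for the per-item work:
    Code:
    using System.Threading.Tasks;
    
    static class Worker
    {
        public static void ProcessAll(int height)
        {
            // Spread independent iterations across all available cores with one call.
            Parallel.For(0, height, y => ProcessRow(y));
        }
    
        static void ProcessRow(int y) { /* hypothetical per-row work; rows must be independent */ }
    }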
    Last edited by ciplogic; 03 March 2012, 07:10 AM.

  • Togga
    replied
    Originally posted by ciplogic View Post
    Mono's runtime DLLs (mono.dll, mono.exe, system.dll, system.Xml.dll) on Windows are around 4 MB
    Here you "forget" the fact that if you have to do something besides just running the code you must bring in additional frameworks (ADO,GUI,...), which in most cases (where performance matters) just are wrappers against C frameworks or written in C. You also may end up writing wrappers around existing C libraries.

    Also, do not forget that today you can easily target a VM with C/C++ using, for instance, LLVM: that adds one JIT to the target but requires no additional "framework", just the usual C libraries installed or linked to the JIT.

    Well, let's summarize .NET:
    * you have to go "all in" on .NET since calling .NET components from the outside is a real pain
    * you are tied to a GC (with a platform-dependent implementation)
    * you are tied to an object-oriented design with no multiple inheritance
    * you have to bring the .NET runtime _AND_ libraries to every platform you target
    * MSIL and the CLR/JIT restrict what you can do and what creative solutions you can invent
    * low-end platforms are not suitable for .NET
    * a reduced number of languages to choose from
    * languages other than the standard C# must be specialized and restricted to .NET (you can't just use upstream compilers and libraries)

    And we only had some syntactic sugar on the positive side?

    I don't count the JIT or the GC on the positive side here, as you can use them in most other languages as well (e.g. C/C++), and they are just a burden if you don't need them.

    Compiled binary size might be a factor if MSIL turned out to be much smaller than other intermediate formats (LLVM bitcode) or compressed code. I don't know whether it is, but in practice the additional size of a register-based VM or of native code is usually worth it.


    Originally posted by ciplogic View Post
    Android will be "reincarnated" on more platform/hardware combinations: MIPS, x86. It is impossible to target any future CPU with a static approach.
    C/C++/etc. aren't any more "static" than C#; they are just different languages. But if you interpret ".NET" as dynamic, that is mostly just bullshit, since in general the more abstractions you have, the harder it is to optimize for new kinds of situations. And what if the GC is the worst component here?

    Don't forget: when a new HW platform arrives, it has always been the case that the HW vendors work very hard to get a C compiler and libraries up and running (which Mono needs anyway). If you are stuck with .NET, you may also have to wait for Miguel to work some additional overtime...

    I do not know why you're pushing .NET/Mono so hardcore; it has no obvious advantages and some obvious disadvantages. The problem space out there is in general so complex that you can't just stick with one solution. Given that, and that it's always tough to predict the future, the rule should be "be humble and choose the least restrictive (for future changes) tools for the problem". If you ask me, I'm starting to smell pure marketing...

  • ciplogic
    replied
    Originally posted by Togga View Post
    Again, you're missing my point. If disk space is tight, we're probably on an embedded device. Let's say we have an RTOS with a C library. What consumes more space: a hello world in C, or a hello world that requires the Mono stack to come along? For me, Mono and .NET really have nothing to offer. Couldn't you prove me wrong with something other than saving a few characters of code? Nobody would be happier than me if you did, since then I'd have learned something.
    So your point, if I got it right, is: adding an extra library for hello-world-like code is extra baggage that sometimes is not justified. I think there is nothing to argue about there. Making Mono work on a slow phone may not be worthwhile, nor on a very low-end tablet with a 400 MHz ARM if your application is compute-intensive. On the other hand, in most cases software is not written just for those devices. If we look at language popularity, Java seems to be the most popular language, and the reason may simply be that it is there by default: either installed by OEMs or included in the install package, Java is very often already there. Mono's runtime DLLs (mono.dll, mono.exe, system.dll, system.Xml.dll) on Windows are around 4 MB, and I think that could be made smaller if I went to cygwin and picked compilation flags to disable some of the features that come in the package. But the most important part is that Windows includes .Net either as part of the OS (Vista or Win7) or as an update (Windows XP). Windows 8 will also have a form of .Net 4.5 as part of the WinRT platform.
    For this reason, a hello world in the WinRT world will be just a few KB on disk. It doesn't matter if .Net represents, let's say, 30% of a Windows 8 tablet install (it is smaller, but take that as a round number so we don't argue about it), because the applications we get over the internet, which together make up what Win8 represents, carry no extra runtime burden for the developer to provide.
    The very same goes for Android and the Java language: it is part of the phone, not extra baggage, and even better, the bytecode is smaller than the corresponding fully compiled code, so a Java application is always smaller than a compiled one.
    But if we're talking about a custom solution, let's say a complex game, it is legitimate to ask what should be picked: C/C++ code, or my application with an embedded VM. The answer, I think, is always: it depends. As for me, if the VM is not running as an interpreter, you have a profiler (so you can write the slow parts well), and the VM is included as part of the OS, I think it is better to target the VM; hunting for leaks is always ugly. On higher-end tablets, 5 MB extra for an application like this Android one (44 MB download size) is unlikely to be noticed. In most cases, because Mono has more functionality and generics compile at runtime, there are no mass template expansions making huge binaries, so you may have a smaller application to start with. Also, MSIL is more compact than native binaries (a Dalvik presentation states that Dalvik bytecode is likely 6x smaller than the tracing JIT's generated code, and a GCC-like compiler can generate even bigger code).
    In the end, if the generated binary code can get some 5x bigger than the "high level" MSIL/Dalvik or JVM bytecode, then the bigger the application, the bigger the savings. And one last thing: Android will be "reincarnated" on more platform/hardware combinations: MIPS, x86. It is impossible to target any future CPU with a static approach. So the easiest way is still to write Java, or perhaps use Mono and just recompile, and let Xamarin hassle with GCC flags and issues like that.

  • Togga
    replied
    Originally posted by ciplogic View Post
    Using the OS's overhead to draw things is not justifiable. Find an OS today that does not occupy at least 100 MB of RAM and 1.5 GB of disk (that is XP) or 400 MB of RAM and 10 GB of HDD (Vista/7).
    Well, there are always busybox/<some-embedded-os> combos out there. Pretty much every component choice means restrictions in some way. I think C gives me the fewest restrictions; everything outside that I choose carefully, unless I just want to experiment, have a limited scope, or need something to test or educate with.

    Originally posted by ciplogic View Post
    If you mean that using is longer than #include, that seems not to be the case, if my counting helps.
    You're correct. But my point about syntactic sugar is that I'd rather add a few extra bytes of code than take on unnecessary dependencies in production software.


    Originally posted by ciplogic View Post
    And don't forget: because you have a GC, you don't have to write destructors in most cases, isn't that so? Don't you enjoy writing them?
    If needed, to use GC in C++ you can always do something like
    Code:
    gc_ptr<Class> ptr = new Class(); // a GC-aware smart pointer, e.g. built over the Boehm collector
    You can in fact implement any memory-management scheme this way, and you do not have to give up (the syntactic sugar for) multiple inheritance in the process (losing it is a big loss for a language that is supposed to be object-oriented). The same thing can be achieved in pure C.


    Originally posted by ciplogic View Post
    In the end you appear not to be consistent with your values
    I am consistent with my values, which are to be as flexible as possible. Maybe I use a hard drive today, but I can easily move my code to a pure RAM-based solution, etc. Using Mono to save a few lines of code and get some syntactic sugar is not even on the table (unless it's for fun or something else outside the professional sphere). As I mentioned earlier, there are plenty of other options if you want neat syntax, but even that is not a main priority for me.


    Originally posted by ciplogic View Post
    A hello world in Mono? Some KB, under 10. A hello world in C++? Give me your numbers. Count them across many applications and you will think twice about the saved space... you seem so conscious about saving disk space: start with C++, it is a great place to start shedding.
    Again, you're missing my point. If disk space is tight, we're probably on an embedded device. Let's say we have an RTOS with a C library. What consumes more space: a hello world in C, or a hello world that requires the Mono stack to come along?

    For me, Mono and .NET really have nothing to offer. Couldn't you prove me wrong with something other than saving a few characters of code? Nobody would be happier than me if you did, since then I'd have learned something.

    And yes, I have programmed in C# for a few years (for work, besides curiosity projects to learn new things); there are C# customers out there too.
    Last edited by Togga; 02 March 2012, 04:07 PM.

  • Togga
    replied
    Originally posted by BlackStar View Post
    Ciplogic, why are you even replying to Togga? It's obvious he's trolling, and badly at that, too.
    Definitely not. But it's easier to just call people trolls than to try to understand what they say...

  • ciplogic
    replied
    Originally posted by XorEaxEax View Post
    In areas where performance is very important, managed code is indeed absent (compression, encoding, rendering, etc.) due to its performance penalties.
    In fact I want to address just one topic: encoding and rendering (though rendering can work on a Java runtime, as SunFlow shows) are really CPU-bound, yet compression/decompression is a different matter.
    Compression performance depends on multiple parts: IO (most of the time), branchy code (which depends on the CPU's branch prediction), many live variables (so it depends on register pressure), and CPU cache saturation; and most of the time it is hard to parallelize. If you take DotNetZip (http://dotnetzip.codeplex.com/) and use it to compress a big archive (I tried a Fedora ISO under Windows), you may be surprised that the compression runs faster under Mono (on 32 bits) than in Total Commander (32 bits). That may just be because Total Commander uses Delphi, which is known not to be *that* efficient at generating code. Also, as JITs most likely compile a smaller portion of the code than all the paths a static compiler emits, the result likely fits better in the CPU cache, so a combination of fewer cache misses plus a slightly better code generator may make JIT code faster than natively compiled code.
    The difference is even bigger with the same DotNetZip if you use the default settings. Most native applications are compiled as 32-bit (as they need to run on most Windows installs), while in .Net, if you know you don't interact with the 32-bit world, you will get a 64-bit version with more registers to spare, so you get even faster generated code.
    In fact, most Java tools work with JARs (which are decorated Zip files), which may be faster than accessing the raw .class files, as the CPU compensates for the lower speed of a rotating disk. So they may be even faster than plain files, or the impact is negligible on SSD drives (as described here).

  • ciplogic
    replied
    Originally posted by BlackStar View Post
    You are correct here, managed code is indeed absent in compression and encoding. However, the story is a little more complex. Code like:
    Code:
    a = b + c;
    or
    Code:
    // C#
    foreach (var a in items)
        // do something
    
    // C++
    for (std::vector<foo>::const_iterator it = items.begin(); it != items.end(); ++it)
        // do something
    perform identically between C# and C++. There's no inherent disadvantage in a statically-typed managed language vs a statically-typed unmanaged language.
    And in fact sometimes it performs even better.
    Note: thanks for the heads-up about Togga. Regards, and have a great time programming.

  • BlackStar
    replied
    Originally posted by XorEaxEax View Post
    In areas where performance is very important, managed code is indeed absent (compression, encoding, rendering, etc.) due to its performance penalties.
    You are correct here, managed code is indeed absent in compression and encoding. However, the story is a little more complex. Code like:
    Code:
    a = b + c;
    or
    Code:
    // C#
    foreach (var a in items)
        // do something
    
    // C++
    for (std::vector<foo>::const_iterator it = items.begin(); it != items.end(); ++it)
        // do something
    perform identically between C# and C++. There's no inherent disadvantage in a statically-typed managed language vs a statically-typed unmanaged language.

    So why don't you see an h.264 compressor in C#? Two reasons:
    (a) compressors and encoders tend to be released as reusable libraries, which means C (because C is the only globally understood ABI; C# and Python can both use C, but not vice versa);
    (b) SIMD optimizations.

    While something like:
    Code:
    a = b + c
    may perform identically between C# and C++, it can often be made 4x faster via 128-bit vector instructions. That is why you will often see hand-rolled assembly in the core of such libraries, rather than C++.
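    For illustration, a sketch of that idea in C# using Mono.Simd (the SIMD assembly Mono shipped at the time); it assumes the array length is a multiple of four and that the JIT maps Vector4f onto SSE registers:
    Code:
    using Mono.Simd; // Mono.Simd.dll, mapped to hardware vector instructions by Mono's JIT
    
    static class VectorMath
    {
        public static void Add(float[] a, float[] b, float[] c)
        {
            // Four additions per iteration; the + below becomes a single 128-bit add.
            for (int i = 0; i < a.Length; i += 4)
            {
                Vector4f v = new Vector4f(b[i], b[i + 1], b[i + 2], b[i + 3])
                           + new Vector4f(c[i], c[i + 1], c[i + 2], c[i + 3]);
                a[i] = v.X; a[i + 1] = v.Y; a[i + 2] = v.Z; a[i + 3] = v.W;
            }
        }
    }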

    In other words, the distinction here is not actually managed vs unmanaged, it is high-level vs low-level. The lower the level, the more micro-optimizations you can make - and these are what make the difference in tasks such as compression and encoding.

    Edit:

    Well, I can't say I've done much game programming (and even less OpenGL programming); in fact it has been many years since I did anything remotely like it. That said, I can't imagine a game where you'd write special OpenGL code for a secret area. As I see it, you write an engine that handles all drawing, and that engine is thoroughly tested during development; a secret area, just like any other part of the game, is defined using tools (map editor, event editor, etc.) which in turn access the drawing engine through a higher-level interface, not by anyone hacking together specific OpenGL code for that part of the game.
    Pretty much. However, it could be that a specific combination of (invalid) OpenGL states only appears at a specific location in the game. Or it could be that that area contains an asset with a problematic shader. It happens, and my point was that the more you can trust your compiler, the better.

    In fact, that's why the WebGL spec goes to such lengths to allow static verification of the code. It throws away pretty much all OpenGL ES parts that are not statically verifiable - and Google has built a WebGL compiler that checks your code and refuses to execute it when it detects anything unverifiable. (I'm pretty sure Mozilla is using the same compiler, at least on Windows). This is pretty much impossible in desktop OpenGL, where the default headers are full of untyped enums (ints) and void* casts. (Even the "hardened" headers I am using have issues due to the schizophrenic OpenGL design - but that's another discussion entirely.)
    Last edited by BlackStar; 01 March 2012, 08:11 PM.

  • XorEaxEax
    replied
    Originally posted by BlackStar View Post
    (a) This relies on the programmer to insert GLERROR() calls at proper places.
    (b) This only detects errors at runtime, even though they could be detected by the compiler.
    Yes, I wasn't arguing against that; this was directed at your statement that people don't use glGetError() due to the performance loss. I just wanted to show that with a macro you can easily enable/disable this check, and thus avoid the performance impact once you know the code is functional.
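    The same on/off idea carries over to managed code: a sketch using a [Conditional] method, which the compiler strips from release builds much like an #ifdef'd-out macro; OpenTK-style GL bindings are assumed here:
    Code:
    using System;
    using System.Diagnostics;
    using OpenTK.Graphics.OpenGL; // assumed OpenTK-style bindings
    
    static class GlDebug
    {
        // Call sites vanish entirely unless DEBUG is defined at compile time.
        [Conditional("DEBUG")]
        public static void CheckError()
        {
            ErrorCode error = GL.GetError();
            if (error != ErrorCode.NoError)
                throw new InvalidOperationException("GL error: " + error);
        }
    }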

    Originally posted by BlackStar View Post
    These issues might not matter for trivial applications. But what if you are developing a non-trivial game and the faulty code is only executed near the end of level 5, when the player tries to enter a non-essential secret area? It might be weeks before a tester encounters and reports the issue!
    Well, I can't say I've done much game programming (and even less OpenGL programming); in fact it has been many years since I did anything remotely like it. That said, I can't imagine a game where you'd write special OpenGL code for a secret area. As I see it, you write an engine that handles all drawing, and that engine is thoroughly tested during development; a secret area, just like any other part of the game, is defined using tools (map editor, event editor, etc.) which in turn access the drawing engine through a higher-level interface, not by anyone hacking together specific OpenGL code for that part of the game.

    Originally posted by BlackStar View Post
    This goes to show that code with inefficient memory management will perform slowly in any language. Not only that, but the relevant optimizations are pretty much identical in all languages: remove memory allocations from the fast path.
    Which proves nothing other than that you can write slow code in any language; it's how fast you can make your code that is of interest. Well-written native code beats well-written managed code in speed and resource usage; there's really no point in arguing this.

    In areas where performance is very important, managed code is indeed absent (compression, encoding, rendering, etc.) due to its performance penalties. Obviously this is not news to anyone who understands the difference between managed and unmanaged code; if managed code had its advantages without the corresponding performance and resource-usage penalty, we'd all be using managed code for everything these days. However, these performance issues are real and computing power is not infinite. For a large array of problems the performance penalty makes no difference, and there C#, Java and lots of scripting languages are popular solutions, as in the enterprise sector where VB once reigned supreme.

    Personally I enjoy both sides of the spectrum: I like the ease and speed of development in languages such as Python, I also like the low-level power and flexibility of languages such as C/C++, and in particular I like how I can combine them.
