Miguel de Icaza Calls For More Mono, C# Games


  • BlackStar
    replied
Ciplogic, why are you even replying to Togga? It's obvious he's trolling, and badly at that.

    Here is something interesting that caught my eye today:
    Originally posted by https://code.launchpad.net/~vanvugt/compiz-core/fix-940139/+merge/94715/comments/204303
    The performance implication is huge. Compiz was spending most of its time in malloc/free, which was mostly due to "new PrivateRegion" and "delete PrivateRegion". So PrivateRegion had to go. Although after this fix most of compiz' CPU usage is still due to excessive malloc/free/new/deletes, at least it's no longer mostly due to CompRegion.
This goes to show that code with inefficient memory management will perform slowly in any language. Not only that, but the relevant optimizations are pretty much identical in all languages: remove memory allocations from the fast path.
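The fix described above can be sketched in a few lines of C++. The names below are illustrative, not compiz's actual code; the point is simply that hoisting the per-call allocation out of the hot path and reusing one buffer removes the malloc/free churn:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Slow variant: a fresh vector means a malloc/free pair on every call.
long sum_slow(const int* data, std::size_t n) {
    std::vector<int> copy(data, data + n);  // allocates every time
    long s = 0;
    for (int v : copy) s += v;
    return s;
}

// Fast variant: the caller owns the buffer; assign() reuses its existing
// capacity, so steady-state calls perform no heap allocation at all.
long sum_fast(const int* data, std::size_t n, std::vector<int>& scratch) {
    scratch.assign(data, data + n);  // no malloc once capacity suffices
    long s = 0;
    for (int v : scratch) s += v;
    return s;
}
```

Same optimization, same shape, whether the surrounding language is C++, C#, or Java: allocate once, reuse many times.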



  • ciplogic
    replied
    Originally posted by Togga View Post
No. I'm not confused by this. I know I have to port everything I use, including the VM and all the libraries, which might be a subset of .NET. The VM overhead is just not justifiable to begin with for some syntax sugar that I can get anyway if needed. The .NET platform itself does not have anything of value to offer here, be it 2MB or 20MB. Besides this, we have already seen that C# is more verbose than necessary for GL calls. So syntax doesn't even appear to be its strong side. It's then a lose-lose situation.
By that logic, the OS overhead to draw things is not justifiable either. Find an OS today that doesn't occupy at least 100 MB of RAM and 1.5 GB of disk (that's XP), or 400 MB of RAM and 10 GB of HDD (Vista/7).
As for verbosity in exchange for safety: you undermined your own point by showing that enums are more verbose when solving the very same problem.
If you mean verbosity in typical code: in C++ you have to declare a lot of things twice, and even an empty class is longer in C++:
class Class {} //C# code
class Class {}; //C++
If you mean that using is longer than #include, that does not seem to be the case, if my counting is right.
If you mean type inference from C++11, auto is longer than var.
So where is C++ shorter? class A { public: void test(); }; void A::test() { } is longer for me than: class A { public void test() { } }

    Maybe some STL would save us:
    Code:
int myints[] = {32,71,12,45,26,80,53,33};
vector<int> myvector (myints, myints+8);   // 32 71 12 45 26 80 53 33
vector<int>::iterator it;

sort (myvector.begin(), myvector.end());   // 12 26 32 33 45 53 71 80
    Comparing with what C# offers:

    Code:
var myints = new[] {32,71,12,45,26,80,53,33};
//default (using generics):
Array.Sort( myints );
//Linq
// Query for ascending sort.
var sortAscendingQuery =
    from item in myints orderby item select item;
And let's not forget: because you have a GC, you don't have to write destructors in most cases, right? Don't you enjoy writing them?

In the end you don't appear consistent in your values: you seem willing to accept an inefficient on-disk data format (like ini files or XML), which costs you many times more in slowdowns, rather than use something that reaches 50% of native speed. And if you do real-world programming, you will notice that disk operations are orders of magnitude slower than Mono could ever be. Mono may be two times slower, and it may use a few MB. How many? Not so many...

A hello world in Mono? A few KB, under 10. A hello world in C++? Give me your numbers. Multiply that across many applications and you will think twice about the saved space... since you are so conscious about saving disk space, C++ binaries are a great place to start shedding it.

"I have to port everything I use including the VM and all libraries which might be a subset of .NET" Really? Do you have to port the VM?
It sounds like you are porting the compiler every time you change computers. I pity you for the life you have... but most people work with mainstream compilers and platforms.
I warmly recommend Mono/.Net.
Whether you use Ubuntu/Suse/Windows XP/Vista/7, you will have support for a big enough set to work at least with files, the console and the web to make tools. You don't need to recompile a thing; it is all done by Microsoft and Xamarin engineers and other contributors.
    Last edited by ciplogic; 29 February 2012, 08:27 PM.



  • Togga
    replied
    Originally posted by BlackStar View Post
    Not necessarily. You seem to be confusing the .Net BCL with the .Net VM.
No. I'm not confused by this. I know I have to port everything I use, including the VM and all the libraries, which might be a subset of .NET. The VM overhead is just not justifiable to begin with for some syntax sugar that I can get anyway if needed. The .NET platform itself does not have anything of value to offer here, be it 2MB or 20MB. Besides this, we have already seen that C# is more verbose than necessary for GL calls. So syntax doesn't even appear to be its strong side. It's then a lose-lose situation.



  • BlackStar
    replied
    Originally posted by XorEaxEax View Post
I'm not really following this; you can create a macro which reports any glGetError codes during runtime, together with the source file and line number where the error took place. You can also make it so that the checking can be disabled, and thus has no performance impact on final builds. Something like this:

    Code:
    #ifndef _GLERROR_H_
    #define _GLERROR_H_
    
    #include <stdio.h>
    #include <GL/gl.h>
    
    #define _GLERROR_ENABLED_ // comment out to disable glGetError() checking
    
    #ifdef _GLERROR_ENABLED_
#define GLERROR() do { GLenum err = glGetError(); if(err != GL_NO_ERROR) printf("GLError:%d in file:%s at line:%d\n",(int)err,__FILE__,__LINE__); } while(0)
    #else
    #define GLERROR()
    #endif
    
    #endif
This would catch faulty parameters during runtime, giving you the file and line in which they occurred, and not force you to wait for some tester to report some texture bug. Granted, it's nicer if the compiler catches the error, but it's not something I would switch languages for.
    (a) This relies on the programmer to insert GLERROR() calls at proper places.
    (b) This only detects errors at runtime, even though they could be detected by the compiler.

    These issues might not matter for trivial applications. But what if you are developing a non-trivial game and the faulty code is only executed near the end of level 5, when the player tries to enter a non-essential secret area? It might be weeks before a tester encounters and reports the issue!

Bugs like this do happen and do go unnoticed (ever played any of the Elder Scrolls series?). The larger the application, the higher the chance of obscure bugs, and the higher the value of compile-time error checking. This is precisely why we moved from assembly to C, to C++, and to other things.

Btw, C#/OpenTK not only detects errors at compile-time, it also inserts GL.GetError() calls automatically when running in debug mode. This is a huge safety net that you can't really appreciate before you actually use it. OpenGL is much smoother in C# than in any other language I've ever used.
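The compile-time half of that argument can be sketched even in plain C++11, with scoped enums standing in for OpenTK's typed wrappers. This is a hypothetical illustration, not OpenTK's actual API: wrapping the raw GLenum constants in distinct enum types turns a mixed-up parameter into a compile error instead of a runtime GL_INVALID_ENUM.

```cpp
#include <cassert>

// Hypothetical typed wrappers over raw GLenum constants.
enum class TextureTarget : unsigned { Texture2D = 0x0DE1 };
enum class BufferTarget  : unsigned { ArrayBuffer = 0x8892 };

// In a real binding this would forward to glBindTexture; here it just
// returns the raw constant so the sketch stays self-contained.
unsigned bindTexture(TextureTarget target) {
    return static_cast<unsigned>(target);
}

// bindTexture(BufferTarget::ArrayBuffer);  // rejected by the compiler
```

With raw GLenum parameters, the commented-out call would compile fine and only fail (silently or with GL_INVALID_ENUM) at runtime, which is exactly the class of bug being discussed.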



  • XorEaxEax
    replied
    Originally posted by BlackStar View Post
    Waiting for some poor tester to report a random texture corruption bug a few days later is just not the same.
I'm not really following this; you can create a macro which reports any glGetError codes during runtime, together with the source file and line number where the error took place. You can also make it so that the checking can be disabled, and thus has no performance impact on final builds. Something like this:

    Code:
    #ifndef _GLERROR_H_
    #define _GLERROR_H_
    
    #include <stdio.h>
    #include <GL/gl.h>
    
    #define _GLERROR_ENABLED_ // comment out to disable glGetError() checking
    
    #ifdef _GLERROR_ENABLED_
#define GLERROR() do { GLenum err = glGetError(); if(err != GL_NO_ERROR) printf("GLError:%d in file:%s at line:%d\n",(int)err,__FILE__,__LINE__); } while(0)
    #else
    #define GLERROR()
    #endif
    
    #endif
This would catch faulty parameters during runtime, giving you the file and line in which they occurred, and not force you to wait for some tester to report some texture bug. Granted, it's nicer if the compiler catches the error, but it's not something I would switch languages for.



  • ciplogic
    replied
    Originally posted by Togga View Post
Mono (and .NET) is just unnecessary baggage which doesn't bring me anything of value. If I were to target client applications on the web (a possibly valid case for a VM), Java/JavaScript is the king out there and HTML5 is the future.
    Syntactic sugar doesn't solve any real world problems.
You're right (on syntax sugar): I do think it doesn't bring you anything of value. The people who defend C#/Miguel/whatever are somewhat tired of the myths that grow big for emotional reasons (like: it is a Microsoft project, or other matters like that). We don't hold sacred either C# or its VM design. In many instances we would use C/C++. If we need to make an application with an updater, it is easier to take the already-made WebClient class, combine it with the Windows Forms framework and ask the user: a new update is available, do you want to download it? And then do the downloading of the upgrade on a separate thread. If you target the original Windows XP or later, .Net is already installed, and an application like that would be just a few KB of extra functionality. Doing it in C++ would be much harder. How would you download a file from the internet to disk, given the URL? (I honestly don't know; call wget!?)
As for JS and its powers, I think we also agree that it gives the most desirable experience. In fact JS/Html5 is an example of something mature that is still not so consistent in performance and support; Mono/.Net fares much better in that regard. If you target a game, should it use CSS that is accelerated by IE9, or should you use WebGL? Should you write a computational kernel in JS for a real-time game? This is where JS still suffers (compared with Mono, the Firefox implementation does not yet have a fast GC, though soon it will; even so, it gets better and better).
JS is part of QML, which is extra baggage too, yet it works well enough to span from Symbian Nokia phones to desktop applications.
In the end, syntax sugar is just about abstractions. When using smart pointers, boost::shared_ptr (if I recall it right) increments/decrements the reference count so the user never loses track of it. For skilled programmers that may appear unnecessary, but when you work with multiple frameworks it is really hard to track all the references across the application. Many syntax sugars do solve real-world problems: generics/templates prevent some typical mistakes that users make. A (C-style) for loop is sometimes reversed by the compiler, some subexpressions are hoisted out of the loop (google Loop Invariant Code Motion), and the loop itself may be unrolled to ease auto-vectorization and improve branch prediction on a specific CPU. There are abstractions we are comfortable with (the virtual ... = 0 syntax, where a method can be virtual and abstract, like: virtual void Drive() = 0 { TurnWheel(); }), or multiple inheritance, which other languages do not accept. Which is better?
As for me it is hard to choose, and everyone thinks differently about what is better, so it is hard to judge from outside your particular skill set what is great.
The last thing: the "tail" MSIL instruction was just a capability of the CLR that made no sense for the C++ world, or for the C# world (maybe a bit for Linq, but that is another matter), as it is a functional-language feature. "tail" works just like the "register" keyword in C, which hints the compiler to try to keep a variable in a register.
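The reference-counting "sugar" above can be shown in a few lines. This is a minimal sketch using std::shared_ptr, the standardized descendant of boost's shared_ptr: the count is bumped on copy and dropped on destruction, so nobody tracks it by hand.

```cpp
#include <cassert>
#include <memory>

// Copying a shared_ptr increments the shared reference count.
long count_after_copy() {
    std::shared_ptr<int> a = std::make_shared<int>(42);
    std::shared_ptr<int> b = a;      // copy: count goes 1 -> 2
    return a.use_count();
}

// When a copy goes out of scope, the count drops automatically.
long count_after_scope() {
    std::shared_ptr<int> a = std::make_shared<int>(42);
    {
        std::shared_ptr<int> b = a;  // count is 2 inside this scope
    }                                // b destroyed: count back to 1
    return a.use_count();
}
```

The same bookkeeping done manually across several frameworks is exactly the error-prone work the abstraction removes.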
    Last edited by ciplogic; 29 February 2012, 09:48 AM.



  • BlackStar
    replied
    Originally posted by XorEaxEax View Post
Well, you could put the glGetError() call in a macro which expands into testing/logging the error (or not) based on a defined constant, and could thus be disabled for production builds where you wouldn't want the performance loss.
That's still runtime checking, not compile-time checking. If I pass the wrong parameter to a function, I want to rely on the compiler to stop me. Waiting for some poor tester to report a random texture corruption bug a few days later is just not the same.



  • BlackStar
    replied
    Originally posted by XorEaxEax
    Well, as I said I've just looked over the language as of yet and made some short snippets to get a feel of it so I'm really not qualified to make that judgement, but for me personally the big reason I found the language interesting was that it had built-in concurrency properties (goroutines, channels).
    Well, I've been toying around with Go these past few days and it has morphed into a pretty interesting little language.

The lack of object hierarchies and implicit interface implementations are surprisingly nice features! I've long campaigned in favor of implicit interfaces (if my object implements Foo(), I should be able to pass it to a method accepting IFoo), and the rationale in Go makes a lot of sense (more modular code). Object composition is much better aligned with game code than deep object hierarchies, so that's a big win too.

    The only downside is that there's no MSIL compiler yet, so I can't easily combine it with my existing codebase (~1MLoC).

    Originally posted by Togga View Post
Yes. As with .NET, it will not be more stable than the underlying libraries, which put restrictions on the system. My point was that this language does not need the overhead of a virtual machine to get all its features, and it works wherever C works. There is a big difference between getting the gobject library to compile on a new platform and bringing .NET over there.
    Not necessarily. You seem to be confusing the .Net BCL with the .Net VM. They are completely separate things. The .Net VM itself is written in C++, is self-contained and is quite portable - there are people who have ported it to the PSP, to Maemo and half a dozen other exotic platforms. The BCL, on the other hand, is huge. The core parts are written in C# and are portable, with some work, but other parts (like System.Windows) are inherently non-portable.

    Fortunately, you need very few things to write games: System, System.Core, System.Net and System.Xml, more or less. Add an OpenGL and an OpenAL binding and you have everything you need, in a tiny, portable package (~4MB compressed).

    It's not as if C++ doesn't have the same problem: by the time you add pthreads, tinyxml, sdl, a network library and a scripting library, you've reached the exact same overhead. This is stuff you need, one way or another - no matter the language.

    Unfortunately, most people think that writing a .Net game requires the installation of a huge VM with hundreds of megs of stuff. That's simply not true: 4MB is all you need.

    I don't have much Vala-coding experience myself but I have good experience with some applications written in Vala in terms of performance and resource consumption. See for instance Shotwell : http://yorba.org/shotwell/
    Vala is pretty nice. It's C# for people who want to use C# without the whole .Net package. This actually proves how nice a language C# really is, even if you don't like .Net itself.



  • Togga
    replied
    Originally posted by ciplogic View Post
You say that C# (I think that is what you refer to) brings only restrictions to the table.
No, I mean .NET and the CLR, MSIL and all the restrictions they impose on you, along with the baggage you have to drag with you whichever platform you choose to target.


    Originally posted by ciplogic View Post
In fact there are instructions in MSIL (for example "tail", which makes a tail call, that you cannot emit from C#, even 4.0, but which is specifically used by functional languages).
Here you mention one of them: the restriction to MSIL. Going native, you don't have any such restrictions and can get creative all the way down to the core.

    Originally posted by ciplogic View Post
    In fact every compiler/platform brings restrictions.
Sure. You just have to watch out for what gives you the fewest restrictions and/or the most possibilities. .NET, for instance, is itself not written in .NET.


    Originally posted by ciplogic View Post
    what bothers you? The idea that .Net is too big?
It just doesn't add any value for me, only restrictions and unnecessary baggage. Bringing in a VM should be a separate architectural decision for a product, not one hanging on "nice syntax" etc.


    Originally posted by ciplogic View Post
    If you mean that C++ applications have any benefit to portability, compared to Mono, I'm curious of your platform to develop in, is it Qt?
Qt is a really good framework, even though I think that a pure C API is cleaner and that they put a few too many features under their umbrella. If you separate logic from UI well enough, the UI could be anything from a script to a web page, or even be generated automatically using any framework as a backend.


    Originally posted by ciplogic View Post
Yes, a lot of them are generated and are good ones. They are not optimized, if by the "best UI" you mean that they will not set a property twice.
    I mean "UI" from a user perspective.

    Originally posted by ciplogic View Post
Did you ever look at the generated Vala code? Does it map 1:1 to C? Or by runtime overhead do you mean nothing other than GObject? And that it is a bit buggy on Windows!? Or that it doesn't have annotations in the C# sense? And it doesn't have the dynamic keyword, or PLinq.
Yes. As with .NET, it will not be more stable than the underlying libraries, which put restrictions on the system. My point was that this language does not need the overhead of a virtual machine to get all its features, and it works wherever C works. There is a big difference between getting the gobject library to compile on a new platform and bringing .NET over there.

    I don't have much Vala-coding experience myself but I have good experience with some applications written in Vala in terms of performance and resource consumption. See for instance Shotwell : http://yorba.org/shotwell/


    Originally posted by ciplogic View Post
    Should we write only binary ...
    Should we write for saving data our back-store database and not using a database? ...
I'd say be creative and go with sound experience. If in doubt, go the route with the fewest restrictions (it lets you move either way), make smart, simple designs (not inventing problems that aren't there) and smart library choices.
    Last edited by Togga; 28 February 2012, 03:59 PM.



  • Togga
    replied
    Originally posted by ciplogic View Post
    - Boo is an interpreted language if you use it with booi (and will get the "batteries" = a lot of utility libraries of .Net/Mono).
Well, this is not an option for all the reasons named in previous posts. If I want performance, I'm certainly not choosing .NET. If I want existing Python code to go faster, I'd use PyPy, or compile to native through its LLVM backend. Both are better than dragging in a VM with an in-practice non-portable framework. As I said before, when I want performance, I'll go directly native, with full control of the situation. I've yet to fail to find the batteries I need in C/C++.

    Originally posted by ciplogic View Post
    It is Python inspired
Python-inspired does not give you all the batteries Python has (like scipy/numpy). Going from one virtual machine to another doesn't add any value. Python is not my choice for performance.

    Originally posted by ciplogic View Post
    May you give a circumstance where Mono or .Net it was too slow? May you give a piece code that runs so slow, that Mono doesn't worth considering?
Mono (and .NET) is just unnecessary baggage which doesn't bring me anything of value. If I were to target client applications on the web (a possibly valid case for a VM), Java/JavaScript is the king out there and HTML5 is the future.

    Syntactic sugar doesn't solve any real world problems.

