Miguel de Icaza Calls For More Mono, C# Games


  • XorEaxEax
    replied
    Originally posted by BlackStar View Post
    Hey, an actual language discussion without a flamewar! Who would expect that on Phoronix?
    Heh, this thread just keeps on trucking, it seems, and yes, you're right, it has been uncommonly civil here, which is a very nice surprise.

    Originally posted by BlackStar View Post
    Haven't tried Go yet, but I've read its samples and I wasn't too impressed (like I was with Ocaml and its F# offspring, for instance). What is its "killer" feature that would convert people?
    Well, as I said, so far I've only looked over the language and written some short snippets to get a feel for it, so I'm really not qualified to make that judgement, but for me personally the big reason I found the language interesting is its built-in concurrency primitives (goroutines, channels).

    Originally posted by BlackStar View Post
    The compiler will happily accept both versions and the error will go unnoticed unless you call glGetError() - which most people don't do, since it kills performance.
    Well, you could put the glGetError() call in a macro that expands either to an error check/log or to nothing, based on a defined constant, so it could be disabled for production builds where you don't want the performance loss.
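    For example, a minimal sketch of such a macro (assuming a GL header is already included; the ENABLE_GL_CHECKS constant and the log_gl_error() helper are made-up names):
    Code:
    // Expands to a real glGetError() check in debug builds, to nothing extra in release builds.
    #ifdef ENABLE_GL_CHECKS
    #define GL_CHECK(call)                                        \
        do {                                                      \
            call;                                                 \
            GLenum err_ = glGetError();                           \
            if (err_ != GL_NO_ERROR)                              \
                log_gl_error(__FILE__, __LINE__, err_);           \
        } while (0)
    #else
    #define GL_CHECK(call) call
    #endif
    
    // Usage: GL_CHECK(glEnable(GL_LIGHT0));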

  • ciplogic
    replied
    Originally posted by Togga View Post
    I think this is an excellent choice. An interpreted high-level language for quick prototyping / scripting / etc., and then a portable lower-level language for performance. .NET is an environment that isn't particularly good in either of these directions. You can also use these high-level languages to auto-generate low-level code, in C for instance, and you get both customized high abstractions and performance.
    If you agree that Ruby/Python are great as interpreted languages for prototyping, I would say .NET fits that role really well:
    - Boo is an interpreted language if you use it with booi (and you get the "batteries", i.e. the many utility libraries of .NET/Mono). It is Python-inspired and has dynamic dispatch. When you get to a form you're happy with, all you have to do is disable the dynamic dispatch feature, and you get a performance boost wherever dyn-dispatch was running. Still not enough performance? Recompile the code with booc.exe.
    Still slow (e.g. on startup)? Try compiling it ahead of time, or use Mono --llvm.
    - Still slow? Use C++ and call the code via P/Invoke for that critical loop (even though you'll rarely hit a case where you need to; a minimal sketch of the native side is at the end of this post).
    You can use C# this way too, and you also have an interpreter mode, so you may feel a bit of lag now and then while developing (like 0.2-0.3 seconds to compile small or big functions, if you call a lot of code), as it sometimes JITs as you go (like here).
    In fact C# permits all of them at once: static compilation (like C++), JIT-ting, and interpreting in REPL style, where your code is interpreted but the typical code paths are JIT-ted, so as long as you stay in a small patch of code the exceptions and small JIT pauses will not pop up all the time.
    I benchmarked this once, in response to some of the myths (e.g. the slow startup of non-natively-compiled languages), as shown here (the Mono apps MonoDevelop + Banshee started in 20 seconds, the C++ ones QtCreator + Amarok in 25 seconds).
    Can you give a circumstance where Mono or .NET was too slow? Can you show a piece of code that runs so slowly that Mono isn't worth considering?
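    (To make the P/Invoke idea above concrete, a minimal sketch of the native side; the function and library names are made up, and the managed declaration is shown only as a comment:)
    Code:
    // fastloop.cpp - hypothetical native helper built as a shared library,
    // e.g.: g++ -shared -fPIC -O2 fastloop.cpp -o libfastloop.so
    #include <cstddef>
    
    extern "C" double sum_squares(const double* data, std::size_t count)
    {
        double acc = 0.0;
        for (std::size_t i = 0; i < count; ++i)
            acc += data[i] * data[i];   // the "critical loop" kept in native code
        return acc;
    }
    
    // Managed side (C#), for reference:
    // [DllImport("fastloop")] static extern double sum_squares(double[] data, UIntPtr count);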

  • ciplogic
    replied
    Originally posted by Togga View Post
    An OS is baggage only if your application targets an OS-less system, which is pretty rare today. You can often do away with dynamic libraries with a clever build system. Memory management can be replaced with a custom malloc/free on a static set of memory if you choose the C route. If you drag in the .NET baggage, all these options are off. The fact is that .NET is mostly unnecessary baggage that only brings restrictions to the table. All the languages in there are restricted to the CLR and managed mode, which unifies them to the point where syntax is their only differentiator.
    (...)
    Additional runtime environments eventually just become an additional burden when it comes to portability and restrictions. You should really choose them wisely, and type-checking of enums isn't even on the map as a reason to drag in Microsoft's second-class citizen (the .NET environment).
    You say that C# (I think that's what you're referring to) brings only restrictions to the table. In fact every compiler/platform brings restrictions. For example, there is no C++ equivalent of Ruby gems (libraries in a package manager), and .NET (Visual Studio, basically) has something primitive (NuGet). Unnecessary: do you mean RTTI (or reflection!?)? Syntax is not the only differentiator as far as I know: F# is fairly different from C# (even though both end in #), and both are different from Boo. In fact there are instructions in MSIL (for example "tail", which makes a tail call) that you cannot emit from C#, even 4.0, but which are used especially by functional languages. I can agree that C# picked up a little of the features of every language (LINQ a bit of functional programming, generics a bit of meta-programming, dynamic a bit of dynamic dispatch, and so on), and Visual Basic has converged towards it, but even so, combining it with frameworks (like WPF, WCF, etc.) gives you a way to talk to the real world that is much more than just syntax sugar over MSIL (I mean the BAML/XAML world). Of course all of those decisions involve trade-offs, some good and some bad, but overall, inside the .NET package you get a lot. The same goes for Java, or Python. Maybe it's the "batteries included" idea that strikes you. I mean, what bothers you? The idea that .NET is too big? The lowest-cost "laptop" I can find in Spain with Windows (http://www.pccomponentes.com/asus_ee...0gb_10_1_.html ) ships with a Windows that includes .NET 4 and comes with a 320 GB disk; Windows' disk usage is around 10 GB. And you compare that against a 40 MB package download, which by default includes a lot of code that other components, like Paint.NET, then don't have to download themselves, so the Paint.NET download package is just a few MB.
    In the end, I think you are partly right: additional runtime environments bring restrictions. So you may use Cygwin, and as Cygwin provides Mono, you can target all Unixes with Mono programs. If you mean that C++ applications have some portability benefit compared to Mono, I'm curious what platform you develop against - is it Qt? (Which is itself a platform, and having worked with it professionally, I can say that in some configurations it is buggier than GTK+, especially with non-standard window managers.)


    The GL problem, as I said, is easily solved with additional syntax-checking tools/scripts, and you get both the type check for each GL function and the leaner, nicer syntax of C/C++ (compared to C#). If static type checking by the compiler is your main priority, you should probably go for Ada or similar.
    (...)
    So static type checking can be done just with a plugin. You don't want that, and want supported, compiler-integrated checking instead? Have a look at Code Contracts. As OpenTK is the de facto OpenGL wrapper for C#, and gl/GL.h is the de facto one for C++, what you state is still talk that has not yet been proven.
    Are visual designers really a good way to build good UIs? I'm provoking here, but the best UIs I've seen rarely come from visual tools. Compare for instance the output of LaTeX with a manually written Word document. You can't be creative enough with visual tools, and often both you and the result get restricted by the tools. If you instead focus on the problem and the logic, you can often dynamically create a better GUI, and you get a program where the GUI structure doesn't impose any restrictions.
    Yes, a lot of UIs are generated, and good ones at that. They are not optimized, if by "best UI" you mean that they never set a property twice. Delphi/Lazarus was one of the first environments to generate a UI out of a database (the TiOPF framework). It may not be optimal, it may make a couple of extra queries, but it is functional. If you want a good-looking UI, I recommend either Flash (I'm not a fan of it, but I've seen incredible things made with it), which takes its artwork from Photoshop, or WPF. WPF supports theming, accelerated controls, and a lot of well-made things (like data binding, which is really hard to achieve without a dynamic language and good RTTI; Qt has something similar - have you tried QML, a JavaScript interpreter on top of MOC classes?).
    Restrictions in UIs are mostly the norm; you don't want to break anything, which is why you'd prefer Visual Form Inheritance to work by default! Most UIs, like OS X's, are constrained by guidelines, because users care about getting what they expect, not about how imaginative the designer is.
    Yes, and OOP is a design approach that can be used in most languages without the syntactic sugar of the "OOP languages". Likewise with "abstract programming", "functional programming", etc. If you aim for syntax, look for instance at how Vala abstracts all C calls to the GObject system and still doesn't require any runtime overhead beyond what its C library interface does (compile-time syntax sugar).
    I do not see any feature here that motivates a VM and the .NET API. You may initially have to go the extra mile in design, but that work is reusable and completely worth it.
    I think this is an excellent choice. An interpreted high-level language for quick prototyping / scripting / etc., and then a portable lower-level language for performance. .NET is an environment that isn't particularly good in either of these directions. You can also use these high-level languages to auto-generate low-level code, in C for instance, and you get both customized high abstractions and performance.
    Have you ever looked at the generated Vala code? Does it map 1:1 to C? Or by runtime overhead do you mean nothing other than GObject? And that it is a bit buggy on Windows!? Or that it doesn't have annotations in the C# sense? And it doesn't have the dynamic keyword, or PLINQ.
    "I don't see any feature"... is fine. It still proves the point: abstractions often come at a price. Sometimes it is small, sometimes it makes the language specification, and understanding it, really huge (like C++). I mean, using a smartphone forced us to think of phones in terms of "gestures", the "post-PC era" and so on, which is really far, far away from the phone that was barely digital (in the past I caught my grandma using a rotary dial to call numbers).
    And I do think that by any standard the basic phone is still good enough for many uses, and a smartphone can sometimes be too complex to grasp, like figuring out how to "install an application from the Marketplace". It makes no sense in many ways, at least for the basics. But once you get used to it, you simply cannot go back.
    Should we write only binary formats, since INI or XML files are too slow to parse and are abstractions? Certainly binary files are faster. Should we write our own back-store for saving data instead of using a database? Consider that most databases use inter-process communication, and the ones that don't are somewhat slow; even the performant ones use a query engine that sometimes relies on a JIT (I'm thinking of the Oracle query planner, which uses a Java VM with HotSpot Server to optimize and dynamically generate query conditions).
    In the end, should we all use Windows 95/NT 4.0!? They are small (I remember Win95 was around 40 MB on disk, Win95 OSR2 around 120 MB, NT 4 around 100+ MB), they would start instantly on today's hardware, there was 64-bit support (NT only), and there were no multiple runtimes to support, so no headaches. They did not have a (functional) browser control, so no hassles with incompatible HTML. In fact we would not need ClearType, 3D Aero, or 512 MB of OS memory usage, when all three of those OSes would work just fine with 32 MB, and 64 MB was a high-end machine of their time.
    Last edited by ciplogic; 27 February 2012, 06:57 PM.

  • Togga
    replied
    Originally posted by ciplogic View Post
    And in the end you drag in a lot of things either way: the OS libraries are in a lot of cases a lot of "baggage" - memory management, virtual memory allocation, dynamic library mapping.
    An OS is baggage only if your application targets an OS-less system, which is pretty rare today. You can often do away with dynamic libraries with a clever build system. Memory management can be replaced with a custom malloc/free on a static set of memory if you choose the C route. If you drag in the .NET baggage, all these options are off. The fact is that .NET is mostly unnecessary baggage that only brings restrictions to the table. All the languages in there are restricted to the CLR and managed mode, which unifies them to the point where syntax is their only differentiator.
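    (As a concrete illustration of the custom malloc/free idea, a minimal fixed-arena sketch - not production code: no per-allocation free, no thread safety, names are illustrative:)
    Code:
    #include <cstddef>
    #include <cstdint>
    
    static std::uint8_t g_pool[64 * 1024];   // the static set of memory
    static std::size_t  g_offset = 0;
    
    // Bump allocator over the static pool; align must be a power of two.
    void* arena_alloc(std::size_t size, std::size_t align = alignof(std::max_align_t))
    {
        std::size_t p = (g_offset + align - 1) & ~(align - 1);
        if (p + size > sizeof(g_pool))
            return nullptr;                  // pool exhausted
        g_offset = p + size;
        return g_pool + p;
    }
    
    void arena_reset() { g_offset = 0; }     // "free" everything at once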

    If you take a practical look at the systems out there, almost all of them provide a C interface and a C standard library. For portability this is the obvious way to go. This buys you the ability to use new instruction sets etc. when new types of hardware arrive. Of course C will not take you all the way, since the abstractions will not automatically be optimized for different types of hardware (multiple cores, vector instructions, SPEs etc.). Here you need help from higher-order programs that can generate optimal solutions, or some other clever approach like compile-time optimization (see for instance ATLAS).

    Additional runtime environments eventually just become an additional burden when it comes to portability and restrictions. You should really choose them wisely, and type-checking of enums isn't even on the map as a reason to drag in Microsoft's second-class citizen (the .NET environment).

    Originally posted by ciplogic View Post
    but practice shows that the C++ headers of OpenGL are not that "competent"
    The GL problem, as I said, is easily solved with additional syntax-checking tools/scripts, and you get both the type check for each GL function and the leaner, nicer syntax of C/C++ (compared to C#). If static type checking by the compiler is your main priority, you should probably go for Ada or similar.

    Originally posted by ciplogic View Post
    far as I used there are just two visual designers in C++ world that would match a decent usage
    Are visual designers really a good way to build good UIs? I'm provoking here, but the best UIs I've seen rarely come from visual tools. Compare for instance the output of LaTeX with a manually written Word document. You can't be creative enough with visual tools, and often both you and the result get restricted by the tools. If you instead focus on the problem and the logic, you can often dynamically create a better GUI, and you get a program where the GUI structure doesn't impose any restrictions.


    Originally posted by ciplogic View Post
    At the end, people lived before OOP
    Yes, and OOP is a design approach that can be used in most languages without the syntactic sugar of the "OOP languages". Likewise with "abstract programming", "functional programming", etc. If you aim for syntax, look for instance at how Vala abstracts all C calls to the GObject system and still doesn't require any runtime overhead beyond what its C library interface does (compile-time syntax sugar).

    Originally posted by ciplogic View Post
    concepts (in C# I am used to writing generics + constraints, which are really great to use), full RTTI, (maybe) a GC, dynamic dispatch.
    I do not see any feature here that motivates a VM and the .NET API. You may initially have to go the extra mile in design, but that work is reusable and completely worth it.

    Originally posted by ciplogic View Post
    I used Ruby, and if it were only 20-30% slower than .NET I would use it straight away (there is such a language, named Mirah, which I plan to use, but I sometimes have quirks compiling it). If it's a question of which is the better language, Ruby would likely be it, maybe Python.
    I think this is an excellent choice. An interpreted high-level language for quick prototyping / scripting / etc., and then a portable lower-level language for performance. .NET is an environment that isn't particularly good in either of these directions. You can also use these high-level languages to auto-generate low-level code, in C for instance, and you get both customized high abstractions and performance.
    Last edited by Togga; 27 February 2012, 05:27 PM. Reason: various

  • BlackStar
    replied
    I think C# is a decent language to use, and in some ways it has only been matched by some of the C++11 changes. When I go to C++ I notice features that are missing: concepts (in C# I am used to writing generics + constraints, which are really great to use), full RTTI, (maybe) a GC, dynamic dispatch. It would also not hurt if C++ code compiled faster, so big projects could iterate much more often, or to have a more or less standardized UI (maybe on top of Qt). I don't understand the idea that just because a loop in C++ runs (supposedly every time) 2 times faster than in C# (or Java), all code should be switched to C++, when most applications would not notice the difference.
    (R)amen!

    Do note that C++11 paves the way for full GC implementations in later versions. Right now it says that orphan objects may be automatically reclaimed by a GC (left to the implementation) and a future iteration will specify the exact GC behavior.

    I would also love to see a proper module system. This was discussed heavily for C++11 but, in the end, they decided to postpone it for a future version - it's a complicated SoB to design. Not only would it reduce the insanity of the compiler, which compiles every single templated class n times (once in every translation unit that uses it), but it would also improve code reuse, interoperability, modularity and compilation times. The lack of a module system is by far the biggest issue with C++ right now.
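    (One partial workaround that already exists, without modules, is C++11's explicit instantiation declarations; a small sketch with made-up types, telling every other translation unit not to re-instantiate a template:)
    Code:
    // widgets.h
    #include <vector>
    struct Widget { int id; };
    
    // Every includer is told NOT to instantiate std::vector<Widget> itself...
    extern template class std::vector<Widget>;
    
    // widgets.cpp
    #include "widgets.h"
    // ...and it is instantiated exactly once, here.
    template class std::vector<Widget>;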

    In the end, people lived before OOP, so why do we need virtual tables with pointers to be decorated with classes? We lived before there were any graphics drivers, when we wrote pixels directly into video memory, or lived in the "uses BGI;" world and flipped parts of the screen to simulate animation. We lived even before C with "types", and we were using assembly. Assembly has close to no baggage (http://flatassembler.net/download.php )
    Not only that, but I am old enough to remember the same arguments being used against C++: Who needs C++ when you have C? C++ is bloated crap (500 KB for a simple hello-world program!). C produces faster code. C compiles faster. It doesn't have an ABI. You can always write OO code with C structs and function pointers, so why drag in the whole C++ garbage just for that? Besides, virtual functions are too slow. Blah blah blah.

    Funny how these people are always proven wrong in the end.

  • ciplogic
    replied
    Originally posted by Togga View Post
    Your C# code isn't particularly beautiful, and dragging in the whole .NET framework for code syntax checks is incompetent at best. This example is easy to achieve with a little class in C++ if you should ever want to.
    So your point is that it would be easy to... but practice shows that the C++ headers for OpenGL are not that "competent". And in the end you drag in a lot of things either way: the OS libraries are in a lot of cases a lot of "baggage" - memory management, virtual memory allocation, dynamic library mapping (sometimes even remapping of one library to another; that's why there is a WinSxS folder, and it is the biggest folder in your modern version of Windows). The RTTI (which is minimal at best) can be simulated fairly easily just with some macros (MFC did this), compiler extensions (ex-Borland, now Embarcadero, did that), or a meta-object compiler (Qt). The same goes for a lot of other things that people take for granted in the VM world and that are fairly lacking on the C++ side: good static-analysis tools (Eclipse and NetBeans have primitive yet functional ones for Java, and if you want a better tool you'd use IDEA; the C# world has ReSharper, CodeRush and JustCode) and great designers (as far as I have used them, there are just two visual designers in the C++ world that would match decent usage, maybe three: Qt Creator, C++Builder and wxDesigner!?). In the .NET/Java world they are much more consolidated, and you know that people's WPF skill set can be reused with Silverlight to some extent.
    In the end, people lived before OOP, so why do we need virtual tables with pointers to be decorated with classes? We lived before there were any graphics drivers, when we wrote pixels directly into video memory, or lived in the "uses BGI;" world and flipped parts of the screen to simulate animation. We lived even before C with "types", and we were using assembly. Assembly has close to no baggage (http://flatassembler.net/download.php )
    I think C# is a decent language to use, and in some ways it has only been matched by some of the C++11 changes. When I go to C++ I notice features that are missing: concepts (in C# I am used to writing generics + constraints, which are really great to use), full RTTI, (maybe) a GC, dynamic dispatch. It would also not hurt if C++ code compiled faster, so big projects could iterate much more often, or to have a more or less standardized UI (maybe on top of Qt). I don't understand the idea that just because a loop in C++ runs (supposedly every time) 2 times faster than in C# (or Java), all code should be switched to C++, when most applications would not notice the difference.
    Angry Birds on Android was (at least originally) written in Java and was good enough for many (at least from the moment a JIT became part of the Android platform).
    I used Ruby, and if it were only 20-30% slower than .NET I would use it straight away (there is such a language, named Mirah, which I plan to use, but I sometimes have quirks compiling it). If it's a question of which is the better language, Ruby would likely be it, maybe Python. C++ can do some things that C# can, and most things that Ruby does, C# does too, but Ruby does them with compactness and usability in mind. This is where C++ has always suffered: made by engineers, for engineers. Ruby was not made by a programmer, and it shows!
    Last edited by ciplogic; 27 February 2012, 04:45 AM.

  • Togga
    replied
    Originally posted by BlackStar View Post
    Btw, there's a very interesting project that uses runtime code generation to bridge the gap between C++ and Mono applications. Subclassing and instantiating C++ classes directly from IronPython? Win!
    That's no win at all. I can't see any purpose .NET would fill in this context. Just lots of unnecessary baggage.


    Originally posted by BlackStar View Post
    1. C++ enums do not introduce a namespace.
    Code:
    enum Foo { Bar };
    This is accessed as "Bar", not "Foo::Bar" which is a nightmare for large APIs with several thousand enums - like OpenGL.
    Code:
    namespace xxyy {
        enum zzz {
            bbb
        };
    }
    is accessed as xxyy::bbb.
    Code:
    class C {
        enum bbb { ccc };
    };
    This is generally a non-issue since you can always do namespacing with prefixes. And you can make your code checkers enforce this.

    Code:
    enum zzz {
       zzz_bbb = 10
    };
    Originally posted by BlackStar View Post
    // C#: void Enable(EnableCap)
    for (int i = 0; i < 8; i++) {
        GL.Enable(EnableCap.Light0 + i); // beautiful
    }
    Your C# code isn't particularly beautiful, and dragging in the whole .NET framework for code syntax checks is incompetent at best. This example is easy to achieve with a little class in C++ if you should ever want to.

  • BlackStar
    replied
    Originally posted by Togga View Post
    Interesting. Are you talking about boost.python here? Does this work well even across compilers?

    I prefer to interface Python with C, since the C++ binary format varies between compilers (the code behind the C interface could optionally be C++, though).
    I haven't tried boost.python because their documentation lists something like Python 2.2 as supported with no mention of modern versions. What I do is extern "C" the public API and consume that instead.
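    As a rough sketch of that pattern (hypothetical names; the C++ class stays hidden behind a flat C API that Python's ctypes - or any FFI - can load):
    Code:
    // counter_api.cpp - built as a shared library; only the extern "C" symbols are public.
    #include <cstdint>
    
    namespace {
    class Counter {
    public:
        void add(std::int64_t v) { total_ += v; }
        std::int64_t total() const { return total_; }
    private:
        std::int64_t total_ = 0;
    };
    }
    
    extern "C" {
        void* counter_create()                       { return new Counter(); }
        void  counter_destroy(void* c)               { delete static_cast<Counter*>(c); }
        void  counter_add(void* c, std::int64_t v)   { static_cast<Counter*>(c)->add(v); }
        std::int64_t counter_total(void* c)          { return static_cast<Counter*>(c)->total(); }
    }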

    Btw, there's a very interesting project that uses runtime code generation to bridge the gap between C++ and Mono applications. Subclassing and instantiating C++ classes directly from IronPython? Win!

    Not really: given that you use enums instead of defines, they can be strictly checked at compile time in both C++ and C.
    If only it were that simple.

    1. C++ enums do not introduce a namespace.
    Code:
    enum Foo { Bar };
    This is accessed as "Bar", not "Foo::Bar" which is a nightmare for large APIs with several thousand enums - like OpenGL.

    2. The usual workaround for #1 is to embed an anonymous enum into a struct:
    Code:
    struct Foo { enum { Bar }; };
    It is now legal to type "Foo::Bar", but you instantly lose any compile-time type checking - you can only access Foo::Bar through an implicit cast-to-int.

    3. The workaround for #2 is to use a named enum instead:
    Code:
    struct Foo { enum Values { Bar }; };
    This works with one caveat: it doesn't cover APIs with enum arithmetic (i.e. most of them). OpenGL, for instance:
    Code:
    // original: void glEnable(int)
    for (int i = 0; i < 8; i++) {
        glEnable(GL_LIGHT0 + i); // no type-checking
    }
    
    // C++ enums: void Enable(EnableCap::Values)
    for (int i = 0; i < 8; i++) {
        GL::Enable(EnableCap::Light0 + i); // error: cannot convert int to EnableCap::Values implicitly
        GL::Enable((EnableCap::Values)(EnableCap::Light0 + i)); // works, but ugly and not discoverable
    }
    
    // C#: void Enable(EnableCap)
    for (int i = 0; i < 8; i++) {
        GL.Enable(EnableCap.Light0 + i); // beautiful
    }
    4. The workaround for #3 is to introduce a templated type that supports the operations we need without sacrificing type-safety and readability. The downside? (a) Need to write more code and (b) enum values are not constants, but are allocated on the stack instead. A good optimizing compiler should be able to eliminate the performance penalty, but we are now entering the dark magic zone - and all this just to get a clean, type-safe API design.
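    A minimal sketch of that workaround (EnableCap, Light0 and GL::Enable are illustrative names, not a real binding; with C++11 constexpr the values can even stay compile-time constants):
    Code:
    // A thin wrapper keeps the type identity but allows the integer arithmetic
    // OpenGL-style APIs need; plain ints are still rejected at compile time.
    template <typename Tag>
    class TypedEnum {
    public:
        constexpr explicit TypedEnum(int v) : value_(v) {}
        constexpr TypedEnum operator+(int offset) const { return TypedEnum(value_ + offset); }
        constexpr int raw() const { return value_; }
    private:
        int value_;
    };
    
    struct EnableCapTag {};
    typedef TypedEnum<EnableCapTag> EnableCap;
    constexpr EnableCap Light0(0x4000);
    
    namespace GL { inline void Enable(EnableCap cap) { /* glEnable(cap.raw()); */ } }
    
    void enable_lights()
    {
        for (int i = 0; i < 8; ++i)
            GL::Enable(Light0 + i);   // arithmetic works; GL::Enable(42) would not compile
    }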

    In fact, there was a boost discussion on this very issue, looking for potential solutions. C++11 solves most of these issues, but the compiler support just isn't there yet.

  • Togga
    replied
    Originally posted by BlackStar View Post
    C++ with Python is pretty awesome, indeed.
    Interesting. Are you talking about boost.python here? Does this work well even across compilers?

    I prefer to interface Python with C, since the C++ binary format varies between compilers (the code behind the C interface could optionally be C++, though).


    Originally posted by BlackStar View Post
    This is actually a case where C# is strictly superior to C++ both in performance and functionality. OpenGL code is a joy to write in C# (just try it!), which is kind of important for game programming.
    Not really: given that you use enums instead of defines, they can be strictly checked at compile time in both C++ and C. This is definitely not a scenario for which you drag a .NET dependency into your project.

  • Togga
    replied
    Originally posted by ciplogic View Post
    They are slow most of the time; they put a lot of pressure on the memory allocator (whether it is a GC or not),
    Okay. It was not obvious that you were talking about a specific string helper object implementation; strings in general are pretty common and hard to avoid. The C++11 STL should be pretty decent now with move semantics, and if you reserve memory at construction time when you know the string will be large.
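    For example (a small illustrative sketch, assuming you roughly know the final size up front):
    Code:
    #include <string>
    #include <vector>
    
    std::string build_report(const std::vector<std::string>& lines)
    {
        std::string out;
        out.reserve(lines.size() * 80);   // reserve up front to avoid repeated reallocations
        for (const auto& line : lines) {
            out += line;
            out += '\n';
        }
        return out;   // C++11: moved (or elided), not deep-copied
    }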
