Miguel de Icaza Calls For More Mono, C# Games


  • Originally posted by ciplogic View Post
    And at the end you drag a lot of things either way: the OS libraries are in a lot of cases a lot of "baggage". Memory management, virtual memory allocation, dynamic library mapping.
    An OS is baggage only if your application targets an OS-less system, which is pretty rare today. You can often do away with dynamic libraries with a clever build system. Memory management can be replaced with a custom malloc/free over a static pool of memory if you choose the C route. If you drag in the .NET baggage, all these options are off. The fact is that .NET is mostly unnecessary baggage that only brings restrictions to the table. All languages in there are restricted to the CLR and managed mode, which unifies them to the point where syntax is their only differentiator.

    If you take a practical look at the systems out there, almost all of them provide a C interface and a C standard library. For portability this is the obvious way to go. It lets you use new instruction sets and the like when new types of hardware arrive. Of course C will not take you all the way, since the abstractions will not automatically be optimized for different kinds of hardware (multiple cores, vector instructions, SPEs, etc.). Here you need to help out with higher-order programs that can generate optimal solutions, or find some other clever approach such as compile-time optimization (see for instance ATLAS).

    Additional runtime environments will eventually just be an additional burden when it comes to portability and restrictions. You should choose them wisely, and type-checking of enums isn't nearly reason enough to drag in Microsoft's second-class citizen (the .NET environment).

    Originally posted by ciplogic View Post
    but practice shows that the C++ headers of OpenGL are not that "competent"
    The GL problem, as I said, is easily solved with additional syntax-checking tools/scripts, and you get both the type check for each GL function and the leaner, nicer syntax of C/C++ (compared to C#). If static type checking by the compiler is your main priority, you should probably go for Ada or similar.

    Originally posted by ciplogic View Post
    far as I used there are just two visual designers in C++ world that would match a decent usage
    Are visual designers really a good way to make good UIs? I'm being provocative here, but the best UIs I've seen rarely come from visual tools. Compare, for instance, the output of LaTeX with a manually formatted Word document. You can't be creative enough with visual tools, and often both you and the design get restricted by the tool. If you instead focus on the problem and the logic, you can often create a better GUI dynamically, and you have a program where the GUI structure doesn't impose any restrictions.


    Originally posted by ciplogic View Post
    At the end, people lived before OOP
    Yes, and OOP is a way of designing that can be done in most languages without the syntactic sugar of the "OOP languages". Likewise with "abstract programming", "functional programming", etc. If syntax is your aim, look for instance at how Vala abstracts all C calls to the GObject system and still doesn't require any runtime overhead beyond what its C library interface does (compile-time syntax sugar).

    Originally posted by ciplogic View Post
    concepts (in C# I am used to write Generics + constraints, are really great to use), full RTTI, (maybe) a GC, dynamic dispatch.
    I do not see any feature here that motivates a VM and the .NET API. You may initially have to go the extra mile in design, but that work is reusable and completely worth it.

    Originally posted by ciplogic View Post
    I used Ruby, and if it were only 20-30% slower than .NET I would use it straight away (there is such a thing, and I plan to use it; it is named Mirah, but I sometimes have quirks compiling it). If it comes to a better language, Ruby would likely be it, maybe Python.
    I think this is an excellent choice: an interpreted high-level language for quick prototyping/scripting/etc., and then a portable lower-level language for performance. .NET is an environment that isn't particularly good in either of these directions. You can also use these high-level languages to auto-generate low-level code, in C for instance, and you get both customized high-level abstractions and performance.
    Last edited by Togga; 27 February 2012, 05:27 PM. Reason: various



    • Originally posted by Togga View Post
      An OS is baggage only if your application targets an OS-less system, which is pretty rare today. You can often do away with dynamic libraries with a clever build system. Memory management can be replaced with a custom malloc/free over a static pool of memory if you choose the C route. If you drag in the .NET baggage, all these options are off. The fact is that .NET is mostly unnecessary baggage that only brings restrictions to the table. All languages in there are restricted to the CLR and managed mode, which unifies them to the point where syntax is their only differentiator.
      (...)
      Additional runtime environments will eventually just be an additional burden when it comes to portability and restrictions. You should choose them wisely, and type-checking of enums isn't nearly reason enough to drag in Microsoft's second-class citizen (the .NET environment).
      You say that C# (I think that's what you're referring to) brings only restrictions to the table. In fact, every compiler/platform brings restrictions. For example, there is no C++ equivalent of Ruby gems (libraries in a package manager), and .NET (Visual Studio, basically) has something primitive (NuGet). Unnecessary? Do you mean RTTI (or reflection!?)? Syntax is not the only differentiator as far as I know: F# is fairly different from C# (even though both end with #), and both are different from Boo. In fact there are instructions in MSIL (for example "tail", which makes a tail call, that you cannot reach from C#, even 4.0, but which is especially used by functional languages). I can agree that C# took a little of every language's features (LINQ a bit of functional programming, generics a bit of meta-programming, dynamic a bit of dynamic dispatch, and so on), and Visual Basic has converged with it, but even so, combining it with frameworks (like WPF, WCF, etc.) gives you a way to talk to the real world that is much more than just syntax sugar over MSIL (I mean the BAML/XAML world). Of course all of those decisions involve trade-offs, some good, some bad, but overall, inside the .NET package you have a lot to work with. Same in Java, or in Python. Maybe the idea of "batteries included" is what strikes you. I mean, what bothers you? The idea that .NET is too big? The lowest-cost "laptop" I could find in Spain with Windows (http://www.pccomponentes.com/asus_ee...0gb_10_1_.html) comes with a Windows that includes .NET 4, and it has a 320 GB disk bundled. Windows' disk usage: around 10 GB. And you compare that with a 40 MB package download, which includes by default a lot of code that other components, like Paint.NET, then don't have to ship. So the Paint.NET download package is just a few MB.
      At the end, I think that in part you are right: additional runtime environments bring restrictions. So you might use Cygwin, and since Cygwin provides Mono, you can target all Unixes with Mono programs. If you mean that C++ applications have any portability benefit compared to Mono, I'm curious which platform you develop on. Is it Qt? (Which is itself a platform, and having worked with it professionally, I can say that in some configurations it is buggier than GTK+, especially with non-standard window managers.)


      The GL problem, as I said, is easily solved with additional syntax-checking tools/scripts, and you get both the type check for each GL function and the leaner, nicer syntax of C/C++ (compared to C#). If static type checking by the compiler is your main priority, you should probably go for Ada or similar.
      (...)
      Static type checking of the GL calls can be done with just a plugin. Or don't you want that, and want supported, compiler-integrated checking instead? Then look at Code Contracts. As OpenTK is the de facto OpenGL wrapper for C#, and gl/GL.h is the de facto one for C++, what you state is still talk that has not been proven yet.
      Are visual designers really a good way to make good UIs? I'm being provocative here, but the best UIs I've seen rarely come from visual tools. Compare, for instance, the output of LaTeX with a manually formatted Word document. You can't be creative enough with visual tools, and often both you and the design get restricted by the tool. If you instead focus on the problem and the logic, you can often create a better GUI dynamically, and you have a program where the GUI structure doesn't impose any restrictions.
      Yes, a lot of UIs are generated and are good ones. They are not optimized, if by the "best UI" you mean one that never sets a property twice. Delphi/Lazarus was one of the first to generate UIs out of a database (the TiOPF framework). The result may not be optimal, it can make two extra queries, but it is functional. If you want a good-looking UI, I recommend either Flash (I'm not a fan of it, but I have seen incredible things made with it), which takes its designs straight out of Photoshop, or WPF. WPF supports theming, accelerated controls, and a lot of things that are well made (like data binding, which is really hard to achieve without a dynamic language and good RTTI; Qt has something similar: did you try QML, a JavaScript interpreter on top of MOC classes?).
      Restrictions in UIs are mostly the norm: you don't want to break anything, which is why you would prefer Visual Form Inheritance to work by default! Most UIs, like OS X's, are restricted by style guidelines, because users care about getting what they expect, not about how imaginative the designer is.
      Yes, and OOP is a way of designing that can be done in most languages without the syntactic sugar of the "OOP languages". Likewise with "abstract programming", "functional programming", etc. If syntax is your aim, look for instance at how Vala abstracts all C calls to the GObject system and still doesn't require any runtime overhead beyond what its C library interface does (compile-time syntax sugar).
      I do not see any feature here that motivates a VM and the .NET API. You may initially have to go the extra mile in design, but that work is reusable and completely worth it.
      I think this is an excellent choice: an interpreted high-level language for quick prototyping/scripting/etc., and then a portable lower-level language for performance. .NET is an environment that isn't particularly good in either of these directions. You can also use these high-level languages to auto-generate low-level code, in C for instance, and you get both customized high-level abstractions and performance.
      Did you ever look at the generated Vala code? Does it map 1:1 to C? Or by runtime overhead do you mean nothing other than GObject? And it is a bit buggy on Windows!? It also doesn't have annotations in the C# sense, nor the dynamic keyword, nor PLINQ.
      "I don't see any feature"... is fine. It still leaves the point true: abstractions cost something a lot of the time. Sometimes the price is small, sometimes it makes a language specification really huge and hard to understand (like C++). I mean, the smartphone forced us to think of phones in terms of "gestures", the "post-PC era" and so on, which is really far, far away from the phone that was barely digital (in the past I caught my grandma using a rotary dial to call numbers).
      And I do think that even by today's standards the basic phone is still good enough for many uses, while a smartphone can sometimes be too complex to grasp ("install an application from the Marketplace" makes no sense in many ways, at least for the basics). But once you get used to it, you simply cannot look back.
      Should we write only binary files, since INI or XML files are abstractions and too slow to parse? Certainly binary files are faster. Should we save our data to a hand-rolled backing store and not use a database? Imagine: most databases use inter-process communication, and the ones that don't are somewhat slow; even the performant ones use a query engine that sometimes involves a JIT (I'm thinking of the Oracle query planner, which uses a Java VM with HotSpot Server to optimize and dynamically generate the query conditions).
      At the end, should we all use Windows 95/NT 4.0!? They are small (I remember Win95 was about 40 MB on disk, Win95 OSR2 about 120 MB, NT4 about 100+ MB), they would start instantly on today's hardware, had 64-bit support (only NT), and there would be no multiple runtimes to support, so no headaches. They did not have a (functional) browser control, so no hassles with incompatible HTML. In fact we would not need ClearType, 3D Aero and 512 MB of OS usage, when all three of those OSes would work just fine with 32 MB, and 64 MB would have been the high end of their time.
      Last edited by ciplogic; 27 February 2012, 06:57 PM.



      • Originally posted by Togga View Post
        I think this is an excellent choice: an interpreted high-level language for quick prototyping/scripting/etc., and then a portable lower-level language for performance. .NET is an environment that isn't particularly good in either of these directions. You can also use these high-level languages to auto-generate low-level code, in C for instance, and you get both customized high-level abstractions and performance.
        If you agree that Ruby/Python are great interpreters for prototyping, I can say that .NET fits the bill really well here:
        - Boo is an interpreted language if you use it with booi (and you get the "batteries": a lot of the utility libraries of .NET/Mono). It is Python-inspired and has dynamic dispatch. When you get to a form you're happy with, all you have to do is disable the dynamic dispatch feature, and you get a performance boost wherever dynamic dispatch was running. Still not enough performance? Recompile the code with booc.exe.
        - Still slow (like on startup)? Try compiling ahead of time, or use Mono --llvm.
        - Still slow? Use C++ and call the code with P/Invoke for that critical loop (though you will rarely hit a case where you need to).
        You can use C# in this way too, and you also have an interpreter mode, so you may feel a bit of lag as you develop (like 0.2-0.3 seconds to compile small/big functions, if you call a lot of code), as it sometimes JITs as you go.
        In fact C# permits all of this at once: static compilation (like C++), JIT-ting, and interpreting in REPL style, where your code is interpreted but the hot code is JIT-ted, so as long as you stay in a small patch of code, the exceptional cases and the small JIT pauses will not pop up all the time.
        I benchmarked this once, in response to some myths (i.e. slow startup of non-compiled languages): Mono apps (MonoDevelop + Banshee) started in 20 seconds, the C++ ones (QtCreator + Amarok) in 25 seconds.
        Can you give a circumstance where Mono or .NET was too slow? Can you give a piece of code that runs so slow that Mono isn't worth considering?



        • Originally posted by BlackStar View Post
          Hey, an actual language discussion without a flamewar! Who would expect that on Phoronix?
          Heh, this thread just keeps on trucking, it seems, and yes, you are right, it's been uncommonly civil here, which is a very nice surprise.

          Originally posted by BlackStar View Post
          Haven't tried Go yet, but I've read its samples and I wasn't too impressed (like I was with Ocaml and its F# offspring, for instance). What is its "killer" feature that would convert people?
          Well, as I said I've just looked over the language as of yet and made some short snippets to get a feel of it so I'm really not qualified to make that judgement, but for me personally the big reason I found the language interesting was that it had built-in concurrency properties (goroutines, channels).

          Originally posted by BlackStar View Post
          The compiler will happily accept both versions and the error will go unnoticed unless you call glGetError() - which most people don't do, since it kills performance.
          Well you could put the glGetError() call in a macro which is expanded into testing/logging the error or not based on a defined constant and thus could be disabled for production builds where you'd not want the performance loss.



          • Originally posted by ciplogic View Post
            - Boo is an interpreted language if you use it with booi (and will get the "batteries" = a lot of utility libraries of .Net/Mono).
            Well, this is not an option for all the reasons named in previous posts. If I want performance, I'm certainly not choosing .NET. If I want existing Python code to go faster, I'd use PyPy or compile to native through its LLVM backend. Both are better than dragging in a VM with an in-practice non-portable framework. As I said before, when I want performance, I'll go directly native with full control of the situation. I have yet to fail to get the batteries I need in C/C++.

            Originally posted by ciplogic View Post
            It is Python inspired
            Python-inspired does not give you all the batteries Python has (like scipy/numpy). Going from one virtual machine to another doesn't add any value. Python is not my choice for performance.

            Originally posted by ciplogic View Post
            May you give a circumstance where Mono or .Net it was too slow? May you give a piece code that runs so slow, that Mono doesn't worth considering?
            Mono (and .NET) is just unnecessary baggage which doesn't bring me anything of value. If I were targeting client applications on the web (a possibly valid reason for a VM), Java/JavaScript is the king out there and HTML5 is the future.

            Syntactic sugar doesn't solve any real world problems.



            • Originally posted by ciplogic View Post
              You say that C# (I think you refer to it) it brings only restrictions to the table.
              No, I mean .NET and the CLR, MSIL and all the restrictions they impose on you, along with the baggage you have to drag with you whichever platform you choose to target.


              Originally posted by ciplogic View Post
              In fact there are instructions in MSIL (for example "Tail" which will make a tail call, that you cannot call in C#, even 4.0, but is specially used for functional languages).
              Here you mention one of them: the restriction to MSIL. Going native, you don't have any such restrictions and can get creative all the way down to the core.

              Originally posted by ciplogic View Post
              In fact every compiler/platform brings restrictions.
              Sure. You only have to watch out for what gives you the fewest restrictions and/or the most possibilities. .NET, for instance, is itself not written in .NET.


              Originally posted by ciplogic View Post
              what bothers you? The idea that .Net is too big?
              It just doesn't add any value for me, only restrictions and unnecessary baggage. Bringing in a VM should be a separate architectural decision for a product, not one driven by "nice syntax" etc.


              Originally posted by ciplogic View Post
              If you mean that C++ applications have any benefit to portability, compared to Mono, I'm curious of your platform to develop in, is it Qt?
              Qt is a really good framework, even though I think that a pure C API is cleaner and that they put a few too many features under their umbrella. If you separate logic from UI well enough, the UI could be anything from a script to a web page, or even automatically generated, using any framework as a backend.


              Originally posted by ciplogic View Post
              Yes, a lot of UIs are generated and are good ones. They are not optimized, if by the "best UI" you mean one that never sets a property twice.
              I mean "UI" from a user perspective.

              Originally posted by ciplogic View Post
              Did you ever looked the generated Vala code? Is mapping 1:1 to C? Or by runtime overhead, you mean, nothing else than GObject? And it is a bit buggy on Windows!? Or that it doesn't have annotations in the C# sense? And it doesn't have dynamic keyword, or PLinq.
              Yes. As with .NET, it will not be more stable than the underlying libraries, which are what put restrictions on the system. My point was that this language does not need the overhead of a virtual machine to get all its features, and it works wherever C works. There is a big difference between getting the GObject library to compile on a new platform and bringing .NET over there.

              I don't have much Vala coding experience myself, but I have good experience with some applications written in Vala in terms of performance and resource consumption. See for instance Shotwell: http://yorba.org/shotwell/


              Originally posted by ciplogic View Post
              Should we write only binary ...
              Should we write for saving data our back-store database and not using a database? ...
              I'd say be creative and go with sound experience. If in doubt, go the route with the fewest restrictions (it lets you move either way), and make smart, simple designs (not inventing problems that aren't there) and library choices.
              Last edited by Togga; 28 February 2012, 03:59 PM.



              • Originally posted by XorEaxEax
                Well, as I said I've just looked over the language as of yet and made some short snippets to get a feel of it so I'm really not qualified to make that judgement, but for me personally the big reason I found the language interesting was that it had built-in concurrency properties (goroutines, channels).
                Well, I've been toying around with Go these past few days and it has morphed into a pretty interesting little language.

                The lack of object hierarchies and implicit interface implementations are surprisingly nice features! I've long campaigned in favor of implicit interfaces (if my object implements Foo() I should be able to pass it to a method accepting IFoo) and the rationale in Go makes lots of sense (more modular code). Object composition is much better aligned with game code than deep object hierarchies, so that's a big win too.

                The only downside is that there's no MSIL compiler yet, so I can't easily combine it with my existing codebase (~1MLoC).

                Originally posted by Togga View Post
                Yes. As with .NET, it will not be more stable than the underlying libraries, which are what put restrictions on the system. My point was that this language does not need the overhead of a virtual machine to get all its features, and it works wherever C works. There is a big difference between getting the GObject library to compile on a new platform and bringing .NET over there.
                Not necessarily. You seem to be confusing the .Net BCL with the .Net VM. They are completely separate things. The .Net VM itself is written in C++, is self-contained and is quite portable - there are people who have ported it to the PSP, to Maemo and half a dozen other exotic platforms. The BCL, on the other hand, is huge. The core parts are written in C# and are portable, with some work, but other parts (like System.Windows) are inherently non-portable.

                Fortunately, you need very few things to write games: System, System.Core, System.Net and System.Xml, more or less. Add an OpenGL and an OpenAL binding and you have everything you need, in a tiny, portable package (~4MB compressed).

                It's not as if C++ doesn't have the same problem: by the time you add pthreads, tinyxml, sdl, a network library and a scripting library, you've reached the exact same overhead. This is stuff you need, one way or another - no matter the language.

                Unfortunately, most people think that writing a .Net game requires the installation of a huge VM with hundreds of megs of stuff. That's simply not true: 4MB is all you need.

                I don't have much Vala-coding experience myself but I have good experience with some applications written in Vala in terms of performance and resource consumption. See for instance Shotwell : http://yorba.org/shotwell/
                Vala is pretty nice. It's C# for people who want to use C# without the whole .Net package. This actually proves how nice a language C# really is, even if you don't like .Net itself.



                • Originally posted by XorEaxEax View Post
                  Well you could put the glGetError() call in a macro which is expanded into testing/logging the error or not based on a defined constant and thus could be disabled for production builds where you'd not want the performance loss.
                  That's still runtime checking, not compiler checking. If I pass the wrong parameter to a function, I want to rely on the compiler to stop me. Waiting for some poor tester to report a random texture corruption bug a few days later is just not the same.



                  • Originally posted by Togga View Post
                    Mono (and .NET) is just unnecessary baggage which doesn't bring me anything of value. If I were targeting client applications on the web (a possibly valid reason for a VM), Java/JavaScript is the king out there and HTML5 is the future.
                    Syntactic sugar doesn't solve any real world problems.
                    You're right (on syntax sugar). I do think it doesn't bring anything of value to you. The people who defend C#/Miguel/whatever are somewhat tired of the myths that grow big for emotional reasons (like: it is a Microsoft project, or other matters like that). We don't hold sacred either C# or its VM design. In many instances we would use C/C++. But if we need to make an application with an updater, it is easier to take the already-made WebClient class, combine it with the Windows Forms framework, and ask the user: a new update is available, do you want to download it? And do the downloading of the upgrade on a separate thread. If you target original Windows XP or later, .NET is already installed, and an application like that would be just a few KB of extra functionality. Doing it in C++ would be really harder. How would you download a file from the internet to disk, given the URL? (I honestly don't know; call wget!?)
                    If you're talking about JS and its powers, I think we also agree that it is the most desirable experience. In fact JS/HTML5 is one of those examples that is mature yet not so consistent in performance and support; Mono fares much better in that regard when compared with .NET. If you target a game, should it use CSS that is accelerated by IE9, or should you use WebGL? Should you use a computational kernel in JS to make a real-time game? Here is where JS still suffers (compared with Mono, the Firefox implementation does not have a very fast GC yet, though soon it will; even so, it gets better and better).
                    JS is part of QML, which is extra baggage too, yet it works well enough on everything from Symbian Nokia phones to desktop applications.
                    At the end, syntax sugar is just about abstractions. When using smart pointers, boost::smart_ptr (if I recall right) increments/decrements references so the user will not get the count wrong. For skilled programmers this may appear unnecessary, but when you work with multiple frameworks it is really hard to track all the references all around the application. Many kinds of syntax sugar do solve real-world problems: generics/templates prevent typical mistakes that users do face. A (C-style) for loop is sometimes reversed by the compiler, some subexpressions are hoisted out of the loop (google "loop-invariant code motion"), and the loop itself may be unrolled to enable auto-vectorization and improve branch prediction on a specific CPU. Some abstractions we are comfortable with (virtual ... = 0; code where a method can be virtual and abstract, like: virtual void Drive() = 0 { TurnWheel(); }), or multiple inheritance, are not accepted by other languages. Which is better?
                    As for me it is hard to choose, but everyone thinks differently about what is better, so it is hard to judge what is great from outside your particular skill set.
                    One last thing: the "tail" MSIL instruction is just a power of the CLR that makes no sense for the C++ world, or the C# world (maybe a bit for LINQ, but that is another matter), as it is a functional-language feature. "tail" works much like the "register" keyword in C, which hints the compiler to try to put a variable into a register.
                    Last edited by ciplogic; 29 February 2012, 09:48 AM.



                    • Originally posted by BlackStar View Post
                      Waiting for some poor tester to report a random texture corruption bug a few days later is just not the same.
                      Not really following this; you can create a macro which reports any glGetError() codes during runtime, together with the source file and line number where the error took place. You can also make it so that the checking can be disabled, and thus have no performance impact on final builds. Something like this:

                      Code:
                      #ifndef _GLERROR_H_
                      #define _GLERROR_H_
                      
                      #include <stdio.h>
                      #include <GL/gl.h>
                      
                      #define _GLERROR_ENABLED_ /* comment out to disable glGetError() checking */
                      
                      #ifdef _GLERROR_ENABLED_
                      /* do/while(0) keeps the macro safe in unbraced if/else bodies */
                      #define GLERROR() do { \
                          GLenum err = glGetError(); \
                          if (err != GL_NO_ERROR) \
                              printf("GLError:%d in file:%s at line:%d\n", (int)err, __FILE__, __LINE__); \
                      } while (0)
                      #else
                      #define GLERROR() ((void)0)
                      #endif
                      
                      #endif /* _GLERROR_H_ */
                      This would catch faulty parameters during runtime, giving you the file and line in which they occurred, and not force you to wait for some tester to report a texture bug. Granted, it's nicer if the compiler catches the error, but it's not something I would switch languages for.

