Mono 2.8 Is Out With C# 4.0, Better Performance


  • Well, first of all, my opinion is that Mono is fine as it is: a Microsoft technology environment released to Novell to avoid lawsuits or dark antitrust scenarios, used by maybe eight people on Linux. And since Linux is an OSS-based OS, you are free to use what you want, so let those eight people keep using that environment if they want to.

    I don't see a problem with Mono's existence, since it could help developers coming from Windows get some work done and use Linux more. But let's be serious: Mono, i.e. C#, will stay as it is for a long time. Beyond a few small apps, it will stay in the dark corners of some distros here and there. The realistic chance that Mono ever becomes a strongly supported language/platform on Linux is close to zero; for God's sake, not even Novell has migrated its own tools to Mono in its proprietary distros, and if you ask me that is very LOL.

    Choosing a language because it has "better memory management" doesn't make sense to me, especially a language born on the most crash-prone, unstable OS ever written (yes, Windows, even 7). What that feature really means is "any sloppy coder can put three variables together and not crash the app", and you can find that to some extent in any interpreted or VM language: Mono, Java, Python, PHP, etc. You are just experiencing a property that is natural in any VM language, backed by the strong memory managers in Unix-based kernels and some nice GUI toolkits. It's not like Mono invented it, and for God's sake, neither did .NET, which isn't exactly bulletproof or great at avoiding memory leaks on Windows either.

    About C++/Qt memory management being "crappy" or hard or whatever you called it: I admit that C/C++ can be tricky for newcomers or mid-level developers, because there are many ways to handle memory statically and dynamically, but that's about it. If you get segfaults all the time, please spend some money on a good C/C++ book and maybe a course, because you learned something very wrong along the way. (Especially when you say so in a forum of Unix-like OS users, whose systems are written almost exclusively in C/C++: millions and millions of lines of C/C++ code, and they are still considered among the most stable and reliable OSes out there, including near-total dominance of the 500 most powerful supercomputers in the world. So no, it's not us, it's you.)

    By the way, just as a question: why do you assume that getting a SIGSEGV is something wrong or evil? Maybe you are confusing segfaults with crashes from the Windows world. When you develop apps on Unix-like OSes (not only Linux but BSD, AIX, HP-UX, etc.), you have to understand that, unlike Windows, the OS won't let you do as you please. The OS sets limits on what you can and cannot do, and that is what signals mean: you did something illegal, and the app is terminated to maintain the integrity of the rest of the system because the developer did something wrong. That is the right way to handle processes, like it or not (I admit it can be tricky at first, but it is still the right thing to do, and developers have to get used to it). No matter what you do, the OS will never let you dereference a null pointer, and the memory manager will never let you write to a memory region not owned by your app, unlike Windows >.<. So SIGSEGV is not a crash; it's a polite way of informing you that you did something the OS doesn't consider right.

    VM'd languages like Mono are tied to the same restrictions; they are just handled by the core libraries in a more automatic way, in some cases using ugly hacks to prevent your app from hitting SIGSEGV. In C/C++ you just need to be a bit more careful, or use a simple memory manager to keep track of your pointers, and you get the same thing without the performance penalty.

    Comment


    • Originally posted by NoEffex View Post
      If there's a bug in the C++ compiler then it will crash.

      Basically, it's because there's more control. Simplicity = (Or has a lot to do with) Reliability 99999 times out of 100000
      LOL.

      I guess you compile all your C programs in debug mode to make them more reliable, then, huh? Because all those optimization passes complicate the code and compiler by about 1000 times more than a simple unoptimized compilation. You get more control as well, since the compiler isn't rewriting your code into a more optimized form, but just translating straight to assembly.

      Comment


      • Originally posted by smitty3268 View Post
        LOL.

        I guess you compile all your C programs in debug mode to make them more reliable, then, huh? Because all those optimization passes complicate the code and compiler by about 1000 times more than a simple unoptimized compilation. You get more control as well, since the compiler isn't rewriting your code into a more optimized form, but just translating straight to assembly.
        Well, I don't know how far you understand how compilation works in low-level (compiled) languages, but no matter what options you pass to the compiler, it always emits assembly, because the processor can't understand anything else <.< — assembly is the closest you will ever get to talking to the CPU while keeping some sort of human readability.

        Let me explain a bit. The options you pass to the compiler act like switches for automatic optimizations the compiler performs on its output, to improve certain aspects of the code such as performance, memory alignment, platform compatibility, or compilation speed. So -g (debug) just tells the compiler to skip certain optimizations and adjustments to the generated assembly, so that a debugger, profiler, or any other tool you want to use can get better information from the resulting code.

        Take the option -funroll-loops, for example. It won't generate some alien kind of super code; it just tells the compiler to expand your loops (for/while, etc.) so you don't have to do it manually in your C/C++ code. Your source stays more readable, and you gain some performance by eliminating the branch and counter overhead a loop otherwise requires, at the cost of more generated assembly.

        Mono does this too, but in its runtime: your C# code doesn't talk to the CPU directly the way C/C++/D/Pascal/COBOL/etc. do. Instead it is compiled to an intermediate representation (CIL bytecode), and the runtime's JIT compiler translates that into actual machine code on the fly (same with .NET on Windows). That is why it's known as a managed language and why you need a runtime set of tools: the runtime and JIT are what generate the real machine code that talks to the CPU, and Mono can optionally use LLVM, a real compiler backend, for that job.

        With managed languages it is easier to debug your code, because you never deal with the CPU directly; you deal with how the runtime handles the intermediate representation you wrote in C# and turns it into calls into the actual libraries that talk to the CPU. Since those runtime libraries are quite standard and well debugged, the runtime can easily catch the coder screwing things up, or even do some last-second optimization, at the expense of much lower runtime performance compared to C/C++, which goes straight to the CPU.

        Comment


        • Originally posted by jrch2k8 View Post
          Well, I don't know how far you understand how compilation works in low-level (compiled) languages, but no matter what options you pass to the compiler, it always emits assembly, because the processor can't understand anything else <.< — assembly is the closest you will ever get to talking to the CPU while keeping some sort of human readability.

          Let me explain a bit. The options you pass to the compiler act like switches for automatic optimizations the compiler performs on its output, to improve certain aspects of the code such as performance, memory alignment, platform compatibility, or compilation speed. So -g (debug) just tells the compiler to skip certain optimizations and adjustments to the generated assembly, so that a debugger, profiler, or any other tool you want to use can get better information from the resulting code.

          Take the option -funroll-loops, for example. It won't generate some alien kind of super code; it just tells the compiler to expand your loops (for/while, etc.) so you don't have to do it manually in your C/C++ code. Your source stays more readable, and you gain some performance by eliminating the branch and counter overhead a loop otherwise requires, at the cost of more generated assembly.

          Mono does this too, but in its runtime: your C# code doesn't talk to the CPU directly the way C/C++/D/Pascal/COBOL/etc. do. Instead it is compiled to an intermediate representation (CIL bytecode), and the runtime's JIT compiler translates that into actual machine code on the fly (same with .NET on Windows). That is why it's known as a managed language and why you need a runtime set of tools: the runtime and JIT are what generate the real machine code that talks to the CPU, and Mono can optionally use LLVM, a real compiler backend, for that job.

          With managed languages it is easier to debug your code, because you never deal with the CPU directly; you deal with how the runtime handles the intermediate representation you wrote in C# and turns it into calls into the actual libraries that talk to the CPU. Since those runtime libraries are quite standard and well debugged, the runtime can easily catch the coder screwing things up, or even do some last-second optimization, at the expense of much lower runtime performance compared to C/C++, which goes straight to the CPU.
          I know exactly how compilers work, I created a toy one once for a college class. Technically, the optimization passes generally operate on IR (usually a tree-structure) before the assembly is output at the end of the process.

          You are correct that I should have said "non-optimized" (-O0) rather than "debug" (-g). Other than that, I think we agree? Or else I'm missing the point of your comment.

          Every optimization pass is extra complexity in the compiler for it to possibly screw up, and less control in the hands of the original coder. That is obviously a very good trade, but it is exactly the opposite of what the original poster claimed would make C more reliable than C++.

          Comment


          • Originally posted by smitty3268 View Post
            I know exactly how compilers work, I created a toy one once for a college class. Technically, the optimization passes generally operate on IR (usually a tree-structure) before the assembly is output at the end of the process.

            You are correct that I should have said "non-optimized" (-O0) rather than "debug" (-g). Other than that, I think we agree? Or else I'm missing the point of your comment.

            Every optimization pass is extra complexity in the compiler for it to possibly screw up, and less control in the hands of the original coder. That is obviously a very good trade, but it is exactly the opposite of what the original poster claimed would make C more reliable than C++.
            I agree with you then XD. Of course that guy is wrong if he meant it that way. The only things that could theoretically make C more reliable are the absence of C++'s more advanced memory management (among many other things, of course; this just seems the most drastic to me), templates, and maybe even polymorphism. But I have spent years cranking up the compiler flags without many issues (of course it sometimes requires improving some code here and there, but that is expected).

            And at the end of the day, C and C++ are old enough to avoid surprises, so once you code properly the binary quality is almost or exactly the same these days. Unless, of course, you use a new flag that isn't old enough or tested enough to be safe, but then you can't blame the compiler XD. So yes, you are right: these days it is a developer problem, not a compiler one.

            Well, in the old days you could bring GCC to its knees with some bugs, but since the 4.x series it is really hard to find a compiler bug bad enough to produce garbage in the binary, assuming the code is well written, of course.

            So sorry, I misunderstood your comment XD, my bad.

            Comment


            • and at the end of the day C and C++ are old enough to avoid surprises
              Gentoo GCC 4.5 porting. I'm counting 116 bugs caused by a minor version change of a single compiler. God forbid that you should try with Clang or ICC instead.

              Or take a look at the C++ portability guide for Mozilla: "don't use static constructors", "don't use exceptions", "don't use RTTI", "be careful of variable declarations that require construction or initialization" - just read it for yourself, there's some golden stuff in there. They forbid half the language, for god's sake!

              Is this how mature languages work? Because if so, just give me immature C#, which I can compile with any compiler and have that single binary run on every supported OS *and* CPU architecture, without recompilation or strange compatibility issues. It just works (and I have done this time and time again). And if I need the extra speed, I'll inject native code into the hotspots and precompile the rest at installation time.

              It's amazing what a non-retarded language spec can do for you.

              PS: how's your C++ ABI feeling today?

              Comment


              • Originally posted by BlackStar View Post
                Damn, it's so extremely productive that one developer can trivially implement that which took Gimp around a decade to fix: dockable panels.
                When Gimp first appeared, many of the current technologies didn't exist yet. Plus, that's something which can be done easily in Qt/KDE... Your point?

                Comment


                • Originally posted by Apopas View Post
                  When Gimp first appeared, many of the current technologies didn't exist yet. Plus, that's something which can be done easily in Qt/KDE... Your point?
                  Let me break it down for you:
                  1. Gimp uses GTK+. Pinta uses GTK#.
                  2. Gimp will introduce dockable panels in 2.8, 13 years after its introduction and after huge pressure from the community. The implementation took 2 years (v2.6 was released without dockable panels in 2008, v2.7.1 was released with dockable panels in 2010).
                  3. Pinta introduced dockable panels in v0.4, 2 months after its introduction and 1 month after v0.3.

                  Two conclusions:
                  a. Gimp's developers were initially opposed to dockable panels.
                  b. Once they decided to implement them, they needed 2 years. Pinta needed 1 month (v0.3 vs v0.4).

                  In short, some UI features seem to be significantly easier to implement in Mono/GTK# compared to C/GTK+. Ergo, the former seems to be a more productive environment for UIs.

                  Comment


                  • Fake edit: single-window mode was initially committed to Gimp Git in October 2009 (1 year after v2.6). It was first released to the public in v2.7.1 (2 years after v2.6).

                    Comment


                    • Originally posted by BlackStar View Post
                      Damn, it's so extremely productive that one developer can trivially implement that which took Gimp around a decade to fix: dockable panels.
                      It took them so long because nobody was interested in implementing something like this, AFAIK. BTW, isn't there a single-window Qt front end for GIMP?

                      Comment
