Mono 2.8 Is Out With C# 4.0, Better Performance


  • kraftman
    replied
    Originally posted by BlackStar:
    Is this how mature languages work? Because if so, just give me immature C#, which I can compile with any compiler and have that single binary run on every supported OS *and* CPU architecture, without recompilation or strange compatibility issues. It just works (and I have done this time and time again). And if I need the extra speed, I'll inject native code into the hotspots and precompile the rest at installation time.
    It sounds like a call for newbies who can't program in C/C++, but Mono allows them to make some shitty programs which suffer from huge memory leaks, terrible programming techniques and terrible performance. But hey, at least it compiles. ;P

  • kraftman
    replied
    Originally posted by BlackStar:
    Damn, it's so extremely productive that one developer can trivially implement that which took Gimp around a decade to fix: dockable panels.
    It took them so long because nobody was interested in implementing something like this, afaik. Btw, isn't there a single-window Qt front end for GIMP?

  • BlackStar
    replied
    Fake edit: single-window mode was initially committed to Gimp GIT in October 2009 (1 year after v2.6). It was first released to the public in v2.7.1 (2 years after v2.6).

  • BlackStar
    replied
    Originally posted by Apopas:
    When Gimp came around, many of the current technologies were not in existence. Plus, that's something which can be done in Qt/KDE easily... Your point?
    Let me break it down for you:
    1. Gimp uses GTK+. Pinta uses GTK#.
    2. Gimp will introduce dockable panels in 2.8, 13 years after the program's introduction and after huge pressure from the community. The implementation took 2 years (v2.6 was released without dockable panels in 2008, v2.7.1 was released with dockable panels in 2010).
    3. Pinta introduced dockable panels in v0.4, 2 months after its introduction and 1 month after v0.3.

    Two conclusions:
    a. Gimp's developers were initially opposed to dockable panels.
    b. Once they decided to implement them, they needed 2 years. Pinta needed 1 month (v0.3 vs v0.4).

    In short, some UI features seem to be significantly easier to implement in Mono/GTK# compared to C/GTK+. Ergo, the former seems to be a more productive environment for UIs.
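
    To give a taste of what I mean, here is a minimal GTK# sketch (my own illustration, not Pinta's actual code; GTK# 2 assumed) that fakes a dockable tool panel with a paned container:

    using Gtk;

    class PanelSketch
    {
        static void Main()
        {
            Application.Init();

            Window window = new Window("GTK# sketch");
            window.SetDefaultSize(600, 400);
            window.DeleteEvent += delegate { Application.Quit(); };

            // HPaned gives a draggable divider: a crude stand-in for a
            // dockable tool panel sitting next to the canvas.
            HPaned paned = new HPaned();
            paned.Pack1(new Label("tool panel"), false, false);
            paned.Pack2(new Label("canvas"), true, false);

            window.Add(paned);
            window.ShowAll();
            Application.Run();
        }
    }

    The C/GTK+ equivalent needs g_signal_connect plumbing and manual memory handling around the same dozen lines of logic.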

  • Apopas
    replied
    Originally posted by BlackStar:
    Damn, it's so extremely productive that one developer can trivially implement that which took Gimp around a decade to fix: dockable panels.
    When Gimp came around, many of the current technologies were not in existence. Plus, that's something which can be done in Qt/KDE easily... Your point?

  • BlackStar
    replied
    and at the end of the day, C and C++ are old enough to avoid surprises
    Gentoo's GCC 4.5 porting tracker: I'm counting 116 bugs caused by a minor version change of a single compiler. God forbid you should try with Clang or ICC instead.

    Or take a look at the C++ portability guide for Mozilla: "don't use static constructors", "don't use exceptions", "don't use RTTI", "be careful of variable declarations that require construction or initialization" - just read it for yourself, there's some golden stuff in there. They forbid half the language, for god's sake!

    Is this how mature languages work? Because if so, just give me immature C#, which I can compile with any compiler and have that single binary run on every supported OS *and* CPU architecture, without recompilation or strange compatibility issues. It just works (and I have done this time and time again). And if I need the extra speed, I'll inject native code into the hotspots and precompile the rest at installation time.
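
    Concretely, "inject native code into the hotspots" means the usual P/Invoke route, sketched below (the library and function names are made up for illustration), and "precompile the rest" is Mono's ahead-of-time mode (mono --aot MyApp.exe):

    using System;
    using System.Runtime.InteropServices;

    class HotspotSketch
    {
        // Hypothetical native routine, compiled from C into libhotspot.so
        // (hotspot.dll on Windows); the runtime resolves the name per platform.
        [DllImport("hotspot", EntryPoint = "sum_squares")]
        static extern double SumSquares(double[] data, int length);

        static void Main()
        {
            double[] data = { 1.0, 2.0, 3.0 };
            // Managed code everywhere else; native code only in the hot path.
            Console.WriteLine(SumSquares(data, data.Length));
        }
    }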

    It's amazing what a non-retarded language spec can do for you.

    PS: how's your C++ ABI feeling today?

  • jrch2k8
    replied
    Originally posted by smitty3268:
    I know exactly how compilers work; I created a toy one for a college class once. Technically, the optimization passes generally operate on IR (usually a tree structure) before the assembly is output at the end of the process.

    You are correct that I should have said "non-optimized" (-O0) rather than "debug" (-g). Other than that, I think we agree? Or else I'm missing the point of your comment.

    Every optimization pass is extra complexity in the compiler that can possibly screw up, and less control in the hands of the original coder. This is obviously a very good thing overall, but it is exactly the opposite of what the original poster claimed would make C more reliable than C++.
    I agree with you then XD. Of course that guy is wrong if he meant it that way. The only thing that can theoretically make C more reliable is its lack of the advanced memory management present in C++ (among many other things, of course; this just seems the most drastic to me), plus templates and even polymorphism. But I have spent years cranking up the compiler flags without much issue (of course it sometimes requires improving some code here and there, but that is expected).

    And at the end of the day, C and C++ are old enough to avoid surprises, so once you code properly the binary quality is almost or exactly the same these days. Unless, of course, you use a new flag that isn't old enough or tested enough to be safe, but then you can't blame the compiler XD. So yes, you are right: it's a dev problem, not a compiler one, these days.

    Well, in the old days you could bring gcc to its knees with some bugs, but since the 4.x series it is really hard to find a bug in the compiler bad enough to produce garbage in the binary, assuming the code is well written, of course.

    So sorry, I misunderstood your comment XD. My bad.

  • smitty3268
    replied
    Originally posted by jrch2k8:
    Well, I don't know how far you understand how compilation works in low-level languages, aka compiled languages, but no matter what options you pass to the compiler you use, errrm, it always writes assembly, because the processor can't understand anything else <.< Assembly is the closest thing you will ever get for talking to the CPU while keeping some sort of human readability.

    Let me explain it a bit. When you compile and start passing options to the compiler, they are just like wrappers, aka automatic optimizations that the compiler performs over the assembly output to improve certain aspects of the code, like performance, memory alignment, platform compatibility, compilation speed, etc. So -g, or debug, just informs the compiler to avoid certain optimizations and adjustments to the resulting assembly code, to allow you to get better info from a debugger or a profiler or any other tool you want to use to perform some kind of check on the resulting code.

    So, for example, take the option -funroll-loops. This won't generate an alien type of super code or anything; you just inform your compiler to expand your loops (for/while/foreach, etc.) so you avoid having to do it manually in your C/C++ code, making it more readable, and you gain some performance by shedding the freaking ton of assembly code required to make a loop work.

    Mono does exactly this too, but in the core libraries. Your C# code doesn't talk to the CPU directly like C/C++/D/Pascal/Cobol/etc.; instead it is an intermediate representation that is translated to C or ASM or both by the core libraries of Mono (same with Windows .NET). That is why it is also known as an interpreted language, and that is why you need a runtime set of tools: the runtime and the JIT are the ones that generate on the fly the actual C/ASM code to talk to the CPU, and that is why they depend on LLVM, aka a real C/C++ compiler.

    In interpreted languages it is easier to debug the code, because you never deal with the CPU, only with how the core deals with the intermediate representation you wrote in C# to create the respective call or link to the actual libraries that will talk to the CPU. And since those libraries are quite standard, after some time of debugging from the core project you can easily predict when the coder is screwing things up, or even do some last-second optimization, at the expense of much lower runtime performance compared to C/C++, which goes straight to the CPU.
    I know exactly how compilers work; I created a toy one for a college class once. Technically, the optimization passes generally operate on IR (usually a tree structure) before the assembly is output at the end of the process.

    You are correct that I should have said "non-optimized" (-O0) rather than "debug" (-g). Other than that, I think we agree? Or else I'm missing the point of your comment.

    Every optimization pass is extra complexity in the compiler that can possibly screw up, and less control in the hands of the original coder. This is obviously a very good thing overall, but it is exactly the opposite of what the original poster claimed would make C more reliable than C++.
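
    To make the IR point concrete, here is a toy constant-folding pass over an expression tree, in C# (names invented; roughly the kind of thing my college-class compiler did before emitting any assembly):

    using System;

    // A toy IR: expression trees built from integer constants and additions.
    abstract class Expr { }
    class Const : Expr { public readonly int Value; public Const(int v) { Value = v; } }
    class Add : Expr { public readonly Expr Left, Right; public Add(Expr l, Expr r) { Left = l; Right = r; } }

    class Folding
    {
        // One optimization pass: rewrite the tree, folding constant additions.
        static Expr Fold(Expr e)
        {
            Add add = e as Add;
            if (add == null)
                return e;                       // constants pass through unchanged
            Expr l = Fold(add.Left);
            Expr r = Fold(add.Right);
            Const cl = l as Const, cr = r as Const;
            if (cl != null && cr != null)
                return new Const(cl.Value + cr.Value);
            return new Add(l, r);
        }

        static void Main()
        {
            Expr tree = new Add(new Const(1), new Add(new Const(2), new Const(4)));
            Console.WriteLine(((Const)Fold(tree)).Value);   // prints 7
        }
    }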

  • jrch2k8
    replied
    Originally posted by smitty3268:
    LOL.

    I guess you compile all your C programs in debug mode to make them more reliable, then, huh? Because all those optimization passes make the code and the compiler about 1000 times more complicated than a simple unoptimized compilation. You get more control as well, since the compiler isn't rewriting your code into a more optimized form, just translating it straight to assembly.
    Well, I don't know how far you understand how compilation works in low-level languages, aka compiled languages, but no matter what options you pass to the compiler you use, errrm, it always writes assembly, because the processor can't understand anything else <.< Assembly is the closest thing you will ever get for talking to the CPU while keeping some sort of human readability.

    Let me explain it a bit. When you compile and start passing options to the compiler, they are just like wrappers, aka automatic optimizations that the compiler performs over the assembly output to improve certain aspects of the code, like performance, memory alignment, platform compatibility, compilation speed, etc. So -g, or debug, just informs the compiler to avoid certain optimizations and adjustments to the resulting assembly code, to allow you to get better info from a debugger or a profiler or any other tool you want to use to perform some kind of check on the resulting code.

    So, for example, take the option -funroll-loops. This won't generate an alien type of super code or anything; you just inform your compiler to expand your loops (for/while/foreach, etc.) so you avoid having to do it manually in your C/C++ code, making it more readable, and you gain some performance by shedding the freaking ton of assembly code required to make a loop work.
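
    Written out by hand in C#, just to show the shape of the transformation (gcc applies it to the generated assembly, not to your source):

    class UnrollSketch
    {
        // The plain loop: one bounds test and branch per element.
        static int Sum(int[] data)
        {
            int sum = 0;
            for (int i = 0; i < data.Length; i++)
                sum += data[i];
            return sum;
        }

        // The same loop unrolled by 4: one test and branch per four
        // elements, plus a short tail loop for the leftovers.
        static int SumUnrolled(int[] data)
        {
            int sum = 0;
            int n = data.Length - data.Length % 4;
            for (int i = 0; i < n; i += 4)
                sum += data[i] + data[i + 1] + data[i + 2] + data[i + 3];
            for (int i = n; i < data.Length; i++)
                sum += data[i];
            return sum;
        }
    }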

    Mono does exactly this too, but in the core libraries. Your C# code doesn't talk to the CPU directly like C/C++/D/Pascal/Cobol/etc.; instead it is an intermediate representation that is translated to C or ASM or both by the core libraries of Mono (same with Windows .NET). That is why it is also known as an interpreted language, and that is why you need a runtime set of tools: the runtime and the JIT are the ones that generate on the fly the actual C/ASM code to talk to the CPU, and that is why they depend on LLVM, aka a real C/C++ compiler.

    In interpreted languages it is easier to debug the code, because you never deal with the CPU, only with how the core deals with the intermediate representation you wrote in C# to create the respective call or link to the actual libraries that will talk to the CPU. And since those libraries are quite standard, after some time of debugging from the core project you can easily predict when the coder is screwing things up, or even do some last-second optimization, at the expense of much lower runtime performance compared to C/C++, which goes straight to the CPU.

  • smitty3268
    replied
    Originally posted by NoEffex:
    If there's a bug in the C++ compiler then it will crash.

    Basically, it's because there's more control. Simplicity = (or at least has a lot to do with) reliability, 99999 times out of 100000.
    LOL.

    I guess you compile all your C programs in debug mode to make them more reliable, then, huh? Because all those optimization passes make the code and the compiler about 1000 times more complicated than a simple unoptimized compilation. You get more control as well, since the compiler isn't rewriting your code into a more optimized form, just translating it straight to assembly.
