Mono 2.8 Is Out With C# 4.0, Better Performance
-
Originally posted by BlackStar: Damn, it's so extremely productive that one developer can trivially implement what took Gimp around a decade to fix: dockable panels.
-
Fake edit: single-window mode was initially committed to Gimp git in October 2009 (1 year after v2.6). It was first released to the public in v2.7.1 (2 years after v2.6).
-
Originally posted by Apopas: When Gimp came around, many of the current technologies did not exist. Plus, that's something which can be done in Qt/KDE easily... Your point?
1. Gimp uses GTK+. Pinta uses GTK#.
2. Gimp will introduce dockable panels in 2.8, 13 years after its introduction and after huge pressure from the community. The implementation took 2 years (v2.6 was released without dockable panels in 2008, v2.7.1 was released with dockable panels in 2010).
3. Pinta introduced dockable panels in v0.4, 2 months after its introduction and 1 month after v0.3.
Two conclusions:
a. Gimp's developers were initially opposed to dockable panels.
b. Once they decided to implement them, they needed 2 years. Pinta needed 1 month (v0.3 vs v0.4).
In short, some UI features seem to be significantly easier to implement in Mono/GTK# compared to C/GTK+. Ergo, the former seems to be a more productive environment for UIs.
-
Originally posted by jrch2k8: and at the end of the day C and C++ are old enough to avoid surprises
Or take a look at the C++ portability guide for Mozilla: "don't use static constructors", "don't use exceptions", "don't use RTTI", "be careful of variable declarations that require construction or initialization" - just read it for yourself, there's some golden stuff in there. They forbid half the language, for god's sake!
Is this how mature languages work? Because if so, just give me immature C#, which I can compile with any compiler and have that single binary run on every supported OS *and* CPU architecture, without recompilation or strange compatibility issues. It just works (and I have done this time and again). And if I need the extra speed, I'll inject native code into the hotspots and precompile the rest at installation time.
It's amazing what a non-retarded language spec can do for you.
PS: how's your C++ ABI feeling today?
-
Originally posted by smitty3268: I know exactly how compilers work; I created a toy one once for a college class. Technically, the optimization passes generally operate on an IR (usually a tree structure) before the assembly is output at the end of the process.
You are correct that I should have said "non-optimized" (-O0) rather than "debug" (-g). Other than that, I think we agree? Or else I'm missing the point of your comment.
Every optimization pass is extra complexity in the compiler, one more thing for it to possibly screw up, and less control in the hands of the original coder. That is obviously a very good thing overall, but it's exactly the opposite of what the original poster claimed would make C more reliable than C++.
And at the end of the day, C and C++ are old enough to avoid surprises, so once you code properly the binary quality is almost or exactly the same these days. Unless, of course, you use a new flag that isn't old enough or tested enough to be safe, but then you can't blame the compiler. So yes, you are right: these days it's a developer problem, not a compiler one.
Well, in the old days you could bring gcc to its knees with certain bugs, but since the 4.x series it's really hard to find a compiler bug bad enough to produce garbage in the binary, assuming the code is well written, of course.
So sorry, I misunderstood your comment. My bad.
-
Originally posted by jrch2k8: Well, I don't know how well you understand how compilation works in low-level (compiled) languages, but no matter what options you pass to the compiler, it always emits assembly in the end, because the processor can't understand anything else. Assembly is the closest you will ever get to talking to the CPU while keeping some sort of human readability.
Let me explain a bit. The options you pass to the compiler are like wrappers: automatic optimizations the compiler performs on the assembly output to improve certain aspects of the code, like performance, memory alignment, platform compatibility, compilation speed, etc. So -g (debug) just tells the compiler to avoid certain optimizations and adjustments to the resulting assembly, so that you get better information from a debugger, profiler, or any other tool you use to inspect the resulting code.
Take the option -funroll-loops, for example: it won't generate some alien type of super-code; it just tells the compiler to expand your loops (for/while/etc.) so you don't have to do it manually in the C/C++ source, keeping the code more readable while gaining some performance.
Mono does this too, but in its core libraries. Your C# code doesn't talk to the CPU directly like C/C++/D/Pascal/COBOL/etc.; instead it is compiled to an intermediate representation that the Mono runtime (same with Windows .NET) translates to machine code. That is why it's also known as a managed language and why you need a runtime set of tools: the runtime and JIT are what generate the actual machine code to talk to the CPU on the fly, and that is why Mono can use LLVM, a real compiler backend.
With managed languages it's easier to debug the code, because you never deal with the CPU directly, only with how the runtime handles the intermediate representation you wrote in C# to create the respective calls into the actual libraries that talk to the CPU. Since those libraries are quite standard, after some time of debugging the runtime project you can easily predict when the coder is screwing things up, or even do some last-second optimization, at the expense of much lower runtime performance compared to C/C++, which goes straight to the CPU.
You are correct that I should have said "non-optimized" (-O0) rather than "debug" (-g). Other than that, I think we agree? Or else I'm missing the point of your comment.
Every optimization pass is extra complexity in the compiler, one more thing for it to possibly screw up, and less control in the hands of the original coder. That is obviously a very good thing overall, but it's exactly the opposite of what the original poster claimed would make C more reliable than C++.
-
Originally posted by smitty3268: LOL.
I guess you compile all your C programs in debug mode to make them more reliable, then, huh? Because all those optimization passes complicate the code and the compiler by about 1000 times compared to a simple unoptimized compilation. You get more control as well, since the compiler isn't rewriting your code into a more optimized form, but just translating it straight to assembly.
Let me explain a bit. The options you pass to the compiler are like wrappers: automatic optimizations the compiler performs on the assembly output to improve certain aspects of the code, like performance, memory alignment, platform compatibility, compilation speed, etc. So -g (debug) just tells the compiler to avoid certain optimizations and adjustments to the resulting assembly, so that you get better information from a debugger, profiler, or any other tool you use to inspect the resulting code.
Take the option -funroll-loops, for example: it won't generate some alien type of super-code; it just tells the compiler to expand your loops (for/while/etc.) so you don't have to do it manually in the C/C++ source, keeping the code more readable while gaining some performance.
Mono does this too, but in its core libraries. Your C# code doesn't talk to the CPU directly like C/C++/D/Pascal/COBOL/etc.; instead it is compiled to an intermediate representation that the Mono runtime (same with Windows .NET) translates to machine code. That is why it's also known as a managed language and why you need a runtime set of tools: the runtime and JIT are what generate the actual machine code to talk to the CPU on the fly, and that is why Mono can use LLVM, a real compiler backend.
With managed languages it's easier to debug the code, because you never deal with the CPU directly, only with how the runtime handles the intermediate representation you wrote in C# to create the respective calls into the actual libraries that talk to the CPU. Since those libraries are quite standard, after some time of debugging the runtime project you can easily predict when the coder is screwing things up, or even do some last-second optimization, at the expense of much lower runtime performance compared to C/C++, which goes straight to the CPU.
-
Originally posted by NoEffex: If there's a bug in the C++ compiler then it will crash.
Basically, it's because there's more control. Simplicity equals (or has a lot to do with) reliability, 99999 times out of 100000.
I guess you compile all your C programs in debug mode to make them more reliable, then, huh? Because all those optimization passes complicate the code and the compiler by about 1000 times compared to a simple unoptimized compilation. You get more control as well, since the compiler isn't rewriting your code into a more optimized form, but just translating it straight to assembly.