D Language Still Showing Promise, Advancements

  • mrugiero
    replied
    Originally posted by plonoma View Post
    The hating thing was mentioned on the stackoverflow question about the exclusive range.
    In bigger statements, being able to have one less () pair can make a good difference in readability.
    Lots of round brackets can make things more difficult to read.
    I'm aware, but since I don't hate any of the ways to put it, >= fits me well and lets me take these brackets out too.
    Of course, if someone hates it, it's good that they have a solution out there.



  • plonoma
    replied
    The hating thing was mentioned on the stackoverflow question about the exclusive range.
    In bigger statements, being able to have one less () pair can make a good difference in readability.
    Lots of round brackets can make things more difficult to read.
    Last edited by plonoma; 21 June 2013, 06:37 PM.



  • mrugiero
    replied
    Originally posted by plonoma View Post
    In addition, keeping the two as separate types would allow the most optimization in the algorithms used by compilers.
    (Also, one of the reasons programmers hate using <= instead of <: rather than having to use <=, programmers could use !>.)
    (Wonderful operators, the !> and !< ones.)
    You could just as well do !(x>y). Personally, I don't hate either of those.



  • plonoma
    replied
    Say .. is kept and we introduce a .... for inclusive ranges, plus a rangein(a,b) function? (Three dots in D are already used for what the params keyword does in C++.)
    Would this work for you?

    Code:
    auto rangeExclu = a[1..11];
    auto rangeInclu = a[1....10];
    These two are the same range.
    In addition, keeping the two as separate types would allow the most optimization in the algorithms used by compilers.
    (Also, one of the reasons programmers hate using <= instead of <: rather than having to use <=, programmers could use !>.)
    (Wonderful operators, the !> and !< ones.)
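    A rough C++ sketch of the proposed equivalence (the `....` syntax is hypothetical; here the two slicing conventions are modeled as plain helper functions with illustrative names):

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Models the existing a[lo..hi] slice: the upper bound is excluded.
    std::vector<int> slice_exclusive(const std::vector<int>& a,
                                     std::size_t lo, std::size_t hi) {
        return {a.begin() + lo, a.begin() + hi};     // hi not included
    }

    // Models the proposed a[lo....hi] slice: the upper bound is included.
    std::vector<int> slice_inclusive(const std::vector<int>& a,
                                     std::size_t lo, std::size_t hi) {
        return {a.begin() + lo, a.begin() + hi + 1}; // hi included
    }

    int main() {
        std::vector<int> a;
        for (int i = 0; i < 20; ++i) a.push_back(i);
        // a[1..11] and a[1....10] would name the same range of elements:
        assert(slice_exclusive(a, 1, 11) == slice_inclusive(a, 1, 10));
    }
    ```

    Since `slice_inclusive(a, lo, hi)` is just `slice_exclusive(a, lo, hi + 1)`, the two forms carry the same information; the proposal is purely about which spelling reads better at the call site.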



  • zanny
    replied
    Originally posted by c117152 View Post
    Using C++ for stack-dominant code that avoids inheritance... Oh man, you're practically not even using C++, just a tiny subset with an oversized compiler. No wonder you don't like the GC; you're a closet C programmer.

    But hey, you're probably right. Your use case is probably inappropriate for D. The thing is, it's equally wrong for C++. You're using a gigantic compiler with a mostly unused standard library and a whole pile of missed optimization opportunities, since it's feature-focused like all C++ compilers are. Worse, there's that massive ugly runtime... And the stack dumps! Did you ever compare C++ dumps to C? H0ly sh1t...

    Generally speaking about garbage collectors, D is used in game engines and golang is used in Google's and Mozilla's services, so GCs are not a problem for most use cases. Just use the right tools for the job.

    tl;dr no Lisp machines for you!
    OO (without inheritance), templates, exceptions, the STL, namespacing, lambdas, and references are all wonderful C++ features that aren't polymorphism or new/delete/smart pointers.



  • c117152
    replied
    Using C++ for stack-dominant code that avoids inheritance... Oh man, you're practically not even using C++, just a tiny subset with an oversized compiler. No wonder you don't like the GC; you're a closet C programmer.

    But hey, you're probably right. Your use case is probably inappropriate for D. The thing is, it's equally wrong for C++. You're using a gigantic compiler with a mostly unused standard library and a whole pile of missed optimization opportunities, since it's feature-focused like all C++ compilers are. Worse, there's that massive ugly runtime... And the stack dumps! Did you ever compare C++ dumps to C? H0ly sh1t...

    Generally speaking about garbage collectors, D is used in game engines and golang is used in Google's and Mozilla's services, so GCs are not a problem for most use cases. Just use the right tools for the job.

    tl;dr no Lisp machines for you!



  • plonoma
    replied
    Originally posted by he_the_great View Post
    That isn't how you write code. But you're right, intuitive isn't really there.

    Code:
    auto add = countUntil(line[count..$], quote, sep, recordBreak);
    count += add;
    ans = line[0..count];
    line = line[count..$];
    I'd say neither would be intuitive. Some will expect one way, others will expect another. What I do know is that it takes no time to learn and use once you understand what open/closed ended means.

    Related: http://stackoverflow.com/questions/4...ot-include-end
    Found a way to solve this reasonably easily.
    It would be nice if you could somehow use both.
    Say .. is kept and we introduce a .... for inclusive ranges, plus a rangein(a,b) function? (Three dots in D are already used for what the params keyword does in C++.)
    Would this work for you?

    Code:
    auto rangeExclu = a[1..11];
    auto rangeInclu = a[1....10];
    These two are the same range.


    About understanding: understanding this stuff does not cost me any noticeable effort; I got it almost instantly.

    Making something that's merely easy to understand just isn't good enough for me.
    Whether it works intuitively is a much more interesting question, and maybe the actual question we try to solve with higher-level programming languages.
    How do we make a language that is constructed, as much as possible, to be intuitive to use?
    Not familiarity, but ease across the board. Familiarity can be very deceiving and has led many to create contraptions/pieces of crap.
    Also flexibility. Very important points here are things like the choice of type names.
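    For what it's worth, the half-open `..` convention in the countUntil snippet above has one concrete advantage: `line[0..count]` and `line[count..$]` partition the buffer with no overlap and no off-by-one arithmetic. A small C++ sketch of the same split (std::string::substr plays the role of D slicing here):

    ```cpp
    #include <cassert>
    #include <string>

    int main() {
        std::string line = "field1,rest of the line";
        std::size_t count = line.find(',');   // analogous to countUntil

        // Half-open slices: [0, count) and [count, size) cover the whole
        // string exactly once, and the lengths simply add up.
        std::string ans  = line.substr(0, count);  // like line[0..count]
        std::string rest = line.substr(count);     // like line[count..$]

        assert(ans + rest == line);
        assert(ans.size() + rest.size() == line.size());
    }
    ```

    With an inclusive upper bound, the same split would need a `count - 1` / `count` pair, which is exactly the kind of adjustment people get wrong.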
    Last edited by plonoma; 21 June 2013, 10:19 AM.



  • zanny
    replied
    He didn't seem to advocate using smart pointers extensively, but just for cases of shared instances. So maybe the common wisdom is that the reference-counting cost in C++ should be avoided as much as possible by not using smart pointers extensively.

    I prefer Herb Sutter's approach: pervasively use smart pointers, and with C++14's make_unique, *never* create an object with new, ever. unique_ptr is as efficient as a raw pointer for everything it does and has at most a few bytes of memory overhead. And make everything unique (while preferring to pass ownership with std::move) until you need shared.

    You should never even need a raw pointer (except to interact with C code). If you are in a situation where you want to pass a variable by reference, just use a reference rather than a raw pointer. You even have weak_ptr for temporary usage of a shared_ptr.

    If you never have a new (or Xalloc), you can guarantee you never leak memory. The worst case is that you have a pointer variable whose object you want to delete while rendering the pointer null; you can still call reset() on the smart pointer, with no argument or with nullptr, to delete the managed object.
    Last edited by zanny; 21 June 2013, 09:07 AM.
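    A minimal sketch of that approach using only standard C++14 facilities (make_unique, std::move, shared_ptr/weak_ptr, reset; the Widget type and make_widget name are just for illustration):

    ```cpp
    #include <cassert>
    #include <memory>
    #include <utility>

    struct Widget { int value = 42; };

    // Ownership is created and transferred without any raw new/delete.
    std::unique_ptr<Widget> make_widget() {
        return std::make_unique<Widget>();   // C++14; never a bare new
    }

    int main() {
        auto w = make_widget();
        auto w2 = std::move(w);              // pass ownership explicitly
        assert(!w && w2->value == 42);       // w is now empty

        // Promote to shared only when sharing is actually needed.
        std::shared_ptr<Widget> s = std::move(w2);
        std::weak_ptr<Widget> weak = s;      // temporary, non-owning use
        assert(!weak.expired());

        s.reset();                           // last owner gone: object destroyed
        assert(weak.expired());              // no leak, no dangling access
    }
    ```

    The weak_ptr never keeps the object alive; it only lets you check, via expired() or lock(), whether the shared object still exists.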



  • ciplogic
    replied
    Originally posted by bnolsen View Post
    The software isn't real-time; it's post-processing after the sensor data has been moved out of proprietary formats.

    Only using C++ with thread-safe smart pointers (haven't switched to std::shared_ptr just yet). The code is pretty heavily stack-based and avoids inheritance except for some different paged file formats. I couldn't imagine how badly a GC language could trash what I'm doing. But I'm usually not into jacking around with tuning stuff like this; mostly it's implementing algorithms and equations, and occasionally debugging stupid race conditions like accidentally passing a temporary in a message.

    Compared to all our competitors this stuff scales and runs quite fast.
    Did I read that right? "The code is pretty heavily stack based and avoids inheritance except for some different paged file formats." So your smart pointers work for your internal code (meaning it is really unlikely that an external component keeps a +1 reference to your data structures). Also, "heavily stack based", if my knowledge of C++ helps, basically means that you don't use the solution you advertise, smart pointers (reference counting), extensively; you most likely use them between modules, not pervasively (as a GC-based implementation would, right?). I worked on a piece of software which used reference counting extensively (via Boost's smart pointers), and that part could reach 10-15% of profiled time. Also, the leaks were mostly avoided by a huge review of the code and many iterations and tests with huge data sets (so leaks, even when they occurred, would blow up the application fast).

    At last, did you see the C++11 talk by the C++ creator, Stroustrup? (here: http://channel9.msdn.com/Events/Goin...-Design-for-C- )

    He didn't seem to advocate using smart pointers extensively, but just for cases of shared instances. So maybe the common wisdom is that the reference-counting cost in C++ should be avoided as much as possible by not using smart pointers extensively.

    Compare this with .NET/Java's GC at least (and to some extent with D's): you don't care this much about the GC, you don't pay a performance penalty for iterating over a collection of references if you don't do any allocation, and you don't have to declare your references const& almost everywhere because there might be a performance issue.

    So, as for your application: how many of the algorithms you've been using care about ref-counting? I wrote a small application which generates C++ code (you know, these code generators), and it used smart pointers. Before I started to optimize the reference-count copying and initialization, the application ran roughly 10 times slower (so 9/10 of the time was basically incrementing and decrementing reference counts). Sure, this is the worst of the worst for a code generator (so I don't blame ref-counting in itself), but the same (unoptimized) code, had the generated output been Java, would maybe have been 8 times faster (I never rewrote the code to emit Java, but this is a fair estimate).
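    The copy overhead described above is easy to see in miniature: every by-value copy of a shared_ptr does an atomic increment and decrement of the control block, while borrowing it by const reference touches the count not at all. A small sketch (the Node type and function names are just for illustration):

    ```cpp
    #include <cassert>
    #include <memory>

    struct Node { int payload = 0; };

    // Copies the shared_ptr: bumps the atomic refcount on entry,
    // drops it again on exit. This is the churn being profiled above.
    long use_by_value(std::shared_ptr<Node> n) { return n.use_count(); }

    // Borrows it by const reference: no refcount traffic at all.
    long use_by_ref(const std::shared_ptr<Node>& n) { return n.use_count(); }

    int main() {
        auto n = std::make_shared<Node>();
        assert(use_by_value(n) == 2);  // the parameter copy raises the count
        assert(use_by_ref(n) == 1);    // borrowing leaves it untouched
    }
    ```

    In hot loops this is why code bases that use shared_ptr pervasively end up passing `const shared_ptr&` (or a raw reference to the pointee) almost everywhere, which matches the const& habit mentioned in the post above.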



  • bnolsen
    replied
    Originally posted by c117152 View Post
    Eh, since in SMT especially, the more active clock cycles you have, the less latency you get, it's quite possible you're hitting the sweet spot where the GC is processing the sensor data inefficiently, raising the cycle count and thereby dropping the bus and RAM latency, which makes the sensor input and even heap/stack memory operations go faster.
    I've seen something similar happen once in database transactions, where the power saving would cycle the CPU down poorly, causing leaky code to run faster than good code since the RAM cycled up. But it's not limited to power management alone... As you can imagine, debugging this was crazy, but not nearly as crazy as demonstrating it to everyone in a minimal example.
    The software isn't real-time; it's post-processing after the sensor data has been moved out of proprietary formats.

    Only using C++ with thread-safe smart pointers (haven't switched to std::shared_ptr just yet). The code is pretty heavily stack-based and avoids inheritance except for some different paged file formats. I couldn't imagine how badly a GC language could trash what I'm doing. But I'm usually not into jacking around with tuning stuff like this; mostly it's implementing algorithms and equations, and occasionally debugging stupid race conditions like accidentally passing a temporary in a message.

    Compared to all our competitors this stuff scales and runs quite fast.

