
D Language Still Showing Promise, Advancements


  • he_the_great
    replied
    Originally posted by plonoma View Post
    Code:
    auto piece1 = arrayB[0..5];
    auto piece2 = arrayB[5..8];
    auto piece3 = arrayB[8..12];
    (^^Looks kind of weird, doesn't it? With the current definition the slices don't overlap, but it doesn't look that way.)
    That isn't how you'd actually write code. But you're right, it isn't exactly intuitive at a glance.

    Code:
    import std.algorithm : countUntil; // how far until any of the needles appears
    auto add = countUntil(line[count..$], quote, sep, recordBreak);
    count += add;
    ans = line[0..count];
    line = line[count..$];
    I'd say neither convention is intuitive; some people expect one, others expect the other. What I do know is that it takes no time to learn and use once you understand what open/closed-ended means.
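    To see that the slices from the quoted example neither overlap nor leave a gap, here is a small self-contained check (plain D, compiles and passes as-is):
    Code:
    void main() {
        auto arrayB = new int[12];
        foreach (i, ref e; arrayB) e = cast(int) i;

        auto piece1 = arrayB[0 .. 5];   // elements 0..4
        auto piece2 = arrayB[5 .. 8];   // elements 5..7
        auto piece3 = arrayB[8 .. 12];  // elements 8..11

        // adjacent half-open slices tile the array exactly:
        assert(piece1.length + piece2.length + piece3.length == arrayB.length);
        assert(piece1 ~ piece2 ~ piece3 == arrayB);
    }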

    Related: http://stackoverflow.com/questions/4...ot-include-end



  • c117152
    replied
    Originally posted by bnolsen View Post
    Could have fooled me. I work on highly multithreaded C++ code for high-throughput sensor processing (cameras, lidar, etc.) which runs on Windows and Linux. We've only been able to test it with 32 cores on Intel and 64 on AMD, though.
    Eh, with SMT especially, the more active clock cycles you have, the lower your effective latency. So it's quite possible you're hitting a sweet spot where the GC processes the sensor data inefficiently, which raises the cycle count, which in turn lowers the bus and RAM latency, making the sensor input and even heap/stack memory operations go faster.
    I've seen something similar happen once with database transactions, where the power-saving governor would clock the CPU down poorly, causing leaky code to run faster than good code because the RAM stayed clocked up. And it's not limited to power management alone... As you can imagine, debugging this was crazy, but not nearly as crazy as demonstrating it to everyone in a minimal example.



  • bnolsen
    replied
    Originally posted by ciplogic View Post
    People are "beating this drum" because it is true. Ref-counting has issues. Many issues. It is expensive (both memory- and CPU-wise) and can have hidden leaks.

    Another part you seem to forget: ref-counting works really well only if you know your codebase, and it gets even worse on multi-core, where atomically updating the reference counts can be a real burden. Yes, today a lot of software is multi-core, what a surprise. And while a GC gets faster as you add cores, ref-counting gets slower, so the "solved for 10 years" claim hasn't kept up with the multi-core era.

    Lastly, there is proof that C++ did not "solve" these issues a long, long time ago: code-analysis tools still find plenty of memory errors here and there (there are huge pieces of software full of accesses to address zero/NULL).
    Could have fooled me. I work on highly multithreaded C++ code for high-throughput sensor processing (cameras, lidar, etc.) which runs on Windows and Linux. We've only been able to test it with 32 cores on Intel and 64 on AMD, though.



  • plonoma
    replied
    Most people who use [] are used to the math notation for inclusive and exclusive ranges learned in school: ]1,2[ means the range from one to two, excluding both one and two. (In those terms, D's a[i..j] is the half-open interval [i, j).)

    You say it's intuitive.
    Can you tell the difference between intuitive (truly easy to use) and familiar (easy only because you're used to it)?
    If you're used to C/C++, you're used to working with lengths and the like.

    If I have multiple slices, their bounds can share numbers even though the slices don't overlap:
    Code:
    auto piece1 = arrayB[0..5];
    auto piece2 = arrayB[5..8];
    auto piece3 = arrayB[8..12];
    (^^Looks kind of weird, doesn't it? With the current definition the slices don't overlap, but it doesn't look that way.)

    At a glance you would think they overlap, which could lead to errors.
    It just does not look right.

    If the inclusive form were used instead, you would need something else for the upper bound.
    You could use a highestValidIndex:
    Code:
    auto b = a[4..a.highestValidIndex]
    Or something else that provides length-1:
    Code:
    auto b = a[4..a.length-1]

    Then let D's $ inside a slice mean length-1 (the number of elements minus one, i.e. the highest valid index), so rewritten:
    Code:
    auto b = a[4..$]
    It looks the same as before.

    It would be nice if the highest valid index were the default upper bound, so that:
    Code:
    auto b = a[4..]
    means the same as:
    Code:
    auto b = a[4..a.highestValidIndex]
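    For what it's worth, D already lets a user-defined type choose its own meaning for $ via opDollar (for built-in arrays $ is fixed to length). A toy wrapper, hypothetical name InclusiveArray, where $ is the highest valid index and the upper bound is inclusive, might look like:
    Code:
    struct InclusiveArray {
        int[] data;
        size_t opDollar() const { return data.length - 1; }            // $ == last valid index
        int[] opSlice(size_t i, size_t j) { return data[i .. j + 1]; } // inclusive upper bound
    }

    unittest {
        auto a = InclusiveArray([10, 20, 30, 40]);
        assert(a[1 .. $] == [20, 30, 40]); // $ == 3, and index 3 is included
    }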
    Last edited by plonoma; 20 June 2013, 02:57 PM.



  • ciplogic
    replied
    Originally posted by bnolsen View Post
    He also missed the complexity of implementing a compiler. Adding a GC makes the job that much more difficult for someone trying to independently implement D. Dramatically more difficult.

    In my experience, memory management in C++ has been a non-issue for well over 10 years. The introduction of smart pointers and wider use of RAII seems to have pretty much killed it as an issue. And people still keep beating this drum.
    People are "beating this drum" because it is true. Ref-counting has issues. Many issues. It is expensive (both memory- and CPU-wise) and can have hidden leaks.

    Another part you seem to forget: ref-counting works really well only if you know your codebase, and it gets even worse on multi-core, where atomically updating the reference counts can be a real burden. Yes, today a lot of software is multi-core, what a surprise. And while a GC gets faster as you add cores, ref-counting gets slower, so the "solved for 10 years" claim hasn't kept up with the multi-core era.

    Lastly, there is proof that C++ did not "solve" these issues a long, long time ago: code-analysis tools still find plenty of memory errors here and there (there are huge pieces of software full of accesses to address zero/NULL).
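    To make the multi-core point concrete, here is a minimal sketch of shared ref-counting (made-up names, not D's actual runtime): every copy and every destruction is an atomic read-modify-write on the same hot cache line, which all cores end up fighting over.
    Code:
    import core.atomic : atomicOp;

    struct Payload { shared size_t refs = 1; /* ...the actual data... */ }

    void addRef(Payload* p)  { atomicOp!"+="(p.refs, 1); }              // on every copy
    bool release(Payload* p) { return atomicOp!"-="(p.refs, 1) == 0; }  // true => last owner, free it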
    Last edited by ciplogic; 20 June 2013, 02:29 PM.



  • he_the_great
    replied
    Originally posted by plonoma View Post
    Love the concept of ranges and slices in D.
    Although the open-interval syntax for the upper limit comes across as a little unintuitive and strange to me.
    It is actually very intuitive when you begin to use it.

    Code:
    auto b = a[4..a.length]
    D also provides $ as the length inside a slice, so rewritten:

    Code:
    auto b = a[4..$]
    If the upper bound had been chosen to be inclusive, that slice would go out of bounds, and that just isn't intuitive when you read it.
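    A quick self-contained illustration of why the half-open convention holds up (plain D, compiles and passes as-is):
    Code:
    void main() {
        auto a = [1, 2, 3, 4, 5, 6];
        auto b = a[4 .. a.length];        // fine: the end index may equal the length
        assert(b == [5, 6]);
        assert(b.length == a.length - 4); // lengths are just end minus start
        assert(a[3 .. 3].length == 0);    // the empty slice is representable
        // with an inclusive upper bound, a[4 .. a.length] would read past the end
    }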



  • bnolsen
    replied
    Originally posted by GreatEmerald View Post
    Have you watched the presentation I linked to? Because there it was suggested that it should be possible to define how the GC works, either by environment variables or compiler switches.
    He also missed the complexity of implementing a compiler. Adding a GC makes the job that much more difficult for someone trying to independently implement D. Dramatically more difficult.

    In my experience, memory management in C++ has been a non-issue for well over 10 years. The introduction of smart pointers and wider use of RAII seems to have pretty much killed it as an issue. And people still keep beating this drum.
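    For what it's worth, the same deterministic-cleanup idiom is expressible in D without touching the GC. A minimal sketch (hypothetical Buffer type) of unique RAII ownership, roughly what a C++ unique_ptr buys you:
    Code:
    import core.stdc.stdlib : malloc, free;

    struct Buffer {
        void* p;
        this(size_t n) { p = malloc(n); }
        @disable this(this);          // forbid copies, so ownership stays unique
        ~this() { if (p) free(p); }   // released deterministically, no GC involved
    }

    void use() {
        auto buf = Buffer(4096);      // acquired here...
    }                                 // ...freed here when buf goes out of scope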



  • ciplogic
    replied
    Originally posted by varikonniemi View Post
    So how long before someone starts a reimplementation of the Linux kernel using D?
    So, when do you start?



  • varikonniemi
    replied
    So how long before someone starts a reimplementation of the Linux kernel using D?



  • GreatEmerald
    replied
    Originally posted by elanthis View Post
    Another problem with GC on that front is that it enforces a singular memory allocator. If you want really fine-grained control of fragmentation, allocation patterns, etc., too damn bad. D and C# make it easier to use things like pools and contiguous memory than something like Java or Python, but they still fall short of C++.
    Have you watched the presentation I linked to? Because there it was suggested that it should be possible to define how the GC works, either by environment variables or compiler switches.
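    Until switches like that exist, one thing today's druntime already allows from code is pausing automatic collections around a latency-critical section (a sketch; hotPath is a made-up name):
    Code:
    import core.memory : GC;

    void hotPath() {
        GC.disable();             // no automatic collections while this runs
        scope(exit) GC.enable();  // re-enabled on every exit path
        // ... latency-critical work ...
        GC.collect();             // optionally collect at a moment we choose
    }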

