
D Language Still Showing Promise, Advancements


  • #31
    So how long before someone starts a reimplementation of the Linux kernel using D?



    • #32
      Originally posted by varikonniemi:
      So how long before someone starts a reimplementation of the Linux kernel using D?
      So, when do you start?



      • #33
        Originally posted by GreatEmerald:
        Have you watched the presentation I linked to? Because there it was suggested that it should be possible to define how the GC works, either by environment variables or compiler switches.
        He also missed the complexity of implementing a compiler. Adding GC makes the job that much more difficult for someone trying to independently implement D. Dramatically more difficult.

        In my experience, memory management in C++ has been a non-issue for well over 10 years. The introduction of smart pointers and the wider use of RAII seem to have pretty much killed it as a problem. And people still keep beating this drum.
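
        (For the D-curious: the same deterministic-cleanup idioms exist in D as well. A minimal sketch, assuming nothing beyond the standard library, using struct destructors for RAII and std.typecons.RefCounted as a rough analogue of a shared smart pointer:)

        Code:
        import std.stdio;
        import std.typecons : RefCounted;

        // RAII in D: a struct destructor runs deterministically when the
        // value goes out of scope, with no help from the GC.
        struct Buffer
        {
            string name;
            ~this() { writeln("releasing ", name); }
        }

        void main()
        {
            {
                auto b = Buffer("scratch"); // acquired here
            }                               // released here, deterministically

            // Rough analogue of a shared smart pointer: the payload is
            // freed when the last copy of the wrapper is destroyed.
            auto rc = RefCounted!int(42);
            auto rc2 = rc;                  // reference count goes to 2
            writeln(rc2 == 42);             // alias this forwards to the payload
        }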



        • #34
          Originally posted by plonoma:
          Love the concept of ranges and slices in D.
          Although using a syntax with an open interval for the upper limit comes across as a little non-intuitive and strange to me.
          It is actually very intuitive once you begin to use it.

          Code:
          auto b = a[4..a.length];
          D also provides $ as the length inside a slice expression, so this can be rewritten:

          Code:
          auto b = a[4..$];
          If D had chosen an inclusive upper bound, that slice would run out of bounds, and that just isn't intuitive when you read it.
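
          (A minimal runnable sketch of that behaviour, with an illustrative array a:)

          Code:
          void main()
          {
              int[] a = [0, 1, 2, 3, 4, 5];

              auto b = a[4..$];     // $ is a.length; the upper bound is excluded
              assert(b == [4, 5]);  // takes the tail without running past the end

              // With an inclusive upper bound, the same "take the tail"
              // slice would have to be spelled a[4..$ - 1] to stay in bounds.
          }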



          • #35
            Originally posted by bnolsen:
            He also missed the complexity of implementing a compiler. Adding GC makes the job that much more difficult for someone trying to independently implement D. Dramatically more difficult.

            In my experience, memory management in C++ has been a non-issue for well over 10 years. The introduction of smart pointers and the wider use of RAII seem to have pretty much killed it as a problem. And people still keep beating this drum.
            People are "beating this drum" because it is true. Ref-counting has issues. Many issues. It is expensive (both in memory and in CPU) and can have hidden leaks (reference cycles, for one).

            Another part you seem to forget: ref-counting works really well only when you know your codebase, and it gets even worse on multi-core, where the atomic updates to the reference counts become a real burden. Yes, a lot of software today is multi-core, what a surprise. And while a GC gets faster as you add cores, ref-counting gets slower, so the "solved 10 years ago" claim hasn't kept up with the multi-core era.

            Finally, there is proof that C++ did not "solve" these issues long ago: code-analysis tools still find plenty of memory errors here and there (there are huge pieces of software full of null-pointer dereferences).
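
            (To make the multi-core cost concrete, a minimal sketch of what a thread-safe reference count must do on every copy and destroy; addRef/release are illustrative names, not any real library's API:)

            Code:
            import core.atomic : atomicOp;

            // Illustrative shared reference count: every ownership change,
            // from any thread, is an atomic read-modify-write on the same
            // contended word.
            shared size_t refs = 1;

            void addRef()
            {
                atomicOp!"+="(refs, 1);
            }

            bool release()
            {
                // atomicOp returns the updated value; the last owner sees zero.
                return atomicOp!"-="(refs, 1) == 0;
            }

            void main()
            {
                addRef();
                assert(!release()); // back down to 1
                assert(release());  // hits 0: the last owner frees the payload
            }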
            Last edited by ciplogic; 20 June 2013, 02:29 PM.



            • #36
              Most people who use [] are used to the maths notation learned in school for inclusive and exclusive ranges: ]1,2[ means the range from one to two, including neither one nor two.

              You say it's intuitive.
              But can you tell the difference between intuitive (truly easy to use) and familiar (easy because you're used to it)?
              If you're used to C/C++, you're used to working with length and the like.

              If I have multiple slices, they can have overlapping numbers even though the slices themselves don't overlap:
              Code:
              auto piece1 = arrayB[0..5];
              auto piece2 = arrayB[5..8];
              auto piece3 = arrayB[8..12];
              (^^Looks kinda weird, doesn't it? With the current definition the slices don't overlap, but it doesn't look that way.)

              At a glance you would think they overlap, which could lead to errors.
              This just does not look right.
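
              (For what it's worth, a quick check confirms the three pieces tile arrayB exactly, with no overlap, despite the repeated boundary numbers:)

              Code:
              import std.stdio;

              void main()
              {
                  auto arrayB = new int[12];
                  foreach (i, ref e; arrayB)
                      e = cast(int) i;

                  auto piece1 = arrayB[0..5];   // elements 0 to 4
                  auto piece2 = arrayB[5..8];   // elements 5 to 7
                  auto piece3 = arrayB[8..12];  // elements 8 to 11

                  // Concatenating the pieces reproduces the array exactly:
                  // no element is missing and none appears twice.
                  assert(piece1 ~ piece2 ~ piece3 == arrayB);
                  writeln("no overlap, no gaps");
              }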

              If D had instead used the inclusive form, you could use something else, for example a highestValidIndex property:
              Code:
              auto b = a[4..a.highestValidIndex];
              Or anything else that yields length-1:
              Code:
              auto b = a[4..a.length-1];

              If D then defined $ inside a slice as length-1 (the number of elements minus one, i.e. the highest valid index), the rewritten form:
              Code:
              auto b = a[4..$];
              would look exactly the same.

              It would also be nice if the highest valid index were the default, so that:
              Code:
              auto b = a[4..];
              would mean the same as:
              Code:
              auto b = a[4..a.highestValidIndex];
              Last edited by plonoma; 20 June 2013, 02:57 PM.



              • #37
                Originally posted by ciplogic:
                People are "beating this drum" because it is true. Ref-counting has issues. Many issues. It is expensive (both in memory and in CPU) and can have hidden leaks (reference cycles, for one).

                Another part you seem to forget: ref-counting works really well only when you know your codebase, and it gets even worse on multi-core, where the atomic updates to the reference counts become a real burden. Yes, a lot of software today is multi-core, what a surprise. And while a GC gets faster as you add cores, ref-counting gets slower, so the "solved 10 years ago" claim hasn't kept up with the multi-core era.

                Finally, there is proof that C++ did not "solve" these issues long ago: code-analysis tools still find plenty of memory errors here and there (there are huge pieces of software full of null-pointer dereferences).
                Could have fooled me. I work on highly multithreaded C++ code for high-throughput sensor processing (cameras, lidar, etc.) which runs on Windows and Linux. We've only been able to test it with 32 cores on Intel and 64 on AMD, though.



                • #38
                  Originally posted by bnolsen:
                  Could have fooled me. I work on highly multithreaded C++ code for high-throughput sensor processing (cameras, lidar, etc.) which runs on Windows and Linux. We've only been able to test it with 32 cores on Intel and 64 on AMD, though.
                  Eh, since with SMT especially, the more active clock cycles you have, the less latency you see, it's quite possible you're hitting a sweet spot where a GC processing the sensor data inefficiently would raise the active cycles, causing bus and RAM latency to drop, and thus making the sensor input, and even heap/stack memory operations, go faster.
                  I've seen something similar happen once with database transactions, where poor power-saving behaviour would clock the CPU down, causing leaking code to run faster than good code because the RAM clocked up. But it's not limited to power management alone... As you can imagine, debugging this was crazy, but not nearly as crazy as demonstrating it to everyone in a minimal example.



                  • #39
                    Originally posted by plonoma:
                    Code:
                    auto piece1 = arrayB[0..5];
                    auto piece2 = arrayB[5..8];
                    auto piece3 = arrayB[8..12];
                    (^^Looks kinda weird, doesn't it? With the current definition the slices don't overlap, but it doesn't look that way.)
                    That isn't how you write code in practice. But you're right, the intuitiveness isn't really there.

                    Code:
                    // countUntil is from std.algorithm; it returns how far into
                    // the slice the first match for any of the needles appears.
                    auto add = countUntil(line[count..$], quote, sep, recordBreak);
                    count += add;
                    ans = line[0..count];
                    line = line[count..$];
                    I'd say neither form is intuitive. Some will expect one way, others the other. What I do know is that it takes no time to learn and use once you understand what open/closed-ended means.

                    Related: http://stackoverflow.com/questions/4...ot-include-end



                    • #40
                      Originally posted by c117152:
                      Eh, since with SMT especially, the more active clock cycles you have, the less latency you see, it's quite possible you're hitting a sweet spot where a GC processing the sensor data inefficiently would raise the active cycles, causing bus and RAM latency to drop, and thus making the sensor input, and even heap/stack memory operations, go faster.
                      I've seen something similar happen once with database transactions, where poor power-saving behaviour would clock the CPU down, causing leaking code to run faster than good code because the RAM clocked up. But it's not limited to power management alone... As you can imagine, debugging this was crazy, but not nearly as crazy as demonstrating it to everyone in a minimal example.
                      The software isn't real-time; it's post-processing, after the sensor data has been moved out of proprietary formats.

                      We're only using C++ with thread-safe smart pointers (we haven't switched to std::shared_ptr just yet). The code is pretty heavily stack-based and avoids inheritance, except for some different paged file formats. I can only imagine how badly a GC language could trash what I'm doing. But I'm usually not into jacking around with tuning stuff like this; mostly it's implementing algorithms and equations, and occasionally debugging stupid race conditions like accidentally passing a temporary in a message.

                      Compared to all our competitors, this stuff scales and runs quite fast.

