D Language Still Showing Promise, Advancements

  • #31
    So how long before someone starts a reimplementation of the Linux kernel using D?



    • #32
      Originally posted by varikonniemi View Post
      So how long before someone starts a reimplementation of the Linux kernel using D?
      So, when do you start?



      • #33
        Originally posted by GreatEmerald View Post
        Have you watched the presentation I linked to? Because there it was suggested that it should be possible to define how the GC works, either by environment variables or compiler switches.
        He also missed the complexity of implementing a compiler. Adding a GC makes the job that much more difficult for anyone trying to independently implement D. Dramatically more difficult.

        In my experience, memory management in C++ has been a non-issue for well over 10 years. The introduction of smart pointers and wider use of RAII has pretty much killed it as a problem. And people still keep beating this drum.
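        To illustrate the RAII point, here is a minimal C++ sketch (the Tracker type and its live counter are made up purely for the demonstration): objects owned by containers and smart pointers are destroyed deterministically at scope exit, with no manual delete anywhere.

        ```cpp
        #include <cassert>
        #include <memory>
        #include <vector>

        int live = 0;  // counts currently-alive Tracker objects

        struct Tracker {
            Tracker() { ++live; }
            ~Tracker() { --live; }
        };

        int main() {
            {
                std::vector<Tracker> v(3);             // RAII container owns 3 objects
                auto t = std::make_unique<Tracker>();  // smart pointer owns 1 more
                assert(live == 4);
            }   // scope exit: t, then v, are destroyed automatically
            assert(live == 0);                         // nothing leaked
            return 0;
        }
        ```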



        • #34
          Originally posted by plonoma View Post
          Love the concept of ranges and slices in D.
          Although using a syntax with an open interval for the upper limit comes across as a little non-intuitive and strange to me.
          It is actually very intuitive once you begin to use it.

          Code:
          auto b = a[4..a.length]
          D also provides $ as the length inside a slice, so this can be rewritten:

          Code:
          auto b = a[4..$]
          If the upper limit had been chosen to be inclusive, your slice would go out of bounds, and that just isn't intuitive when you read it.
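          For comparison, C++ iterator pairs use the same half-open convention; a small standard-library-only sketch:

          ```cpp
          #include <cassert>
          #include <vector>

          int main() {
              std::vector<int> a = {0, 1, 2, 3, 4, 5, 6, 7};

              // Half-open slice equivalent to D's a[4..$]: the begin position is
              // included, the end position is excluded. The end iterator may point
              // one past the last element, so nothing goes out of bounds.
              std::vector<int> b(a.begin() + 4, a.end());

              assert(b.size() == 4);   // length == end - begin, no off-by-one
              assert(b.front() == 4);
              assert(b.back() == 7);
              return 0;
          }
          ```

          The same convenience shows up here: with an exclusive upper bound, the length of a slice is simply end minus begin.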



          • #35
            Originally posted by bnolsen View Post
            He also missed the complexity of implementing a compiler. Adding a GC makes the job that much more difficult for anyone trying to independently implement D. Dramatically more difficult.

            In my experience, memory management in C++ has been a non-issue for well over 10 years. The introduction of smart pointers and wider use of RAII has pretty much killed it as a problem. And people still keep beating this drum.
            People are "beating this drum" because it is true. Ref-counting has issues. Many issues. It is expensive (both memory-wise and CPU-wise) and it can still have hidden leaks.

            Another part you seem to forget: ref-counting works really great if you know your codebase, but it works even worse on multi-core, where the atomic updates to the reference counts can be a real burden. Yes, a lot of software today is multi-core, what a surprise. And while GCs get faster as you add cores, ref-counting gets slower, so the "solved ten years ago" claim has not kept up with the multicore era.

            Finally, there is proof that C++ did not "solve" these issues a long, long time ago: static-analysis tools still find plenty of memory errors here and there (there are huge pieces of software full of null-pointer dereferences).
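            A small C++ sketch of the counting cost described above: every shared_ptr copy and destruction touches the shared reference count, which is an atomic operation in multithreaded builds, while passing by const reference avoids it entirely.

            ```cpp
            #include <cassert>
            #include <memory>

            int main() {
                auto p = std::make_shared<int>(42);
                assert(p.use_count() == 1);
                {
                    std::shared_ptr<int> copy = p;  // increments the shared count
                    assert(p.use_count() == 2);
                }                                   // scope exit decrements it again
                assert(p.use_count() == 1);

                // Passing by const reference generates no count traffic at all.
                auto read = [](const std::shared_ptr<int>& q) { return *q; };
                assert(read(p) == 42);
                assert(p.use_count() == 1);         // no increment happened
                return 0;
            }
            ```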
            Last edited by ciplogic; 06-20-2013, 02:29 PM.



            • #36
              Most people who use [] are used to the math notation learned in school for inclusive and exclusive ranges: ]1,2[ means the range from one to two, excluding both one and two.

              You say it's intuitive.
              But can you tell the difference between intuitive (truly easy to use) and familiar (easy only because you're used to it)?
              If you're used to C/C++, you're used to working with lengths and such.

              If I have multiple slices, they can share boundary numbers even though they don't overlap:
              Code:
              auto piece1 = arrayB[0..5];
              auto piece2 = arrayB[5..8];
              auto piece3 = arrayB[8..12];
              (^^ Looks kind of weird, doesn't it? The slices don't overlap under the current definition, but it doesn't look that way.)

              At a glance you would think they overlap. This could lead to errors.
              It just does not look right.

              If the inclusive form were used instead, you could use something other than length.
              You could use a highestValidIndex:
              Code:
              auto b = a[4..a.highestValidIndex]
              Or anything else that provides length-1:
              Code:
              auto b = a[4..a.length-1]

              Provide D with $ as length-1 (the number of elements minus one, i.e. the highest valid index) inside a slice, and rewritten:
              Code:
              auto b = a[4..$]
              it looks the same.

              It would also be nice if the highest valid index were the default upper bound, so that:
              Code:
              auto b = a[4..]
              means the same as:
              Code:
              auto b = a[4..a.highestValidIndex]
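              A C++ sketch of this inclusive alternative (slice_incl is hypothetical, mirroring the proposed highestValidIndex idea; it is not part of D or any library). Note the +1 it needs internally to talk to half-open iterators:

              ```cpp
              #include <cassert>
              #include <cstddef>
              #include <vector>

              // Hypothetical inclusive slice: copies v[lo] through v[hi], both included.
              std::vector<int> slice_incl(const std::vector<int>& v,
                                          std::size_t lo, std::size_t hi) {
                  return std::vector<int>(v.begin() + lo, v.begin() + hi + 1);
              }

              int main() {
                  std::vector<int> arrayB = {0,1,2,3,4,5,6,7,8,9,10,11};

                  // With inclusive bounds, adjacent pieces no longer share a number:
                  auto piece1 = slice_incl(arrayB, 0, 4);   // indices 0..4
                  auto piece2 = slice_incl(arrayB, 5, 7);   // indices 5..7
                  auto piece3 = slice_incl(arrayB, 8, 11);  // indices 8..11

                  assert(piece1.size() + piece2.size() + piece3.size() == arrayB.size());
                  assert(piece1.back() == 4 && piece2.front() == 5);  // no overlap
                  return 0;
              }
              ```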
              Last edited by plonoma; 06-20-2013, 02:57 PM.



              • #37
                Originally posted by ciplogic View Post
                People are "beating this drum" because it is true. Ref-counting has issues. Many issues. It is expensive (both memory-wise and CPU-wise) and it can still have hidden leaks.

                Another part you seem to forget: ref-counting works really great if you know your codebase, but it works even worse on multi-core, where the atomic updates to the reference counts can be a real burden. Yes, a lot of software today is multi-core, what a surprise. And while GCs get faster as you add cores, ref-counting gets slower, so the "solved ten years ago" claim has not kept up with the multicore era.

                Finally, there is proof that C++ did not "solve" these issues a long, long time ago: static-analysis tools still find plenty of memory errors here and there (there are huge pieces of software full of null-pointer dereferences).
                Could have fooled me. I work on highly multithreaded C++ code for high-throughput sensor processing (cameras, lidar, etc.) which runs on Windows and Linux. We've only been able to test it with 32 cores on Intel and 64 on AMD, though.



                • #38
                  Originally posted by bnolsen View Post
                  Could have fooled me. I work on highly multithreaded C++ code for high-throughput sensor processing (cameras, lidar, etc.) which runs on Windows and Linux. We've only been able to test it with 32 cores on Intel and 64 on AMD, though.
                  Eh, in SMT especially, the more active clock cycles you have, the less latency you get. It's quite possible you're hitting a sweet spot where a GC would process the sensor data inefficiently, raising the cycle count and dropping the bus and RAM latency, which in turn makes the sensor input and even heap/stack memory operations go faster.
                  I've seen something similar happen once with database transactions, where power saving would clock the CPU down poorly, causing leaky code to run faster than good code because the RAM clocked up. But it's not limited to power management alone... As you can imagine, debugging this was crazy, though not nearly as crazy as demonstrating it to everyone in a minimal example.



                  • #39
                    Originally posted by plonoma View Post
                    Code:
                    auto piece1 = arrayB[0..5];
                    auto piece2 = arrayB[5..8];
                    auto piece3 = arrayB[8..12];
                    (^^ Looks kind of weird, doesn't it? The slices don't overlap under the current definition, but it doesn't look that way.)
                    That isn't how you'd actually write code. But you're right, it isn't exactly intuitive.

                    Code:
                    auto add = countUntil(line[count..$], quote, sep, recordBreak);
                    count += add;
                    ans = line[0..count];
                    line = line[count..$];
                    I'd say neither form is intuitive; some will expect one, others the other. What I do know is that it takes no time to learn and use once you understand what open/closed-ended means.

                    Related: http://stackoverflow.com/questions/4...ot-include-end



                    • #40
                      Originally posted by c117152 View Post
                      Eh, in SMT especially, the more active clock cycles you have, the less latency you get. It's quite possible you're hitting a sweet spot where a GC would process the sensor data inefficiently, raising the cycle count and dropping the bus and RAM latency, which in turn makes the sensor input and even heap/stack memory operations go faster.
                      I've seen something similar happen once with database transactions, where power saving would clock the CPU down poorly, causing leaky code to run faster than good code because the RAM clocked up. But it's not limited to power management alone... As you can imagine, debugging this was crazy, though not nearly as crazy as demonstrating it to everyone in a minimal example.
                      The software isn't real-time; it's post-processing, after the sensor data has been moved out of proprietary formats.

                      We only use C++ with thread-safe smart pointers (we haven't switched to std::shared_ptr just yet). The code is pretty heavily stack-based and avoids inheritance, except for some different paged file formats. I can't imagine how badly a GC language could trash what I'm doing. But I'm usually not into jacking around with tuning like this; mostly it's implementing algorithms and equations, and occasionally debugging stupid race conditions like accidentally passing a temporary in a message.

                      Compared to all our competitors, this stuff scales and runs quite fast.



                      • #41
                        Originally posted by bnolsen View Post
                        The software isn't real-time; it's post-processing, after the sensor data has been moved out of proprietary formats.

                        We only use C++ with thread-safe smart pointers (we haven't switched to std::shared_ptr just yet). The code is pretty heavily stack-based and avoids inheritance, except for some different paged file formats. I can't imagine how badly a GC language could trash what I'm doing. But I'm usually not into jacking around with tuning like this; mostly it's implementing algorithms and equations, and occasionally debugging stupid race conditions like accidentally passing a temporary in a message.

                        Compared to all our competitors, this stuff scales and runs quite fast.
                        Did I read that right? "The code is pretty heavily stack based and avoids inheritance except for some different paged file formats." So your smart pointers only have to serve your internal code (it is really unlikely that an external component keeps a +1 reference to your data structures). Also, "heavily stack based", if my knowledge of C++ helps, basically means you don't use extensively the very solution you advertise, smart pointers (reference counting). You most likely use them between modules, not pervasively (as a GC-based implementation would, right?). I worked on software that used reference counting extensively (via Boost's smart pointers), and that part could reach 10-15% of profiled time. The leaks were mostly avoided by a huge review of the code and many iterations of tests with huge data sets (so leaks, when they did occur, would blow up the application fast).

                        Finally, did you see the C++11 talk by the C++ creator, Stroustrup? (here: http://channel9.msdn.com/Events/Goin...-Design-for-C- )

                        He did not seem to advertise using smart pointers extensively, but only for genuinely shared instances. So maybe the common wisdom is that the performance cost of reference counting in C++ should be avoided as much as possible by not using smart pointers extensively.

                        Compare this with .NET/Java's GC (and to some extent D's): you don't care that much about the GC, you pay no performance penalty for iterating over a collection of references if you don't allocate, and you don't have to declare your references const& almost everywhere because of a possible performance issue.

                        So, as for your application: how many of the algorithms you use actually care about ref-counting? I wrote a small application that generates C++ code (you know, these code generators), and it used smart pointers. Before I optimized the reference-count copying and initialization, the application ran roughly 10 times slower (so 9/10 of the time was basically incrementing and decrementing reference counts). Sure, this is the worst of the worst for a code generator (so I don't blame ref-counting in itself), but the same unoptimized code, had it been Java, might have run 8 times faster (I never rewrote it in Java, but that is a fair estimate).



                        • #42
                          He did not seem to advertise using smart pointers extensively, but only for genuinely shared instances. So maybe the common wisdom is that the performance cost of reference counting in C++ should be avoided as much as possible by not using smart pointers extensively.

                          I prefer Herb Sutter's approach: pervasively use smart pointers and, with C++14's make_unique, *never* create an object with a raw new, ever. unique_ptr is as efficient as a raw pointer for everything it does and has only a few bytes of memory overhead. And make everything unique (while preferring to pass ownership with std::move) until you actually need shared.

                          You should never even need a raw pointer (except to interact with C code). If you are in a situation where you want to pass a variable by reference, just use a reference rather than a raw pointer. You even have weak_ptr for temporary, non-owning use of a shared_ptr.

                          If you never write a new (or Xalloc), you can guarantee you never leak memory. The worst case is that you have a pointer variable whose object you want to delete while nulling the pointer; even then you can call reset() on it, with no argument or with nullptr, to destroy the managed object.
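                          A C++14 sketch of that style (Widget is a made-up example type): no naked new, unique ownership by default, std::move to transfer it, shared only when actually needed, and reset() to destroy early.

                          ```cpp
                          #include <cassert>
                          #include <memory>
                          #include <utility>

                          struct Widget { int id = 0; };

                          int main() {
                              auto w = std::make_unique<Widget>();  // no naked new anywhere
                              w->id = 7;

                              auto owner = std::move(w);       // ownership moves, nothing is copied
                              assert(!w && owner->id == 7);    // the moved-from pointer is empty

                              // Promote to shared only when sharing is actually needed:
                              std::shared_ptr<Widget> shared = std::move(owner);
                              std::weak_ptr<Widget> observer = shared;  // non-owning view

                              shared.reset();                  // destroys the Widget, nulls the pointer
                              assert(!shared);
                              assert(observer.expired());      // the weak_ptr sees the destruction
                              return 0;
                          }
                          ```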
                          Last edited by zanny; 06-21-2013, 09:07 AM.



                          • #43
                            Originally posted by he_the_great View Post
                            That isn't how you'd actually write code. But you're right, it isn't exactly intuitive.

                            Code:
                            auto add = countUntil(line[count..$], quote, sep, recordBreak);
                            count += add;
                            ans = line[0..count];
                            line = line[count..$];
                            I'd say neither form is intuitive; some will expect one, others the other. What I do know is that it takes no time to learn and use once you understand what open/closed-ended means.

                            Related: http://stackoverflow.com/questions/4...ot-include-end
                            Found a reasonably easy way to solve this.
                            It would be nice if you could somehow use both.
                            Say .. is kept and we introduce .... for an inclusive range, plus a rangein(a,b) function? (Three dots is already taken: D uses ... for variadic parameters.)
                            Would this work for you?

                            Code:
                            auto rangeExclu = a[1..11];
                            auto rangeInclu = a[1....10];
                            These two are the same range.
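                            For what it's worth, the two forms really do denote the same elements; a C++ sketch of the equivalence (the inclusive upper bound just needs an explicit +1 when translated back to half-open iterators):

                            ```cpp
                            #include <cassert>
                            #include <numeric>
                            #include <vector>

                            int main() {
                                std::vector<int> a(20);
                                std::iota(a.begin(), a.end(), 0);  // a = {0, 1, ..., 19}

                                // Exclusive upper bound, like a[1..11]: indices 1 through 10.
                                std::vector<int> rangeExclu(a.begin() + 1, a.begin() + 11);
                                // Inclusive upper bound, like the proposed a[1....10]: the
                                // same elements, translated with an explicit +1.
                                std::vector<int> rangeInclu(a.begin() + 1, a.begin() + 10 + 1);

                                assert(rangeExclu == rangeInclu);  // one and the same range
                                assert(rangeExclu.size() == 10);
                                return 0;
                            }
                            ```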


                            About understanding: understanding this stuff costs me no noticeable effort; I got it almost instantly.

                            But making something that's merely easy to understand just isn't good enough for me.
                            Whether it works intuitively is a much more interesting question, and maybe the actual one we try to solve with higher-level programming languages.
                            How do we make a language that is constructed, as much as possible, to be intuitive to use?
                            Not familiarity, but ease for everyone. Familiarity can be very deceiving and has led many to create contraptions/pieces of crap.
                            Flexibility matters too. Very important points here include things like the choice of type names.
                            Last edited by plonoma; 06-21-2013, 10:19 AM.



                            • #44
                              Using C++ for stack-dominant code that avoids inheritance... Oh man. You're practically not even using C++, just a tiny subset with an oversized compiler. No wonder you don't like the GC, you're a closet C programmer

                              But hey, you're probably right. Your use case is probably inappropriate for D. The thing is, it's equally wrong for C++. You're using a gigantic compiler with a mostly unused standard library and a whole pile of missed optimization opportunities, since it's feature-focused like all C++ compilers. Worse, there's that massive ugly runtime... And the stack dumps! Did you ever compare C++ dumps to C? H0ly sh1t...

                              Generally speaking about garbage collectors: D is used in game engines, and golang is used in Google's and Mozilla's services, so GCs are not a problem for most use cases. Just use the right tool for the job.

                              tl;dr no Lisp machines for you!



                              • #45
                                Originally posted by c117152 View Post
                                Using C++ for stack-dominant code that avoids inheritance... Oh man. You're practically not even using C++, just a tiny subset with an oversized compiler. No wonder you don't like the GC, you're a closet C programmer

                                But hey, you're probably right. Your use case is probably inappropriate for D. The thing is, it's equally wrong for C++. You're using a gigantic compiler with a mostly unused standard library and a whole pile of missed optimization opportunities, since it's feature-focused like all C++ compilers. Worse, there's that massive ugly runtime... And the stack dumps! Did you ever compare C++ dumps to C? H0ly sh1t...

                                Generally speaking about garbage collectors: D is used in game engines, and golang is used in Google's and Mozilla's services, so GCs are not a problem for most use cases. Just use the right tool for the job.

                                tl;dr no Lisp machines for you!
                                OO (without inheritance), templates, exceptions, the STL, namespacing, lambdas, and references are all wonderful C++ features that aren't polymorphism or new/delete/smart pointers.

