
D Language Still Showing Promise, Advancements


  • #21
    c++ libs

    Originally posted by stqn View Post
    D is more an alternative to C++ than to C, due to its high complexity and similar features.
    The strength of C++ is the number of libraries (Boost; for me in particular serialization & Python), and I doubt D can ever match that as long as its user base is smallish. C++ is also evolving, and if they add the "static if" feature to templates (as D has), it will be able to drop a lot of its template complexity.

    Comment


    • #22
      Originally posted by Ericg View Post
      Automated Reference Counting is a nice middle ground. As long as the object has 1 reference to it, it wont be freed which prevents 'accidental frees' that can happen from human-error in manual memory management, but when it hits 0 references it gets automatically freed.
      Actually, I've been using something like this in many situations for more than 10 years already.



      • #23
        The ignorance of C/ObjC in LLVM/Clang and Cocoa on this board continues to shine on.



        • #24
          Originally posted by ciplogic View Post
          Yes and no: ARC increases the size of the object by embedding the reference count in it. I'm (almost) sure that ARC may work great for this guy, as I suspect the bias against the GC is founded on rumor, "I think", and "these big pauses". D does not have a generational GC (in generational GCs, most short-lived objects are freed automatically and cheaply).

          Also, ARC costs CPU time: if a loop increments and decrements a reference count on every iteration, that loop can run slower with reference counting than with a proper GC.

          Last but not least, ARC gives more predictability about when objects are freed (provided you don't create reference cycles, which mean leaks), so most pauses are by design very small. This, I think, is the main advantage of ARC over a GC: if an application creates many long-lived objects, GC pauses can be really big.

          But memory constraints? Not sure.
          I came pretty close to doing Summer of Code in 2010 rewriting the D garbage collector, and the principal reason D doesn't use a generational GC (and implementing a better GC in D is so hard) is that the language still supports raw pointers, and the nondeterministic invalidation of pointers by a moving GC was a big no-no. One consideration back then was to make it a compiler error to use raw pointers without disabling the GC if we wanted a moving collector.

          Currently the mark and sweep collector already uses 4 bits per allocation for various housekeeping, and the non-moving generational collector requires tracking which objects are in which generation because you can't move them.

          In practical use cases, it is very rare that you want an object with shared ownership anyway. Usually you can just use a unique pointer wrapper that destroys the object when its parent is destroyed, or move that ownership out of a destructing context (which is the principal reason you do heap allocation in the first place, that and dynamic resizing).



          • #25
            Originally posted by zanny View Post
            In practical use cases, it is very rare you want an object with shared ownership anyway.
            Well, that might be the case for system programming or whatever.

            I work in numerics in multibody simulations, and shared_ptr is heaven's blessing: objects get attached to other objects, to contacts, move around, get deleted, etc. And there is Python scripting, too, and shared_ptr integrates seamlessly with Python's reference counting (via boost::python).



            • #26
              Originally posted by a user View Post
              In every case you mention, the GC is for me (us) the main reason D is a no-go. We have quite constrained memory demands, which make any currently known approach to GC a deal breaker.

              I can't go into details, but contrary to what somebody else posted here, GC is worse than manual memory management. Of course you do not compare a bad implementation of manual memory management with a good GC; you compare good manual memory management with a good GC. The lack of control yields a lot of situations where a memory limit will cause big issues with any kind of GC.

              GC is in general only for use cases where you actually have either a lot of memory or enough time... or your customers/clients can live with the drawbacks.
              I find GC discussions very akin to inlining ones: one party says he needs the performance, the other says he won't in 6 months. One party says he lacks the memory, the other says wait 6 months... The truth of the matter is that if you're so restricted to very slowly improving hardware (automobiles, government/military, network infrastructure...), then even C++ is probably the wrong choice and you should be using C or even Ada exclusively.

              As for D:
              Code:
              import core.memory;
              
              void main(string[] args) {
                GC.disable;
                // PROFIT!
              }
              Now, you are restricted from using certain array operations, like allocation and concatenation, in that scope. More importantly, both D and golang have facilities to incorporate C code and functions, so you can write the code you need to manage manually in C, but enjoy the superior syntax and features for everything else.

              From experience, I've written a few toy databases and even some low-level interrupt calls using golang with ASM and C parts glued in pretty painlessly. I didn't bother doing the same in D since, like I said, I'm not a fan of those types of languages, so I don't use them for personal projects. But it was recently done in a commercial game engine, so it shouldn't be too difficult: https://www.youtube.com/watch?v=FKceA691Wcg
              Last edited by c117152; 20 June 2013, 04:31 AM. Reason: typo and such



              • #27
                One other thing that I really like about D is some of its libraries. Phobos by itself is really nice (with dedicated structs, functions, and templates for handling all sorts of things, for example time and date). The Deimos project is useful (bindings for all sorts of libraries, including ncurses, OpenGL, core X11, Cairo, even systemd).

                And I particularly like LuaD, which provides a seamless interface to Lua code. Normally when interfacing Lua with C, you have to deal with passing objects through the stack, and that means writing a lot of boilerplate code (so much that when I had it set up, there was more boilerplate than actual program logic...). With LuaD, everything is wrapped into native D types in a very straightforward and convenient manner. Instead of pushing things to the stack, checking that you received what you wanted, copying it where you want it, and then popping, all you need to do is lua.get!DesiredType("VariableName"), which is neatly achieved using templates. So using Lua code from D (and vice versa) is much faster and easier that way than from C.



                • #28
                  The problem with GC is not speed, but lack of control, IMHO. I want to know when things get collected, for low level programming.
                  The real problem with GC is that it means you are throwing out any concept that you control or are even usually aware of ownership. Speed isn't the issue; never has been.

                  Smart pointers don't just make things safer. They also very clearly identify who owns what, when, and how. This is important not just for managing these resources but also for clearly architecting and maintaining a codebase. It's far too easy with a GC to lean on the pervasive "shared ownership" that GC is all about. If object A is responsible for object B but object C retains a reference to B, then under a GC, B is not destroyed until C lets go of it.

                  If you've ever written a large long-running app in C#/Java/etc. you know the kinds of ownership problems that GC creates. Sometimes you interface with reference-counted libraries and you still end up with cycles. Sometimes you end up with things like C# delegates that form strong references in system-owned objects that you still have to manually clean up. Sometimes you have non-memory resources that you need explicit lifetime control over and need RAII (in contexts larger than what `using` statements allow). Sometimes you still leak because you create a reference chain or other collection that you keep adding to but never removing from.

                  Safety is an important thing, but it's hardly the only thing. GC systems strip away the responsibility, and to a degree the capability, of stating ownership. That's a bigger problem than anything else in large, complex systems software (where "systems" doesn't just mean the kernel and libc, but also larger foundational codebases... like a game engine).

                  In C++11, it's easy to say that you should very rarely use a raw pointer and should be suspicious whenever you do. The resulting syntax is a bit kludgy - it breaks my "make it easiest to do the most common/correct thing" rule - but the resulting program structure is very strong. A true C++ replacement would likely encode smart handles more directly into the language. However, as another thing most other languages get wrong (though D does okay here), a C++ replacement can't just hard-code everything; a very large point of C++ is that one can write one's own containers, smart pointers, and so on.

                  Another problem with GC on that front is that it enforces a singular memory allocator. If you want really fine-grained control of fragmentation, allocation patterns, etc., too damn bad. D and C# make it easier to use things like pools and contiguous memory than something like Java or Python, but they still fall short of C++.

                  Especially as we move into the realm of heterogeneous computing. Resource management (remember it's not just memory management, which is what GC is primarily intended for) is ever so much more than the CPU-side shenanigans that GC still has trouble getting right.



                  • #29
                    Originally posted by Marc Driftmeyer View Post
                    The ignorance of C/ObjC in LLVM/Clang and Cocoa on this board continues to shine on.
                    Yeah, them being Mac-exclusive and this being a Linux board kinda makes that much obvious.

                    GNUstep is far from complete, gcc's objc support is lacking, and so on.



                    • #30
                      Originally posted by elanthis View Post
                      Another problem with GC on that front is that it enforces a singular memory allocator. If you want really fine-grained control of fragmentation, allocation patterns, etc., too damn bad. D and C# make it easier to use things like pools and contiguous memory than something like Java or Python, but they still fall short of C++.
                      Have you watched the presentation I linked to? Because there it was suggested that it should be possible to define how the GC works, either by environment variables or compiler switches.

