D Language Still Showing Promise, Advancements

  • #71
    Originally posted by elanthis:
    _Bad programmers_ write sloppy code. IDEs make it easier for bad programmers to get by; you can't complain that a tool meant to make writing code easier is indiscriminate about whom it helps. It's the same horrible argument that Linus uses to attack C++: "Shitty programmers use C++ so I'm going to be elitist and stick to C." It's outright silly.
    There's an even simpler reason why that argument is outright silly: there are shitty C programmers, too, so he should stick to assembly, or maybe even plain binary. I'm one of them, actually (temporarily, I hope, just starting).



    • #72
      Originally posted by elanthis:

      The problem is that, 5 years from now, who's to say we won't have an "even better" way to structure things that some smart-ass dropout from MIT has yet to think up, but by then, for back-compat reasons, that one-off syntax is already used up and in the way? With a library-only approach, we can just stop using the "old" way, like we did with std::auto_ptr. If that behavior had been baked into the language somehow, we'd be screwed.

      I get super frustrated by the C++ committee and some of their responses to even simple improvements to the language (having proposed some myself, only one of which is being considered for standardization), but for all my complaints about their sluggishness, there are very few post-standardization C++ changes I can disagree with. All the bits of C++ I despise came from C or pre-standardization C++. The process is excruciatingly slow, but it works. Compare to the constant "oops we done goofed" deprecations in PHP, the divisive releases of Python3 and Perl6, or the "hey, why did we add std.kitchen.sink and why is it semantically broken" nature of D/D2/Phobos/whatever-next-week-brings.

      The library approach is also handy because, sometimes, the language just gets it wrong, and a library allows replacement in ways that language builtins do not. We don't use std::unordered_map, for instance. It's a fine data structure if you need its particular set of constraints, but you can implement a significantly faster hash table if you don't mind accepting a somewhat different set of constraints. unordered_map was designed for a very generalized set of use cases and has efficiency limitations you maybe don't want and guarantees you maybe don't need. In a language with built-in dictionary types, you're stuck with whatever they provide. Hence the problem with C pointers and arrays; we can make replacements that are equally efficient but semantically superior, but now the "default" way to do things is stuck being used for the worse semantics. Having the foresight to predict those problems is unrealistic; being conservative and avoiding hard-coded decisions over semantics is your best option. (To a limit; sometimes you have to bite the bullet and just hope you made the right choice for decades to come, of course.)
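      As a concrete illustration of that library-only deprecation path: std::auto_ptr could simply fall out of use (it was formally deprecated in C++11 and removed in C++17) in favour of std::unique_ptr, with no one-off syntax left squatting in the grammar. A minimal sketch:

```cpp
#include <memory>
#include <utility>

struct Widget { int id = 0; };

int main() {
    // The "old" library way: std::auto_ptr (deprecated in C++11, removed in
    // C++17). Its copy semantics silently transferred ownership:
    //   std::auto_ptr<Widget> a(new Widget);
    //   std::auto_ptr<Widget> b = a;   // 'a' is now null -- a classic foot-gun

    // The replacement is just another library type; nothing in the core
    // language had to change. Ownership transfer is now explicit:
    std::unique_ptr<Widget> a = std::make_unique<Widget>();
    std::unique_ptr<Widget> b = std::move(a);   // 'a' is null, and visibly so
    return b->id;
}
```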
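      And a minimal sketch of the unordered_map point: the standard container guarantees stable element addresses and a bucket interface, which pushes implementations toward node-based storage; give up those guarantees and a flat, open-addressing table can be substantially more cache-friendly. This is a deliberately simplified toy with hypothetical names, no deletion, and no rehashing:

```cpp
#include <cstddef>
#include <cstdio>
#include <functional>
#include <optional>
#include <vector>

// A deliberately simplified flat hash map: open addressing with linear
// probing, power-of-two capacity, no deletion, no rehashing. It drops
// unordered_map's pointer-stability guarantee in exchange for one
// contiguous allocation and cache-friendly probes.
class FlatIntMap {
public:
    explicit FlatIntMap(std::size_t capacity_pow2 = 64)  // must be a power of two
        : slots_(capacity_pow2) {}

    bool insert(int key, int value) {
        std::size_t mask = slots_.size() - 1;
        std::size_t i = std::hash<int>{}(key) & mask;
        for (std::size_t probes = 0; probes < slots_.size(); ++probes) {
            Slot& s = slots_[i];
            if (!s.used) { s = {true, key, value}; return true; }
            if (s.key == key) { s.value = value; return true; }
            i = (i + 1) & mask;   // linear probe to the next slot
        }
        return false;             // table full (a real table would rehash)
    }

    std::optional<int> find(int key) const {
        std::size_t mask = slots_.size() - 1;
        std::size_t i = std::hash<int>{}(key) & mask;
        for (std::size_t probes = 0; probes < slots_.size(); ++probes) {
            const Slot& s = slots_[i];
            if (!s.used) return std::nullopt;   // hit an empty slot: absent
            if (s.key == key) return s.value;
            i = (i + 1) & mask;
        }
        return std::nullopt;
    }

private:
    struct Slot { bool used = false; int key = 0; int value = 0; };
    std::vector<Slot> slots_;
};

int main() {
    FlatIntMap m;
    m.insert(7, 42);
    if (auto v = m.find(7)) std::printf("7 -> %d\n", *v);
}
```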
      I do want to have some way to break compatibility gracefully.

      I'm on board with putting it in libraries instead of syntax. The slice is a relatively new concept.

      Your example of the C++ committee versus the release schedules of other programming languages demonstrates the trade-off: fast and buggy, or slow and good.
      That seems to hold pretty well as a rule for programming languages, frameworks, and so on.

      Your idea of progressing from libraries sounds like the way the Khronos Group introduces new functionality: things start as extensions, then get promoted to core in steps, and every step allows for changes.
      That seems to be working pretty well for them.

      Would a good rule of thumb be: if there is a trade-off or the concept is new, put it in a library instead of hard-coding?

      Maybe library versus semantics is a false dichotomy: you can change the library but not the semantics.
      Let's have a system where semantics, syntax, and the standard library can all change in an orderly fashion: give the language a syntax version.
      A different language syntax version would mean breaking changes can happen to absolutely anything.
      Care must also be taken to avoid undefined behaviour; stuff like that can really hurt development.
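      For what it's worth, C++ already carries a weak form of that versioning idea: the compiler advertises the language version it's compiling as, and code can branch on it. A minimal sketch using the standard __cplusplus macro:

```cpp
#include <cstdio>

int main() {
    // __cplusplus encodes the language version the code was compiled as:
    // 201103L, 201402L, 201703L, 202002L, ... Build with e.g.
    //   g++ -std=c++17 example.cpp
    // and the same source can adapt -- or refuse to build -- per version.
#if __cplusplus >= 201703L
    std::printf("compiled as C++17 or newer\n");
#elif __cplusplus >= 201103L
    std::printf("compiled as C++11 or C++14\n");
#else
#   error "this code requires at least C++11"
#endif
}
```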


      To stay on topic: I would like the D programming language to have both types, half-open and closed ways to write a slice.
      (And maybe other things too; I haven't gotten around to reading enough about all the language features.)
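      For reference, D's built-in slice syntax a[1 .. 3] is half-open (index 3 is excluded). A minimal sketch of what both conventions could look like as plain library helpers, here in C++20 terms with hypothetical names:

```cpp
#include <cstdio>
#include <span>
#include <vector>

// Half-open slice [first, last): 'last' is one past the final element.
std::span<int> slice_half_open(std::vector<int>& v, std::size_t first,
                               std::size_t last) {
    return std::span<int>(v).subspan(first, last - first);
}

// Closed slice [first, last]: both endpoints included.
std::span<int> slice_closed(std::vector<int>& v, std::size_t first,
                            std::size_t last) {
    return std::span<int>(v).subspan(first, last - first + 1);
}

int main() {
    std::vector<int> v{10, 20, 30, 40, 50};
    for (int x : slice_half_open(v, 1, 3)) std::printf("%d ", x); // 20 30
    std::printf("| ");
    for (int x : slice_closed(v, 1, 3)) std::printf("%d ", x);    // 20 30 40
    std::printf("\n");
}
```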
      Last edited by plonoma; 03 July 2013, 02:38 PM.



      • #73
        Originally posted by mrugiero:
        There's an even simpler reason why that argument is outright silly: there are shitty C programmers, too, so he should stick to assembly, or maybe even plain binary. I'm one of them, actually (temporarily, I hope, just starting).
        But there are shitty assembly and plain binary programmers too!!
        What will we do now?



        • #74
          Originally posted by plonoma:
          But there are shitty assembly and plain binary programmers too!!
          What will we do now?
          I guess get a piece of paper and do the math on dead tree with graphite.



          • #75
            Originally posted by ciplogic:
            So what you're saying is:
            - an average programmer using a GC can be faster than an average programmer using "not managed memory" (whatever that means)
            - GCs have good throughput (maybe better than malloc/new), but have to be avoided for games
            - regarding crap-loads of objects, you seem to imply that in C++ you use const references, move semantics and such everywhere, while with a GC you sprinkle new everywhere and everything has to be a full-blown class or a generic List. I think you are aware that there is the struct keyword in C#. You have to use it at times, really
            Garbage collection is a vain attempt to protect poor programmers from their own bugs, which merely introduces a ton of new ways for them to write poor code.

            I don't use C#, but I've spent far more time trying to optimise Java garbage collection than I have hunting down memory allocation bugs in C++. It's a crazy language which is supposed to be object-oriented, but you can't afford to use objects if you need any kind of consistent performance, because the garbage collector will stall you for long periods while it cleans them up; and where you have to give your application 1GB of RAM just in case the user needs it, but your application will crash if they ever need 1GB + 1 byte on a machine that has 20GB free.



            • #76
              Originally posted by movieman:
              Garbage collection is a vain attempt to protect poor programmers from their own bugs, which merely introduces a ton of new ways for them to write poor code.

              I don't use C#, but I've spent far more time trying to optimise Java garbage collection than I have hunting down memory allocation bugs in C++. It's a crazy language which is supposed to be object-oriented, but you can't afford to use objects if you need any kind of consistent performance, because the garbage collector will stall you for long periods while it cleans them up; and where you have to give your application 1GB of RAM just in case the user needs it, but your application will crash if they ever need 1GB + 1 byte on a machine that has 20GB free.
              Those are implementation shortcomings of Java's GC. What a good generational GC does is run a young-generation pass every X object allocations, a pass that takes microseconds, and in the average use case you often never need a full GC run at all. v8, for example, has a much better GC than the Oracle JVM, and I barely notice it running while memory profiling in Chrome.
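              A toy sketch of why a young-generation pass can be that cheap: allocation is a pointer bump, and a minor collection only touches the survivors, so all the garbage is reclaimed in bulk when the bump pointer resets. (This is a deliberately simplified model, assuming a single trivially-copyable type and an explicit root list; a real collector traces the object graph.)

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Toy two-generation scheme for one trivially-copyable type. Allocation is
// a pointer bump; a minor collection copies only the rooted survivors into
// the old generation, then frees everything else at once by resetting the
// bump pointer. Cost is proportional to live objects, not to garbage.
template <typename T, std::size_t Capacity = 1024>
class Nursery {
public:
    T* allocate(const T& value) {               // bump-pointer allocation
        if (used_ == Capacity) return nullptr;  // caller then runs minor_gc()
        slots_[used_] = value;
        return &slots_[used_++];
    }

    // 'roots' are the only pointers the program still holds into the
    // nursery. old_gen must have enough reserved capacity that promotion
    // doesn't reallocate (a toy simplification).
    void minor_gc(std::vector<T*>& roots, std::vector<T>& old_gen) {
        for (T*& r : roots) {
            if (r >= slots_ && r < slots_ + Capacity) {  // still in nursery?
                old_gen.push_back(*r);                   // promote survivor
                r = &old_gen.back();                     // retarget the root
            }
        }
        used_ = 0;  // every unrooted object is reclaimed in one step
    }

private:
    T slots_[Capacity];
    std::size_t used_ = 0;
};

int main() {
    Nursery<int, 4> nursery;
    std::vector<int> old_gen;
    old_gen.reserve(16);
    std::vector<int*> roots{nursery.allocate(42)};  // this one stays live
    nursery.allocate(1); nursery.allocate(2); nursery.allocate(3);  // garbage

    nursery.minor_gc(roots, old_gen);
    std::printf("survivor=%d, promoted=%zu\n", *roots[0], old_gen.size());
}
```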

              For the average application use case, old C++03-style manual new/delete in the destructor was much more error-prone than Java/C#'s new X and just trying to reuse the objects you already allocated. Nowadays I find smart pointers, even with the added seconds spent writing them and the instant of thought about whether I should be passing an owning or a non-owning reference around, a tremendous productivity gain, since I don't need to worry about my allocations crashing a VM.
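              A minimal sketch of the owning/non-owning split that instant of thought buys you (Texture and the function names are hypothetical):

```cpp
#include <cstdio>
#include <memory>

struct Texture { int id; };

// Non-owning: a raw pointer (or reference) just observes; it must not
// outlive the owner, and it never deletes.
void draw(const Texture* t) { std::printf("drawing texture %d\n", t->id); }

// Owning: unique_ptr passed by value makes the transfer of ownership
// explicit at the call site via std::move.
void cache(std::unique_ptr<Texture> t) { /* takes over the lifetime */ }

int main() {
    auto tex = std::make_unique<Texture>(Texture{7});
    draw(tex.get());        // lend it out, keep ownership
    cache(std::move(tex));  // hand it over; 'tex' is now null
}
```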



              • #77
                Originally posted by plonoma:
                But there are shitty assembly and plain binary programmers too!!
                What will we do now?
                That's the point :lol:



                • #78
                  Originally posted by elanthis:

                  You have a dated idea of what a modern IDE can do (I did, having first used relics like TurboC++ and VS6 until years later trying VS2008+VAX) if you think Vim or Emacs is more powerful in ways that matter for software engineering. They can manipulate _text_ way better, sure, but a program is not really text. It is merely represented by text for our human consumption, as we haven't figured out a more efficient medium for it yet. The compiler quickly converts that text into more abstract, semantically-meaningful representations as soon as it can. A good IDE does this as well. The IDE also understands that once you start debugging a program, it is _really_ not just text anymore. To do a complex refactoring with a good IDE, you hit a key-combo, maybe type in a new name or signature, and you're done; drink a coffee, watch the numbers tick on your bank account, or do whatever else you want while the Emacs user spends his time developing a one-off LISP macro to do the same (and then fixing up all the false-positive matches it'll end up with). The IDE gives you a deeply integrated debugger with super painless visualization of values, drag-n-drop instruction pointers, etc. You can have visual debuggers for the modern GPU-heavy world, quite powerful multithreaded debuggers and visualizers, interactive charts and graphics, etc. The build system is integrated deeply in ways you literally can't even do with many UNIX-originated build systems.
                  You spend too much time running in debuggers. Unit tests and simulations are the only way to prove algorithm and program correctness: make it break, and make it break fast. When writing multithreaded code, debug builds typically don't force crashes the way optimized builds do, and optimized builds are exactly what customers run. VS in particular is very bad at supporting lots and lots of little independent unit tests and simulations. What I work with requires a level of provability and verifiability, since we have engineering accuracy specs to hit.
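                  In that "make it break fast" spirit, a minimal sketch of the kind of small, self-contained unit test meant here; the function under test is a hypothetical stand-in, and no framework is needed:

```cpp
#include <cassert>
#include <cstdio>

// Function under test: a stand-in for a real algorithm with accuracy specs.
int clamp_to_byte(int v) {
    if (v < 0) return 0;
    if (v > 255) return 255;
    return v;
}

int main() {
    // Exercise the boundaries, where optimized builds tend to expose bugs a
    // debugger session would never stumble into. Build without -DNDEBUG so
    // the asserts stay live even at high optimization levels.
    assert(clamp_to_byte(-1) == 0);
    assert(clamp_to_byte(0) == 0);
    assert(clamp_to_byte(255) == 255);
    assert(clamp_to_byte(256) == 255);
    std::puts("clamp_to_byte: all checks passed");
}
```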

                  About "garbage collection" of memory resources, the underlying system malloc/free can have a bigger impact than "smart pointer" management, ESPECIALLY with multithreading. This is especially true on windows, yes even win7 still. That's why there exists things like hoard, nedmalloc, tcmalloc, jemalloc, etc. On the linux side I don't see any gains with these allocators, on windows limiting use of things like iostreams, create/destroy std::vector, etc especially in iterative code lessens the need for these allocators.
                  Last edited by bnolsen; 03 July 2013, 10:02 PM.



                  • #79
                    Originally posted by plonoma:
                    But there are shitty assembly and plain binary programmers too!!
                    What will we do now?


                    Now get me an effin' butterfly!

