The D Language Front-End Is Trying Now To Get Into GCC 9


  • #21
    Originally posted by pal666 View Post
    for example, "static if" was pitched by alexandrescu, it became "if constexpr". this does not mean c++ adopts only features from d
    Very well, wasn't aware of that.
    your wording is bad. module system does not "use" even compiler parser, module is at least preparsed. but it can export macros for example.
    Well it's hard to guess what the tools will do. For example tools like IDEs and build systems need to "understand" the whole program structure as much as possible. It's really helpful if they are able to see that something is a dependency, not just a random macro.

    template instantiation in any template heavy code takes orders of magnitude more time.
    But not all code needs to be template heavy. And templates only need to be processed once; I think you'd just get duplicate symbols if the same template gets instantiated into object code twice. So the more complex template beasts produce the code only once, while #includes may need multiple passes.
    and (fast)code generation also takes orders of magnitude more time.
    Again that's something out of scope when discussing the speed of compilation, language1 vs language2. For example when compiling C++ or D, they could both use pretty much the same backends and optimizations.

    the issue with preprocessor is that it is not c++, it is just some dumb text replacement which knows nothing about your program.
    Nowadays the preprocessor also keeps track of type information, at least in C.

    contracts is largest feature adopted by c++20 so far. contracts are assert on steroids, they complement testing. and they affect code generation (they give information to compiler)
    I'm not aware of languages / compilers where contract information affects code generation. It might be possible in some cases, but the pre/post conditions and class invariants proposed by D are pretty weak guarantees overall. They might be more powerful in pure functional code, but my experience with Aspect Java is that the contracts don't offer much that can't be done with asserts and JUnit fixtures. Contract inheritance is nice, though even that is pretty basic plumbing over method overrides. I studied D when 2.0 was released and back then it didn't even do contract inheritance properly; maybe that has been fixed since. The problem was that parent class contracts were simply replaced, when postconditions should accumulate and preconditions should only be loosened.
    Last edited by caligula; 19 September 2018, 02:01 PM.



    • #22
      Originally posted by linner View Post
      Is Walter still working on D? If so, man, that guy is tenacious!

      I messed around with D way back before even version 1. It was a good idea and fun but suffered from a poor compiler environment. DMD just didn't cut it and gcc produced faster code. It took so long to start getting decent backends that I gave up. By the time they had something I didn't care any more because new C and C++ standards were revitalizing them and scripting languages were performing better than ever.

      Frankly, if you want to program in an odd-ball static language then do either Rust or Go. Go also has a poor compiler/back-end environment but at least it's supported by a lot of money (for now). Rust suffers from the Scala/OCaml et al. problem in that it supports too many programming paradigms, which ends up making even your own code difficult to understand if you leave it for a while. I really like functional programming; I'm an old-school Erlang developer. However, the way it's mixed into these other languages is just complicated.

      At the end of the day you just can't beat plain old C or C++ when it makes sense. At least for low-level stuff. At the high level you aren't messing with compiled languages in the first place.

      Code optimization can take time to implement and isn't essential to a working compiler, so it often gets done last. A strategy for avoiding that is to write your compiler to target C itself as the backend: you benefit from the C compiler's optimizer, plus you get support for most architectures on the planet with one code generator. Downside: it's slower to compile a program, but this is a one-time hit.

      Multi-paradigm languages are not a problem if you pick and choose which features you need and use just that subset. Just because a language has a feature, no one is forcing you to use it. A language is a toolbox of features and should offer far more than any one programmer needs for a project if it is to be a general purpose application language. People don't realize that just because you don't need a feature for your project doesn't mean there is no edge case where it is needed, and likewise that just because a language provides a feature doesn't mean you should use it.

      C is really only safe to use if you use a library that provides safe string and pointer handling. Otherwise, it's almost impossible to write safe code in C, especially where large numbers of people are involved and the software is large and complex. If you insist on using C, document what you are doing so other people can maintain the code. But almost no one does that. It seems the "right" way to write open source code is to write dense C code and then document nothing. Someone else then tries to add to the code without understanding what they are doing and you end up with huge bugs. They forget the free function, or whatever cleanup is needed by whatever memory convention you are using.



      • #23
        Originally posted by jpg44 View Post
        Otherwise, its almost impossible to write safe code in C
        Maybe for trash tier "software developers" or "engineers".



        • #24
          Originally posted by pal666 View Post
          no. time does not matter. template instantiation in any template heavy code takes orders of magnitude more time. and (fast)code generation also takes orders of magnitude more time. the issue with preprocessor is that it is not c++, it is just some dumb text replacement which knows nothing about your program.
          That's what makes it good. When you need to alter the SYNTAX.

          Originally posted by Michael_S View Post
          Thanks. I knew that optimization is a slower process, but my understanding of most C++ project development cycles is that the engineers use quick compiles most of the time and only the optimize compiles as they get closer to release and do release testing. I did not know that template instantiation was slow.
          It has nothing to do with optimization. This happens in the front-end. Parsing the header is not what's slow (it's copy-pasted via #include); it's instantiating the templates multiple times. However, it's not so important with high-core-count CPUs since the task is so parallelizable. The problem is that every .cpp/.cxx file needs to re-instantiate the templates, but again, you can do it in parallel.

          To me modules are just overhyped and a shit thing to focus on when there's so many more interesting language features (not stupid library) that we need.

          Such as ability to overload operator dot.

          C++ committee always have shit priorities.



          • #25
            Originally posted by Weasel View Post
            Maybe for trash tier "software developers" or "engineers".
            The Linux kernel, Chromium, Firefox, LibreOffice, ffmpeg, iOS, Android, Windows, games, LibreSSL, OpenSSH, virtualization hypervisor software, *everything* that's written in C or C++ gets security fixes for buffer overruns, race conditions, or use-after-free bugs. They get the fixes month after month, year after year, even with the smartest, top paid "software developers" and "engineers" working on them.

            Getting *most* of your C or C++ code memory safe isn't hard. Getting *all* of it memory safe stumps even the best people in our industry.



            • #26
              Originally posted by Michael_S View Post
              The Linux kernel, Chromium, Firefox, LibreOffice, ffmpeg, iOS, Android, Windows, games, LibreSSL, OpenSSH, virtualization hypervisor software, *everything* that's written in C or C++ gets security fixes for buffer overruns, race conditions, or use-after-free bugs. They get the fixes month after month, year after year, even with the smartest, top paid "software developers" and "engineers" working on them.
              I don't think you understand the meaning of the word "impossible" considering what you just replied with.

              If the statement was more like "It's unlikely that any large project will have no security vulnerabilities", now that would make more sense (and no, don't kid yourself into thinking it's only C/C++ "bugs" that exist as security vulnerabilities, that's laughable).

              Do you know what's even funnier?

              Stupid languages like Rust that enforce (in safe blocks) array bounds checks even on buffers that are CORRECTLY written and sized are actually MORE vulnerable to Spectre than properly written C/C++ applications which don't have redundant checks that are not needed for 99% of cases. Of course, no clueless technically-illiterate "developer" will actually admit this.



              • #27
                Originally posted by Weasel View Post
                I don't think you understand the meaning of the word "impossible" considering what you just replied with.

                If the statement was more like "It's unlikely that any large project will have no security vulnerabilities", now that would make more sense (and no, don't kid yourself, keep thinking it's just C/C++ "bugs" that exist as security vulnerabilities, it's so laughable).
                Every large software project has security vulnerabilities of all kinds. If you re-read what I wrote, I never said anything about other kinds of security vulnerabilities. I specifically referenced buffer overruns and use-after-free.

                Projects written in languages with built in bounds checking and automatic memory management - either through garbage collection or the compile time mechanisms in Rust or ATS - don't have the buffer overrun and use-after-free vulnerabilities of C or C++. And those specific errors are common in C and C++ projects. Not just C and C++ coming out of Microsoft or Oracle, but C and C++ code in leading open source projects, Google, Cisco, all over.

                Originally posted by Weasel View Post
                Do you know what's even funnier?

                Stupid languages like Rust that enforce (in safe blocks) array bounds checks even on buffers that are CORRECTLY written and sized are actually MORE vulnerable to Spectre than properly written C/C++ applications which don't have redundant checks that are not needed for 99% of cases. Of course, no clueless technically-illiterate "developer" will actually admit this.
                Wait, so now you're blaming the Rust developers for mistakes made by Intel and AMD?

                And the whole point of those 'redundant checks' you reference is that 1% of cases where the developer does screw up. To repeat my point: if good developers always got it right, bounds checks would be useless. They aren't. I don't care how smart you are, everybody screws up sometimes.



                • #28
                  Originally posted by caligula View Post
                  But not all code needs to be template heavy. And templates only need to be processed once. I think you'd get multiple symbols if the template gets instantiated twice into object code. So the more complex template beasts produce the code only once while #includes may need multiple passes.
                  templates are (or can be) computationally complex. includes are trivial and use include guards except in special cases
                  Originally posted by caligula View Post
                  Again that's something out of scope when discussing the speed of compilation, language1 vs language2. For example when compiling C++ or D, they could both use pretty much the same backends and optimizations.
                  and preprocessing will take negligible amount of time. you can easily check it by timing g++ -E and then timing compiling resulting file.
                  Originally posted by caligula View Post
                  Nowadays the preprocessor also keeps track of type information, at least in C.
                  i highly doubt that because it doesn't in c++ and c++ is much more type-aware
                  Originally posted by caligula View Post
                  I'm not aware of languages / compilers where contract information affects code generation.
                  c++20. contract asserts some axiom, optimizer could take it into consideration
                  Last edited by pal666; 21 September 2018, 09:42 PM.



                  • #29
                    Originally posted by Weasel View Post
                    That's what makes it good. When you need to alter the SYNTAX.
                    syntax you are altering is not seen by the compiler. that's what makes it bad
                    Originally posted by Weasel View Post
                    To me modules are just overhyped and a shit thing to focus on when there's so many more interesting language features (not stupid library) that we need.

                    Such as ability to overload operator dot.
                    modules are intended to greatly reduce compilation time for everyone ( https://xkcd.com/303/ ). and to make builds more predictable (not affected by random macro definition or include order). operator dot is nice, but niche feature
                    Originally posted by Weasel View Post
                    C++ committee always have shit priorities.
                    feel free to join and change priorities
                    Last edited by pal666; 21 September 2018, 09:46 PM.



                    • #30
                      Originally posted by pal666 View Post
                      syntax you are altering is not seen by the compiler. that's what makes it bad
                      It is seen by the compiler, just not by the language parser, so warnings are given in certain cases. The days when the preprocessor was separate from the compiler are long gone.

                      Of course, to be able to alter syntax you must do it before the language parser. This wouldn't be a problem if the language proper actually had those features, but it doesn't, so we need the preprocessor, because the committee sucks.

                      The preprocessor is the tool you use when the language doesn't fit your needs. And it doesn't fit your needs when some moron in the committee decided to not take your features as a priority.

                      Look at "X Macros" for an example of a very sane and very important use of the preprocessor.

                      Why are they needed? Because you want to define stuff in one part of the source, yet the language parser cannot define identifiers during constexpr evaluation; it can only evaluate expressions. For example, say you want to define some enums with certain names and constant values, plus other data related to them, and have it all defined in one place like this in one file:
                      Code:
                      X(foo, 42, "some data", 123456, some_function)    // data associated with enum constant foo
                      X(bar, 13, "some other data", 5, some_other_function)    // data associated with enum constant bar
                      You cannot do this in C++ with any compile-time feature. All of it can be done except for the enum, because you can't "loop" through a table and define identifiers with any constexpr construct (the foo and bar enumerators are identifiers).

                      Instead, X is a macro defined in one spot, where constexpr handles all the data attached -- but the enums are defined in another spot.

                      So you have to re-define the X macro and #include that file (with the definitions) twice: once for the data, a second time for the enums.

                      This keeps the associated data localized to just one file and is such a great design. Even GCC's own sources use it. So the preprocessor is just awesome and needed; I don't care what rookie or trash programmers think with their crappy languages.

                      Originally posted by pal666 View Post
                      modules are intended to greatly reduce compilation time for everyone ( https://xkcd.com/303/ ). and to make builds more predictable (not affected by random macro definition or include order).
                      Just get a fucking 32-core CPU and stop whining about compilation times.

                      Originally posted by pal666 View Post
                      operator dot is nice, but niche feature
                      C++ is a very mature language at this point. All good features are "niche" at this point. Modules are just a convenience, but some features are much more important.

                      Originally posted by pal666 View Post
                      feel free to join and change priorities
                      Doesn't matter, I won't be the one deciding regardless.

