C++17 Is Complete, Work On C++20 Is Getting Underway

  • #21
    Originally posted by Kemosabe View Post
    Code:
    for (char byte : binaryfile) {
        // Oh look at that, I read the binary bytewise and it's a char type for some historic reasons. Sooo "hard to learn and non human understandable!"
    }
    Actually the biggest problem with old char is this: Is it signed or unsigned?

    std::byte is just shorter and more logical than always having to write unsigned char.
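    A minimal sketch of the distinction (C++17), showing both the implementation-defined signedness of plain char and the non-arithmetic nature of std::byte:

    ```cpp
    #include <cstddef>  // std::byte (C++17)
    #include <cstdio>

    int main() {
        // Whether plain char is signed is implementation-defined, so the
        // value printed below is -128 on most platforms, 128 on others.
        char c = '\x80';
        std::printf("char as int: %d\n", static_cast<int>(c));

        // std::byte has no such ambiguity: it is not an arithmetic type,
        // so only bitwise operations and explicit conversions compile.
        std::byte b{0x80};
        b |= std::byte{0x0F};
        std::printf("byte as unsigned: %u\n", std::to_integer<unsigned>(b));  // 143
    }
    ```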



    • #22
      Originally posted by coder View Post
      I have already run into a library that I can't use on a particular platform, because its compiler doesn't yet support some of the C++17 features the author decided to use.
      Presumably that “particular platform” is that, shall we say, special-needs hanger-on of the computing world? One that serves as a shining example of how not to develop a computing platform?

      It’s quite common for these protocol/language standards to only be finalized after at least two independent implementations have already been a) demonstrated working and b) shown to be compatible. That way they know the spec makes sense, and just as important, it makes the same kind of sense to different people.

      Have you noticed that GCC is often at the forefront of this?



      • #23
        For all you grandpas, we are supposed to use the best tools. Your elders made newer, better languages because they improved on the old. There is no reason why we should write code with security holes just so we look like some contrarian who says "they don't build 'em like they used to" while he gets into his Ford pickup.
        Experts knew that C was crap back in the 80s. It's not like some stupid hipsters came along and started messing things up.
        These were days before the rise of the internet. Security really wasn't a serious concern until 2004.
        This reminds me of a James White quote. The KJV writers wouldn't be KJV onlyists if they were alive today.



        • #24
          Originally posted by garegin View Post
          For all you grandpas, we are supposed to use the best tools. Your elders made newer, better languages because they improved on the old. There is no reason why we should write code with security holes just so we look like some contrarian who says "they don't build 'em like they used to" while he gets into his Ford pickup.
          Experts knew that C was crap back in the 80s. It's not like some stupid hipsters came along and started messing things up.
          These were days before the rise of the internet. Security really wasn't a serious concern until 2004.
          This reminds me of a James White quote. The KJV writers wouldn't be KJV onlyists if they were alive today.
          Hm. You are not even mentioning C++! Understandable, since it would diminish the tiny shadow of a point you made there. ;-)



          • #25
            Originally posted by coder View Post
            It's not about agreeing whether feature XYZ is an improvement, it's about whether it offers enough benefit to introduce incompatibilities in code. I have already run into a library that I can't use on a particular platform, because its compiler doesn't yet support some of the C++17 features the author decided to use.

            We can both agree on ways that a feature could be improved upon, and still disagree on whether it should be added to the language. Sorry, but I don't find any of the examples you gave to be sufficiently compelling.
            I cannot relate to that example at all. I have dealt with a fair share of compatibility issues but this has never been about libraries using modern C++. I can't rule out that this might happen but I'd estimate that for every problem of that nature there are 10 compatibility issues from old code using compiler/vendor-specific C++-features, and 20 compatibility issues from API/ABI-changes in dependencies.

            Also, the idea that a static C++ standard would lead to fewer incompatibilities doesn't make much sense to me. Changes to the way we code are inevitable: the underlying problems change, the demands change and the techniques evolve. It is natural and, because of the increasing complexity of software and of the problems that software is supposed to solve, in some cases the only responsible choice for programmers to adopt new ways of coding. If the standard did not support that, we would see either
            - vendor-specific compiler features, which introduce incompatibilities that lock their users to specific platforms and specific toolchains,
            - in-house macros to simulate compiler features, which are a nightmare to debug and a barrier for developers who are new to the code base, or
            - libraries that simulate compiler features via template constructs, which shifts the problem from adopting a new version of the toolchain to adopting another dependency and has its own issues: licence issues, quality of the library, maintenance of the library, further dependencies and so on.

            Having those features go through standardisation is actually the route that introduces the fewest compatibility issues, because recent standards are being adopted more quickly by compiler vendors. Btw, I don't know which compiler you are using, but if it is either GCC or LLVM, the C++17 support should be almost feature-complete by now. Unless you cannot update it for some reason, but that would be a problem in itself, regardless of standard changes.



            • #26
              Originally posted by hansg View Post
              You either fix a language so it never changes again, and become irrelevant, or try to move with the times.
              This is obviously a false dichotomy. My complaints are with the frequency of new standards and the amount of fluffy, sugary changes.

              You guys seem to think new features & frequent changes are free. Maybe if you're an end consumer, living on the bleeding edge of compiler releases from Clang and GCC, I can see how you might think that. I currently have to support code on 3 platforms: an enterprise distro where we've been restricted from upgrading the compiler (currently at GCC 4.8), Android NDK (where GCC is deprecated and Clang is at 3.8 in the latest release), and a DSP with a temperamental C++ compiler that's still not up-to-date with C++11 conformance, and probably never will be.

              Even in transitioning some of our legacy code to C++11, we've had issues where library authors default to C++11 standard library constructs when available, otherwise falling back on Boost. If you don't think about it too hard, this sounds like a good idea. But it introduced hard-to-find bugs when not all code (it's a large codebase, with many libraries) was built in C++11 mode.

              Standards changes have a real cost, to many of us living in the real world. At some point, it could become hard to justify using C++ for new projects. This is how languages die.

              Originally posted by hansg View Post
              The elephant in the room isn't 2D graphics anyway; it's windowing systems and event processing. If the standard has that, you could write a full desktop application in standard C++ without needing any kind of system-specific code, but it will be a very tough nut to crack. Both because operating systems are so different, and because moving the event loop into the library is a very, very significant change.
              I don't agree that this is a goal the standard should try to tackle. What's wrong with using open source & vendor-provided libraries? They're much more responsive to changes in technologies and user needs.

              Originally posted by hansg View Post
              Restrict is a fine example of a keyword that will make an already bad situation worse because it is both easy to use incorrectly, and difficult to detect such incorrect uses (so the standard will happily drop you into undefined behaviour land).
              restrict is necessary to achieve good performance in code with arrays. C++'s niche is performance-oriented, so restrict is a necessary consequence of offering pointers. The fact that the standard excludes it doesn't keep people from using it, but its use in library code is diminished, where it could benefit many users with little risk.

              Originally posted by hansg View Post
              Oh, and let's not kid ourselves: compatibility between C and C++ is already pretty damn bad for anything that is not a trivial piece of code.
              A lot of code is still written in C. C compatibility has been a hallmark of C++ since its inception. I think the standard committee walks away from this, at their peril.
              Last edited by coder; 26 March 2017, 12:39 PM.



              • #27
                Originally posted by tinko View Post
                If the standard did not support that, we would see either
                - vendor-specific compiler features, which introduce incompatibilities that lock their users to specific platforms and specific toolchains,
                - in-house macros to simulate compiler features, which are a nightmare to debug and a barrier for developers who are new to the code base, or
                - libraries that simulate compiler features via template constructs, which shifts the problem from adopting a new version of the toolchain to adopting another dependency and has its own issues: licence issues, quality of the library, maintenance of the library, further dependencies and so on.
                I've used Boost for about 14 years. Most parts of it are pretty great. Many of the good improvements in the language have come from limitations encountered by its authors, and I agree with the idea of migrating some of its most popular and stable facilities into the standard library.

                But I firmly reject the idea that it's a high virtue to write code with no external library dependencies. Yes, you want to keep the number of dependencies low, but it's not worth bloating up the standard just so you can feel "clean". Boost proves that high-quality open source libraries are more than adequate at meeting user needs. All the standard really needs to do is make sure the library authors are unimpeded.



                • #28


                  Sorry for my previous post. I forgot that the article was about C++, not C. Btw, the argument that it's so good because no one came up with anything better is a weak one. If it wasn't for MS rolling up their sleeves and making PowerShell, we would still be scraping text from output instead of doing it programmatically. For decades, people were stuck with text-based shells. (Yes, I know you can use a programming language like Perl or awk, but they are not shell languages.)
                  I think the worst thing about C/C++ is security. You can't really fix that by "mastering the language". And this became a problem in the industry only recently. The productivity gap has been bridged to a great degree.
                  If MS was writing Office from the start, you bet your ass they would be using .NET and not unmanaged C++.


                  http://trevorjim.com/lets-sunset-c-c++/



                  • #29
                    Originally posted by coder View Post
                    This is obviously a false dichotomy. My complaints are with the frequency of new standards and the amount of fluffy, sugary changes.

                    You guys seem to think new features & frequent changes are free. Maybe if you're an end consumer, living on the bleeding edge of compiler releases from Clang and GCC, I can see how you might think that. I currently have to support code on 3 platforms: an enterprise distro where we've been restricted from upgrading the compiler (currently at GCC 4.8), Android NDK (where GCC is deprecated and Clang is at 3.8 in the latest release), and a DSP with a temperamental C++ compiler that's still not up-to-date with C++11 conformance, and probably never will be.

                    Even in transitioning some of our legacy code to C++11, we've had issues where library authors default to C++11 standard library constructs when available, otherwise falling back on Boost. If you don't think about it too hard, this sounds like a good idea. But it introduced hard-to-find bugs when not all code (it's a large codebase, with many libraries) was built in C++11 mode.

                    Standards changes have a real cost, to many of us living in the real world. At some point, it could become hard to justify using C++ for new projects. This is how languages die.

                    I don't agree that this is a goal the standard should try to tackle. What's wrong with using open source & vendor-provided libraries? They're much more responsive to changes in technologies and user needs.

                    restrict is necessary to achieve good performance in code with arrays. C++'s niche is performance-oriented, so restrict is a necessary consequence of offering pointers. The fact that the standard excludes it doesn't keep people from using it, but its use in library code is diminished, where it could benefit many users with little risk.

                    A lot of code is still written in C. C compatibility has been a hallmark of C++ since its inception. I think the standard committee walks away from this, at their peril.
                    If you have to support old compilers, you have to write C++ without using the new features. That's unfortunate for you, but I'm not sure what benefit you'd get from C++ not being updated for anyone else either. The problem you mention above seems like it would be fixed by simply compiling everything on a given platform in the same mode.



                    • #30
                      Originally posted by Kemosabe View Post

                      Sure it is, but I understand the emotional flame wars to some extent. There are a handful of people who got proficient to some degree and can show years of experience. They manage to do really great things with their tools.
                      Then some newcomer hipster comes along (often without in-depth knowledge of anything, but with the need to distinguish himself) and tells them it is all unnecessary, deprecated and bad ...
                      I'm with you, but the flip side is that while people get proficient with a language (or framework), they also get accustomed to its quirks. Often to such an extent that they don't notice them anymore. But the quirks are still there and they're still a pain for newcomers. So I really don't see a problem with discussing the pros and cons of languages. The problem is keeping the discussion civil. And whoever manages to do that on the Internet will get a Nobel Peace Prize, imho.

