C++20 Making Progress On Modules, Memory Model Updates


  • #41
    Originally posted by Kushan View Post

    The preprocessor isn't the issue; the issue is that it's utterly abused. It's simple, but it's often used for complicated gubbins that nobody understands, and it's incredibly brittle because the preprocessor is not in line with the goals and ideals of C++ itself (i.e. it's not type safe).

    Other languages do it better - they still have #if statements, but that's it; that's all you need for flexibility and configuration. For everything else, use constexpr.
    Agreed, I think some kind of middle ground is the sweet spot.
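    Something along these lines is what I have in mind -- keep #if for coarse platform selection only, and use constexpr for everything else (a rough sketch; all the names are made up):

    Code:
    #include <cstddef>

    // The preprocessor only answers "which platform?"...
    #if defined(_WIN32)
    constexpr bool is_windows = true;
    #else
    constexpr bool is_windows = false;
    #endif

    // ...and everything downstream is a typed, scoped constexpr instead of #define.
    constexpr std::size_t path_limit = is_windows ? 260 : 4096;

    static_assert(path_limit > 0);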
    Originally posted by michal

    writing cross-platform code is possible in languages that don't have a preprocessor.
    Yeah, but you need it for C _and_ C++ still. That's not going away unless you radically reform the entire C _and_ C++ ecosystem.

    • #42
      Originally posted by Weasel View Post
      ...and apparently not easy enough for people like you to understand.

      Right shift truncates towards -inf for negative numbers, while division truncates towards 0.
      Yes, I know. Many people don't. That is another good reason to insist on using "x / 8" when you mean divide by 8 - the compiler will get it right, while someone "optimising" it to "x >> 3" will get it wrong. They should have written "(x + ((x < 0) ? 7 : 0)) >> 3".

      (If you like, you can disagree with the C standard's decision to define division as truncate towards 0. I would be just as happy with either type of truncation. But that is the way it is defined in C.)
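      A quick illustration of the difference (assumes the usual arithmetic right shift for negative values, which is what mainstream compilers do):

      Code:
      #include <cassert>

      int main()
      {
          int x = -9;
          assert(x / 8 == -1);     // division truncates towards zero
          assert((x >> 3) == -2);  // arithmetic shift truncates towards -infinity
          // The corrected shift that matches x / 8 for all x:
          assert(((x + ((x < 0) ? 7 : 0)) >> 3) == x / 8);
      }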

      So no, it's actually not easy AT ALL for the compiler to optimize. If it can't prove the numbers are strictly positive (in which case you should shoot yourself for using a signed type), then it will generate about 4 instructions for what should be just 1.
      It is quite easy, and compilers have been optimising it for a very long time. You are right that division of signed integers by powers of two usually requires a couple of instructions more than a right-shift would have done - but division makes it easier for the compiler to do other reasoning and other simplifications or optimisations. You win some, you lose some.

      No matter what gets generated, the small details of code generation here are irrelevant compared to the improvement in code readability and correctness.

      But people like you just parrot and preach crap they don't understand (but WANT to believe it's true) and pollute sites like stackoverflow with wrong information.
      "People like you" ? You don't like the discussion, so you want to reduce it to insults?

      I do understand this, I am not parroting, and it is not crap. The C world is full of truly dreadful code written by smart-arses who think "bit twiddling tricks" are a good idea. There are very occasional cases where you want to do everything possible to get the very fastest code, and where tricks or "clever" code makes sense. But those cases are rare - for the vast majority of code, you will get more efficient results by writing clean, clear, readable code that says naturally what you mean, and letting the compiler optimise the details. The big win, of course, is that the code is readable, maintainable, correct, and easily seen to be correct.

      x / 8 truly generates worse code for signed integers, and it only gets worse with other types of divisors (non power of 2). Keep believing it's "easy" for the compiler to optimize tho.
      I can certainly agree that the compiler can generate tighter code for unsigned divisions than for signed divisions, if that is what you mean. But yes, optimising signed integer division is easy for compilers.
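      For reference, the branchless adjustment compilers typically emit for a signed x / 8 looks roughly like this when written back as source (a sketch; assumes 32-bit int and arithmetic right shift):

      Code:
      int div8(int x)
      {
          int bias = (x >> 31) & 7;  // 7 if x is negative, 0 otherwise
          return (x + bias) >> 3;    // the biased shift now rounds towards zero
      }

      That is the handful of extra instructions being discussed, and it is a transformation compilers have performed for decades.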

      • #43
        Originally posted by andreano View Post

        It's not just about right shift. Comparisons between signed and unsigned integers trigger the dreaded -Wsign-compare warning on gcc, even for equality comparisons (==, !=), which, when you know that the target CPU is two's complement, behave exactly the same for signed and unsigned integers – a false positive!

        Now that the compiler agrees that the target CPU is two's complement, I'm hoping we can finally compile with -Wextra without seeing that warning inappropriately.
        No, that won't be the case. On a 32-bit int system, "(-1 == 0xffffffff)" returns true due to the way integer conversions work (regardless of the representation of signed integers). This is clearly logical nonsense. The two values have the same underlying representation, but they are not the same numbers. So the warning should stand.
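        To make it concrete (assuming 32-bit int; std::cmp_equal in <utility> is the C++20 escape hatch for when you genuinely want a value comparison):

        Code:
        #include <utility>

        static_assert(-1 == 0xffffffffu);                 // true: -1 converts to UINT_MAX
        static_assert(!std::cmp_equal(-1, 0xffffffffu));  // false: compares actual values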

        Don't get me wrong here - I like the restriction to two's complement being the only representation supported by modern C and C++. I'd like padding bits to be banned too (except for bool), char to be restricted to 8, 16 or 32 bits, and int to be 16-bit or 32-bit. I don't think this will make a noticeable difference to most programming, but it will mean that code that relies on these features today (most code handling any kind of binary interchange of data) will be viewed as more portable.

        The big fear here is that this change will mean some people will think that signed integer overflow is now defined as two's complement wrapping - it is not, and it should not be.
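        In other words, the representation is now fixed, but the arithmetic rules are unchanged (a minimal illustration):

        Code:
        int f(int x) { return x + 1; }             // still UB when x == INT_MAX
        unsigned g(unsigned x) { return x + 1; }   // well-defined: wraps to 0 at UINT_MAX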

        • #44
          Originally posted by michal
          writing cross-platform code is possible in languages that don't have a preprocessor.
          And are those languages "cross-platform" enough to run on DSPs that have weird stuff like non-power-of-two bitness?
          Can they use alternative standard libraries?
          Can they change allocation strategies (per container)?

          The more assumptions you make, the more restricted you are. C/C++ certainly are not in the Goldilocks zone for desktops or modern mobile systems, but they are still the languages that will run on the more exotic platforms.
          If you want a restricted environment for C/C++, use a framework like Qt, which is easier to deal with.
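          (On the per-container allocation question, C++17's polymorphic allocators are one concrete answer. A minimal sketch:)

          Code:
          #include <memory_resource>
          #include <vector>

          int main()
          {
              // One container draws from a fixed local buffer;
              // nothing else in the program is affected.
              char buffer[1024];
              std::pmr::monotonic_buffer_resource pool(buffer, sizeof buffer);

              std::pmr::vector<int> v(&pool);  // this container allocates from 'pool'
              for (int i = 0; i < 10; ++i)
                  v.push_back(i);
          }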

          • #45
            Originally posted by michal

            writing cross-platform code is possible in languages that don't have a preprocessor.
            Like what? Remember, Java and C#/VB.NET can each run on only *one* platform: their respective VMs (.NET and the JVM). These in turn happen to run on a limited selection of physical platforms only because they are themselves written in the only real choices for cross-platform work, C or C++.

            The closest you can get is deriving from, e.g., Socket, having something like PosixSocket and WinsockSocket, and then using the build system to do the cross-platform stuff for you by including one or the other. This still isn't cross-platform code. This is a cross-platform build system.
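            Roughly like this (a sketch; the names are made up):

            Code:
            // socket.h - the platform-neutral interface
            class Socket {
            public:
                virtual ~Socket() = default;
                virtual bool connect(const char* host, int port) = 0;
            };

            // posix_socket.cpp and winsock_socket.cpp each implement it,
            // and the build system compiles exactly one of the two:
            //   class PosixSocket   : public Socket { /* BSD sockets */ };
            //   class WinsockSocket : public Socket { /* Winsock */ };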

            Though you can add a preprocessor to most languages, even to replace the really weak one in C#: just pass the source files through cpp (the C preprocessor) or M4 as part of the build system.
            Last edited by kpedersen; 15 November 2018, 10:39 AM.

            • #46
              Originally posted by DavidBrown View Post
              Yes, I know. Many people don't. That is another good reason to insist on using "x / 8" when you mean divide by 8 - the compiler will get it right, while someone "optimising" it to "x >> 3" will get it wrong. They should have written "(x + ((x < 0) ? 7 : 0)) >> 3".

              (If you like, you can disagree with the C standard's decision to define division as truncate towards 0. I would be just as happy with either type of truncation. But that is the way it is defined in C.)
              Get what wrong? Most of the time, the programmer knows the value is an exact multiple of the divisor, especially when using negative numbers. In those cases, the generated code will be shit compared to the alternative of using x >> 3.

              Now, it actually makes even less sense to do x / 8 if, for example, you use it to truncate bit offsets to byte offsets. Not only will the generated code be terrible, it will also be wrong. Consider what happens when your bit offset is 7: you end up with byte 0, and the same all the way down to offset 0 (so 8 values map to byte 0). When it's 8, you end up with 1. But when it's -1, you end up with... 0 still, when it should have been -1. Now you have 15 offsets that all map to byte 0. Why is 0 special, again?
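              Concretely (assuming the usual arithmetic right shift for negative values):

              Code:
              #include <cassert>

              int main()
              {
                  // Mapping a bit offset to a byte offset wants floor
                  // semantics (>>), not truncation towards zero (/).
                  int bit = -1;
                  assert((bit >> 3) == -1);  // byte -1, i.e. bit 7 of the previous byte
                  assert((bit / 8) == 0);    // wrong for this use
              }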

              Originally posted by DavidBrown View Post
              It is quite easy, and compilers have been optimising it for a very long time.
              Optimizing what? I told you already that they generate about 4 instructions compared to 1, due to the negative truncation bullshit. What else is there to optimize?!

              Originally posted by DavidBrown View Post
              You are right that division of signed integers by powers of two usually requires a couple of instructions more than a right-shift would have done - but division makes it easier for the compiler to do other reasoning and other simplifications or optimisations. You win some, you lose some.
              Citation needed.

              Originally posted by DavidBrown View Post
              No matter what gets generated, the small details of code generation here are irrelevant compared to the improvement in code readability and correctness.
              As mentioned above, x / 8 is actually wrong in one such possible example (8 being bits in a byte, so it fits naturally).

              What is "code correctness"? Wrong code?

              Originally posted by DavidBrown View Post
              "People like you" ? You don't like the discussion, so you want to reduce it to insults?
              There's nothing to discuss when you throw around buzzwords like "code readability", "code correctness" (when it's not even correct in some cases), or "code maintainability". Those are literally always used by people who refuse to adapt or budge even when wrong, and who have no arguments left.

              It doesn't matter how well someone argues; you can just pull the "code readability" or "maintainability" card, and that's it. There is no discussion when using those words because they are just opinions.

              But guess what? I can pull it too. x >> 3 is more maintainable to me, and more readable in some cases like the above.

              So what's the point of discussing then?

              Originally posted by DavidBrown View Post
              I do understand this, I am not parroting, and it is not crap. The C world is full of truly dreadful code written by smart-arses who think "bit twiddling tricks" are a good idea. There are very occasional cases where you want to do everything possible to get the very fastest code, and where tricks or "clever" code makes sense. But those cases are rare - for the vast majority of code, you will get more efficient results by writing clean, clear, readable code that says naturally what you mean, and letting the compiler optimise the details. The big win, of course, is that the code is readable, maintainable, correct, and easily seen to be correct.
              See above.

              The entire world is full of truly dreadful code written by people who code as if incompetents will read it and who must make sure those incompetents understand it. No wonder software gets more dreadful as time passes: we are losing the "smart-asses" and the "bit twiddling hacks" people who wrote proper software (not because of the bit hacks themselves, but because writing those hacks proved they were COMPETENT).

              I've LITERALLY seen people use a freaking LOOP to repeatedly add a constant instead of multiplying, "because it is more readable and easier for first-grade dropouts". "Code like a six-year-old is going to read it".

              I mean, clearly multiplication is too hard for some people, just like right shift. It's evil. Use addition loops instead.

              Seriously, I can say the exact same thing you said, but with x >> 3 as "readable, maintainable, correct, and easily seen to be correct". It's not my fault other people are too dumb to understand the beauty in "x >> 3"; maybe they should let go of their prejudice and start learning C properly.

              Remember one thing: software is not meant to be read by incompetents. The world has shittier software than ever because people have this misconception that anyone -- even a first-grade dropout -- should understand the code, and that it should be "as simple as possible". No: software, especially C, needs to be written for competent people only.

              In some cases, yes, I do agree -- some clever hacks can be unmaintainable (hard to change). But a right shift? It literally does NOT require any more thought than a division.

              The fact that you think it does only shows how dumb some programmers are: they won't even learn the operations properly (remember the multiplication above -- same thing). Such a thing must NOT be encouraged. Filter them out.

              Originally posted by DavidBrown View Post
              I can certainly agree that the compiler can generate tighter code for unsigned divisions than for signed divisions, if that is what you mean. But yes, optimising signed integer division is easy for compilers.
              It's not. Not until it can generate only one non-division instruction, at least for powers of 2 (and at most 3 non-division instructions for arbitrary divisors).
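              (For arbitrary divisors I mean the usual reciprocal-multiplication trick -- multiply-high plus a shift. A sketch for unsigned x / 10, with the well-known magic constant ceil(2^35 / 10):)

              Code:
              #include <cstdint>

              uint32_t div10(uint32_t x)
              {
                  // Exact for all 32-bit unsigned x: multiply by 0xCCCCCCCD,
                  // then shift the 64-bit product right by 35.
                  return static_cast<uint32_t>((uint64_t{x} * 0xCCCCCCCDu) >> 35);
              }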
              Last edited by Weasel; 15 November 2018, 01:15 PM.

              • #47
                Originally posted by michal

                writing cross-platform code is possible in languages that don't have a preprocessor.
                Absolutely correct. This is why Rust exists - it is, in essence, what C++ has evolved into over 20+ years, except without the baggage.
