C++20 Being Wrapped Up, C++23 In Planning

  • #61
    It's difficult to see a very bright future for C++, because what industry demands is for you to write software as fast as possible, with as little understanding as possible.
    To accomplish that, C++ is probably one of the worst tools you could pick up.

    Comment


    • #62
      Originally posted by DavidBrown View Post
      You are making the two classic mistakes about signed integer overflow here. First, you think C left it undefined in order to support odd hardware. Second, you think defining it would be better than leaving it undefined. And though you haven't written it, I'm guessing you also make the third one of thinking that two's complement is the "natural" or "obvious" representation of signed integers, and that wrapping is the "natural" or "obvious" overflow behaviour. (If that assumption is wrong, then I apologise.)

      (I'm referring to C here, but C++ inherits the same behaviour.)

      Let's consider what the original C designers had to think about regarding signed integers. They had to support different formats - two's complement without padding was common, but not universal at the time. Different hardware had different ways of handling overflow. But was that why they picked "undefined behaviour" as the result of signed integer overflow? No, it was not. C supports a wide range of hardware. Where different hardware has different effects, and it is useful to know the effects and use them, C gives them "implementation defined behaviour". That means the compiler must document what it does in these cases, and be consistent about it. Conversion of an unsigned value to a signed type is implementation defined (if the signed type cannot represent the value directly). If the C designers had considered signed overflow to be a useful feature that might be hard to implement consistently between different machines, it too would have been implementation defined. Instead, the language designers realised that overflowing signed arithmetic is simply wrong - it doesn't make sense. There is no right answer, so there is no definition of it in C.
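
      To make the distinction concrete, here is a minimal sketch of my own (it assumes a typical implementation with a 32-bit two's complement int):

          #include <limits.h>
          #include <stdio.h>

          int main(void)
          {
              unsigned int u = 3000000000u;

              /* Implementation defined: converting an out-of-range unsigned value
                 to a signed type must give a documented, consistent result (on a
                 typical two's complement implementation this prints -1294967296). */
              int s = (int)u;
              printf("%d\n", s);

              /* Undefined: the standard assigns no meaning to a signed result
                 above INT_MAX, so the compiler may assume it never happens. */
              int x = INT_MAX;
              printf("%d\n", x + 1);

              return 0;
          }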

      You must remember here that C is not primarily defined as a way to generate code for a processor. It is not defined in terms of the underlying CPU or hardware. It is an abstract language, defined in terms of an abstract machine. It is (contrary to popular misunderstandings) a high-level language, not a low-level language or a "universal assembler". But it is defined in a way that makes it efficient to implement, so that people can use it instead of assembler or other low-level languages. So the designers understood that if you have two integers, and you add them, you want the result to be the mathematically correct result. If the language can't give you the correct result, it can't help - any answer would be wrong, so there is no point in giving you one.

      The reason most hardware uses two's complement signed integers is not because it makes particular sense as a way of storing signed data. It is simply the easiest and cheapest method in hardware. The reason signed overflow is wrapping in hardware is not because it is useful (except in a few specific cases), but because it means the same hardware and same instructions can be used for signed and unsigned arithmetic, for multi-word arithmetic, and for both addition and subtraction.

      Like many people, you want signed integer overflow to be defined behaviour. But I suspect that like most who want this, you haven't actually thought about /why/ you want it to be defined, and the consequences of defining it. Tell me, when would you want to have overflow give a specific value? Under what circumstances would it make sense to add 2 billion to 2 billion and get minus 300 million? It makes no sense. It is almost never helpful - it is almost invariably a mistake. If your signed arithmetic overflows, you are going to get nonsense results - unless you have written code specifically expecting this, it's nasal demons however the language defines it. Alternative handling of overflow, such as saturation, throwing a C++ exception, trapping, setting errno, etc., would likely be much more useful - but significantly more costly in run-time performance. When a language defines overflow as wrapping, such as Java does, it loses these options. When it is undefined, like in C, tools can change that behaviour - you can add run-time checks in the tool to find bugs. By leaving signed overflow undefined, the developer has better tools to find and eliminate bugs in the code. To me, that is the important point - optimisation of code based on the knowledge that undefined behaviour does not occur is just a bonus.
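
      As a concrete illustration of that last point, a sketch assuming gcc or clang with the undefined-behaviour sanitizer available:

          /* Compile with, e.g.:  gcc -fsanitize=signed-integer-overflow overflow.c
             Because the behaviour is undefined, the tool is free to report the
             overflow at run time instead of silently producing a wrapped value. */
          #include <stdio.h>

          int main(void)
          {
              int a = 2000000000;
              int b = 2000000000;
              printf("%d\n", a + b);   /* the sanitizer flags this addition */
              return 0;
          }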

      The idea of making signed integer arithmetic overflow defined as two's wrapping (or at least as implementation defined) comes up in preparation for every new version of the C and C++ standards. Every time, there are people who want to define the behaviour because they think it will make programmers' lives easier or eliminate bugs. Every time, the proposals are rejected because it would make programmers' lives harder and make it harder to spot bugs (as well as making code less efficient).
      First of all, it seems to me that much like those who are claiming that signed integer overflow was made undefined for performance reasons, you are attempting to rewrite the history of the C language in a direction that supports your point.

      One piece of evidence for this is that although you readily appeal to wrapping overflow defying mathematical common sense as a motivation for making it undefined, you never question the fact that unlike signed integer overflow, unsigned integer overflow is well-defined to be wrapping in C. To me, this inconsistency is a clear-cut sign that your theory of the thought process that the C designers went through is incorrect.

      (And it is also arguably another wart of C/++, as it means that a seemingly innocent program refactoring that turns unsigned integers into signed ones can add new avenues for undefined behavior, much to the surprise of programmers who are not well-versed in C standard idiosyncrasies.)

      Further, your appeal to mathematical common sense seems in and of itself highly questionable to me for two reasons.

      First, as Weasel is pointing out in a separate discussion thread, finite-range wrapping integers are a perfectly well-defined object from a mathematical point of view. It just happens that they are not the set of integers that you learned about in primary school, and operate under a slightly different set of rules.

      Second, and perhaps most important, a programming language which pretends that these two sets are the same is making the same mistake that the Fortran designers historically made when they allowed optimizers to operate under the assumption that IEEE-754 floating-point numbers are the same thing as mathematical real numbers.

      At first sight, these highly abstracted designs may seem sensible, because they allow the programmer and the compiler to readily share a common mathematical model, which is widely agreed upon to be easier to think about than the actual mathematical model which the machine is operating upon. However, in practice, these designs cause major software porting and debugging issues, because they mean that compiler optimizer decisions (which are, by nature, hard to predict, unstable under small program modifications, and configuration-dependent) can have observable effects on the behavior and output of a program, sometimes to the point of making it go from a correct execution (from the point of view of the "set of all reals/integers" mathematical model) to an incorrect one.

      As a result, when programmers enjoy a high degree of hardware standardization, as we do today, it is often better not to abstract away the actual mathematics carried out by the hardware, but instead to incorporate them into the language's abstract machine, so that the programmer is aware of them and can write code with full confidence that its execution will produce the same observable effects as a naive interpretation of the program would, even though the exact sequence of arithmetic instructions carried out by the hardware will obviously differ.

      In my opinion, the case for deviating from this baseline rule is not well-motivated today for signed integers. Unlike, say, for memory abstraction, where the performance and portability benefits are clear.

      I also think that your idealistic depiction of undefined behavior as a way of not defining what shouldn't be defined gravely understates the damage done by making a commonly occurring programming language construct's behavior undefined.

      Due to the nature of undefined behavior (violation of an axiom which the compiler may or may not build upon), symptoms of undefined behavior are extremely difficult to debug. Any form of program instrumentation (e.g. adding a printf), change in compiler settings (e.g. different optimization settings, different HW architecture) or even change of program launch configuration (e.g. using a debugger) can make the symptoms vanish.

      And even the tools that are specifically meant to help developers diagnose undefined behavior issues (valgrind suite, sanitizers...) are still major programmer time sinks:
      • Each tool only touches a tiny fraction of the problem domain.
      • The tools don't compose well, and runs of many different tools may be required to find the "right" problem.
      • Most of them degrade application performance, to the point where the application may not be usable enough for the bug to be reproduced (or where it can vanish altogether, if it's timing-sensitive for example).
      • Tools which work on unmodified applications have huge amounts of false positives, while tools which use compiler instrumentation require a major chunk of the software stack to be rebuilt with special flags.
      • As C is not very amenable to static analysis (otherwise compiler lints would save us all the time), most tools rely on dynamic analysis that will only fire if the right run-time conditions are met (with these conditions being obviously affected by the tools themselves).
      I'm not blaming the C designers for this part though, it is quite likely that no one would have even dreamt of the level of compiler optimizer sophistication that we can enjoy today back when the language was designed. The world changes, and that's why things like programming language designs must also be revisited from time to time...

      Finally, I think you are mistaken when you write that language specification authors must resort to undefined behavior in order to allow programmer tooling to change the behavior of a certain language construct.

      An easy counter to this claim is that many other forms of behavior that aren't well-defined by the spec (implementation-defined, unspecified...) can often be defined by programmer tooling as operating in whichever way is appropriate for the tool without violating the spec. In this sense, there is no need to jump all the way to undefined behavior here.

      A less trivial counter-argument, which you may find harder to accept if you belong to a standards committee, is that not every developer tool that consumes programs written in the C language needs to operate in full and rigorous conformance with the C spec, and that's particularly true of development tools that don't leave the developer's machine and do not impact production.

      Now don't get me wrong, specification conformance is a very useful property in many respects, as it enables easier program portability and high-quality arguments between compiler and application developers for example. But like all desirable system design criteria it must also be balanced against other desirable properties such as...
      • Being able to try out innovations without going through the full overhead of a standardization committee
      • Being able to ignore points of the specification which turned out to have unforeseen harmful consequences in practice
      This is why, to the best of my knowledge, there is no fully standard-conformant C or Fortran compiler in existence. All of those that I know of provide operating modes and extensions which violate the specification of their input languages. And in comparison to some widespread spec violations (e.g. how Intel compilers process IEEE-754 math in partial fast-math mode by default), changing the behavior of integer overflow certainly wouldn't be a huge deviation.
      Last edited by HadrienG; 07 March 2020, 11:17 AM.

      Comment


      • #63
        Originally posted by Weasel View Post
        You obviously have no idea what you're talking about. I suggest you read up on 2-adic numbers (or p-adic in general).
        And there, I think, we will end this particular discussion.

        Comment


        • #64
          Originally posted by HadrienG View Post
          First of all, it seems to me that much like those who are claiming that signed integer overflow was made undefined for performance reasons, you are attempting to rewrite the history of the C language in a direction that supports your point.

          One piece of evidence for this is that although you readily appeal to wrapping overflow defying mathematical common sense as a motivation for making it undefined, you never question the fact that unlike signed integer overflow, unsigned integer overflow is well-defined to be wrapping in C. To me, this inconsistency is a clear-cut sign that your theory of the thought process that the C designers went through is incorrect.
          I did not say signed integer overflow was made undefined for performance reasons. I said it was made that way because there is no sensible definition for the behaviour. Performance benefits are a side-effect, though I think the biggest benefit is that it allows tools to help find more errors in code.

          I also did not say that the wrapping behaviour of unsigned overflow is a good thing. I think that, like with signed overflow, the great majority of unsigned overflows are the result of bugs in the source code rather than being useful and intentional behaviour. However, it is occasionally helpful to have defined overflow behaviour, such as for wrapping counters and timers, or for multi-word arithmetic. I would prefer the language to have a way to specify exact ranges of validity for integer types, and different overflow models (undefined, wrapping, saturation, error handling, etc.) according to the programmer's need - but that would be too much for a relatively small language like C.
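
          A small sketch of the intentional-wrapping case mentioned above (the names and the interrupt handler are hypothetical, not taken from real code):

              #include <stdbool.h>
              #include <stdint.h>

              /* Free-running 32-bit tick counter that is expected to wrap around;
                 using an unsigned type makes that wrap-around well defined.
                 Incremented elsewhere, e.g. by a timer interrupt handler. */
              static volatile uint32_t now_ticks;

              static bool deadline_passed(uint32_t deadline)
              {
                  /* Modulo-2^32 subtraction keeps working across the wrap point,
                     as long as the two values are less than 2^31 ticks apart. */
                  return (uint32_t)(now_ticks - deadline) < UINT32_MAX / 2u;
              }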

          Unsigned types in C are defined as modulo arithmetic types, intended to match exactly a set of N binary bits. Signed integer types are at a slightly higher level, and are more abstract.

          (And it is also arguably another wart of C/++, as it means that a seemingly innocent program refactoring that turns unsigned integers into signed ones can add new avenues for undefined behavior, much to the surprise of programmers who are not well-versed in C standard idiosyncrasies.)
          Agreed. I am not convinced that the C rules for integer promotion and automatic conversions between signed and unsigned types are ideal. In fact, I am convinced that no choice would be ideal - any choice would be wrong in some cases. I prefer to have my compiler warn me about such mixes to be sure that I avoid them. I am trying to explain how C works - do not mistake that as an endorsement for all of its rules and design decisions. There is a great deal to like in C, but also plenty that (IMHO) would have been better done differently.

          Further, your appeal to mathematical common sense seems in and of itself highly questionable to me for two reasons.

          First, as Weasel is pointing out in a separate discussion thread, finite-range wrapping integers are a perfectly well-defined object from a mathematical point of view. It just happens that they are not the set of integers that you learned about in primary school, and operate under a slightly different set of rules.
          Certainly finite-range wrapping integers can be well-defined. Some languages (like Java) define their signed integer types this way. C does not. Every language's signed integer types are an attempt to model natural mathematical integers, and every choice of definition will come up short in some way. Some languages (like Python) avoid imposing artificial finite bounds, and thus get much closer to an accurate model - at significant cost in efficiency. Some (like Java) pick a model that gives defined values for almost all operations (except, at least, for division by 0), but throw away many identities and rules of natural integers. Others (like C) have a model that keeps many identities but gives no defined values for operations that cannot be modelled correctly. Some (like Ada) have a model that will give you the correct answer when possible, and throw an error when no correct answer can be given (i.e., on overflow). And some (like Matlab) will saturate - often a useful result, but again it loses many identities.

          Some examples of these identities include:
          • If a >= 0 and b >= 0 then (a + b) >= 0 and (a * b) >= 0
          • (a * b) / b = a
          • (a * b) * c = a * (b * c)
          These are true in C, but not in languages with wrapping signed integers. People writing code generally expect them to be true as well - code rarely copes well when variables that should only contain positive values suddenly end up negative. (It is also identities like this that let compilers optimise code when overflow is undefined.)
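
          A small illustration of how such an identity feeds into optimisation (my sketch; the exact folding depends on the compiler):

              /* Because signed overflow is undefined, a compiler may assume x + 1
                 cannot wrap, and fold this whole function to "return 1".  Built
                 with -fwrapv (wrapping semantics) it must emit a real comparison,
                 since x == INT_MAX would make x + 1 wrap around to INT_MIN. */
              int always_greater(int x)
              {
                  return x + 1 > x;
              }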

          Second, and perhaps most important, a programming language which pretends that these two sets are the same is making the same mistake that the Fortran designers historically made when they allowed optimizers to operate under the assumption that IEEE-754 floating-point numbers are the same thing as mathematical real numbers.
          That might be true, if C were a language that pretended its signed integers were an exact model for mathematical integers. I see no evidence to suggest that this is the case.

          At first sight, these highly abstracted designs may seem sensible, because they allow the programmer and the compiler to readily share a common mathematical model, which is widely agreed upon to be easier to think about than the actual mathematical model which the machine is operating upon. However, in practice, these designs cause major software porting and debugging issues, because they mean that compiler optimizer decisions (which are, by nature, hard to predict, unstable under small program modifications, and configuration-dependent) can have observable effects on the behavior and output of a program, sometimes to the point of making it go from a correct execution (from the point of view of the "set of all reals/integers" mathematical model) to an incorrect one.
          That paragraph jumbles several ideas. For any programming language, the programmer and the language (and therefore compiler) must share a common understanding of the mathematical model of its types. And the programmer must write code within that model. If the programmer stays within the model, the code will be correct (in the sense that the resulting binary will do what the programmer asked). If the programmer strays from the model, due to code mistakes or misunderstandings, then the result is likely to be incorrect. This applies to /all/ programming languages, with /all/ types of data. C is not some special case here. A programming language definition or standard (combined with any additions from the compiler in question) forms a contract with the programmer. If the programmer breaks the contract and tries to ask for something the language does not allow, he/she has no reason to expect any useful results.

          A compiler's optimiser works on the assumption that the programmer follows this contract. In most programming languages (C included), code size and timings are not considered observable behaviour. Barring bugs in the compiler (rare, but not unheard-of), optimisation does not change the specified observable behaviour of correct programs. If the behaviour of the program changes in an unspecified way, the source code was incorrect. (There are sequencing decisions in most languages that are unspecified, such as the ordering of events in different threads - that can of course change with different optimisations. But it can also change from different runs of the same binary.) Optimisation is most certainly /not/ unstable or hard to predict - if the source code is correct, the generated results will be correct regardless of optimisations.

          Porting code is similar. If the code is correctly written to the language standard, and to a common subset of the implementation-dependent behaviour of the two targets, then the result will be correct on both targets.

          I am not claiming this is always easy, or that C is as helpful as it could be here - it certainly is not. In particular, it can be extremely difficult to write some types of code in a way that is fully correct and portable, but also efficient. No one said that programming is easy, and no one said that C is a language suited to non-experts. But of all the challenges in porting or writing correct code, signed overflow behaviour should be an utterly negligible issue. Code where your integer arithmetic overflows is almost guaranteed to be wrong from the outset, regardless of optimisation or porting.

          As a result, when programmers enjoy a high degree of hardware standardization, as we do today, it is often better not to abstract away the actual mathematics carried out by the hardware, but instead to incorporate them into the language's abstract machine, so that the programmer is aware of them and can write code with full confidence that its execution will produce the same observable effects as a naive interpretation of the program would, even though the exact sequence of arithmetic instructions carried out by the hardware will obviously differ.
          For some aspects, I agree with you. Barring niche areas (like DSP programming), it makes a lot of sense to standardise on assumptions such as 8-bit bytes, and two's complement representation of signed integers. In many types of programming, you get additional standards and guarantees - if you are programming for modern Windows or for POSIX, you can assume "int" is at least 32-bit. These kinds of assumptions can be convenient and let you write simpler code.

          Signed integer overflow behaviour is a different matter. There are almost no situations in which it is useful. Giving it a definition would, in general, only limit code analysis tools' ability to spot errors, with no benefit in return.

          In my opinion, the case for deviating from this baseline rule is not well-motivated today for signed integers. Unlike, say, for memory abstraction, where the performance and portability benefits are clear.

          I also think that your idealistic depiction of undefined behavior as a way of not defining what shouldn't be defined gravely understates the damage done by making a commonly occurring programming language construct's behavior undefined.
          I disagree. The damage is done by people writing code with bugs in languages (and tools) that do not provide run-time checking features.

          Compare it to cars. Many modern cars have automatic brakes, lane-change alarms, automatic gears, and all sorts of features to help you drive safely and easily. High-performance sports cars, or very cheap simple cars, require a lot more skill to drive safely and smoothly - you need to change gears manually, you need to avoid stalling, you need to look behind you before reversing rather than using a camera. C is a language that requires skill, and manual work. You have a great deal of control, and a great deal of responsibility, and can get a great deal of efficiency. (Like with modern cars, modern languages can give you most of the efficiency with greater safety - that is an aim of C++, for example.)

          The bugs, crashes, security holes and other failings in code are because people write code that doesn't do what they want it to do, or what they think it does - /not/ because the language doesn't define the behaviour.

          A language that throws an error on integer overflow trades run-time efficiency for safety - it won't make the incorrect program correct, but it will limit the damage and help you find the problem. A language that defines the integer overflow has lower (but not zero) run-time costs, but gives you no help in finding the problem - it hides it, so that it propagates and perhaps causes more damage later. C, compiled normally, has the maximal run-time efficiency while also giving you no help in finding the problem, and propagates it. But because C does not define the behaviour, tools (such as gcc and clang) can provide debugging modes that catch the error at run-time. This is the best of all worlds. However, it requires that the programmer knows what they are doing.
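
          For completeness, here is a sketch of how a C programmer can opt into explicit checking where it matters, using the __builtin_add_overflow extension of gcc and clang (not standard C):

              #include <stdio.h>

              /* Adds a and b into *out; reports the problem and returns -1 on
                 overflow instead of letting a bogus value propagate silently. */
              static int checked_add(int a, int b, int *out)
              {
                  if (__builtin_add_overflow(a, b, out)) {
                      fprintf(stderr, "integer overflow in checked_add\n");
                      return -1;
                  }
                  return 0;   /* *out holds the mathematically correct sum */
              }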

          I think it is fair to say that a lot of people who program in C (or C++) are using the wrong language - they would be better using a safer language (Rust, Go, Python, etc.). It is also fair to say that a lot of programs that are written in C (or C++) would be better written in different languages. C programming should be left to people who know what they are doing. That does not mean that good C programmers don't make mistakes - they are human too. But good C programmers know how to use the language and tools to minimise the risks, and they are very unlikely to make silly and utterly unnecessary mistakes like overflowing their calculations.

          Due to the nature of undefined behavior (violation of an axiom which the compiler may or may not build upon), symptoms of undefined behavior are extremely difficult to debug. Any form of program instrumentation (e.g. adding a printf), change in compiler settings (e.g. different optimization settings, different HW architecture) or even change of program launch configuration (e.g. using a debugger) can make the symptoms vanish.
          That is true. But most cases of undefined behaviour are simply obvious programming bugs. If you have an array, don't try to use invalid indexes. If you have a pointer that might be null, check it before you use it. If you have a calculation, make sure your types are big enough for the results (including any intermediate results). This is programming basics, and applies for any language - avoiding undefined behaviour is no more than simply writing correct code. And checking that you have no undefined behaviour is simply checking that your code works - test it appropriately, use static analysis, code reviews, debugging tools, and whatever else helps. (Again, I am not claiming this is always easy.)
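
          The point about intermediate results is a classic trap; a minimal sketch (assuming 32-bit int and 64-bit long long):

              long long area_wrong(int w, int h)
              {
                  return w * h;            /* computed in int: overflows for w = h = 50000 */
              }

              long long area_right(int w, int h)
              {
                  return (long long)w * h; /* widen first, so the multiply is done in long long */
              }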

          And even the tools that are specifically meant to help developers diagnose undefined behavior issues (valgrind suite, sanitizers...) are still major programmer time sinks:
          • Each tool only touches a tiny fraction of the problem domain.
          • The tools don't compose well, and runs of many different tools may be required to find the "right" problem.
          • Most of them degrade application performance, to the point where the application may not be usable enough for the bug to be reproduced (or where it can vanish altogether, if it's timing-sensitive for example).
          • Tools which work on unmodified applications have huge amounts of false positives, while tools which use compiler instrumentation require a major chunk of the software stack to be rebuilt with special flags.
          • As C is not very amenable to static analysis (otherwise compiler lints would save us all the time), most tools rely on dynamic analysis that will only fire if the right run-time conditions are met (with these conditions being obviously affected by the tools themselves).
          I don't disagree with these points. C programming - writing code in C that you know for sure is correct - is not a simple task. For a lot of programs, you could do the same job in Python in a small fraction of the time and code size, and the loss of run-time efficiency probably wouldn't matter. (Or use Rust, Go, or whatever your favourite alternative is. Or even C++ - good use of container classes, smart pointers, string classes, RAII, etc., eliminates a whole range of common bugs in C code.)

          I'm not blaming the C designers for this part though, it is quite likely that no one would have even dreamt of the level of compiler optimizer sophistication that we can enjoy today back when the language was designed. The world changes, and that's why things like programming language designs must also be revisited from time to time...
          C /is/ revisited from time to time - the C18 standard was published just recently. The changes to C have mostly been minor since C99, but some things that needed changing have been changed. Notably absent from those changes is signed integer overflow - it remains undefined behaviour, because that is a good thing.



          Finally, I think you are mistaken when you write that language specification authors must resort to undefined behavior in order to allow programmer tooling to change the behavior of a certain language construct.

          An easy counter to this claim is that many other forms of behavior that aren't well-defined by the spec (implementation-defined, unspecified...) can often be defined by programmer tooling as operating in whichever way is appropriate for the tool without violating the spec. In this sense, there is no need to jump all the way to undefined behavior here.

          That is not quite right. In particular, behaviour that is implementation-defined must be fully defined and documented, and is often determined by the platform ABI rather than the compiler. Unspecified behaviour is a little more flexible, but the compiler must pick between the allowed options. Only undefined behaviour has full flexibility. If signed integer overflow were implementation defined, or defined to give an unspecified integer as a result, then a (conforming) compiler could not add run-time checks with error messages in the manner of gcc / clang sanitizers.

          Have you ever read the C standards? (One of the few changes in C18 is nicer typesetting, so go for that one!). In particular, Annex J lists all unspecified and implementation defined behaviour, and all explicitly undefined behaviour. For very few of the undefined behaviours is there any way you could give a reasonable definition. There is no way for the language to define the behaviour of trying to access an invalid pointer, or access an array out of bounds - the result could not be described in the language. There are some that could be defined as "the compiler should reject the code with an error message at compile time", and usually that is what good compilers do - the reason such behaviour is "undefined" is so that simple compilers can be made too.

          Signed integer overflow is one of the few cases where behaviour /could/ be changed from undefined to fully defined. I've given all the disadvantages that decision would have.

          Remember, /all/ languages have undefined behaviour. It is just that few languages are as honest and explicit about it as C. Anything that is not given defined behaviour in the C standards is, by definition, undefined behaviour as far as the language is concerned. The same applies to any language. (The behaviour can, of course, be defined elsewhere.)
          A less trivial counter-argument, which you may find harder to accept if you belong to a standards committee, is that not every developer tool that consumes programs written in the C language needs to operate in full and rigorous conformance with the C spec, and that's particularly true of development tools that don't leave the developer's machine and do not impact production.
          Compilers are always free to define whatever behaviour they want, when the standards don't define behaviour. gcc and clang have "-fwrapv" that gives signed integers wrapping semantics. It has always been the intention of the C designers and standards committees that this be allowed. It has always been intended that the standards provide a common subset that all C compilers support, and all C programs can rely on. But C compilers can provide more features and tighter semantics, and C programs can be written in a non-portable manner to use those features from specific tools.
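
          As a sketch of what relying on such a tool-specific guarantee might look like (the file and function names are made up):

              /* hash.c - deliberately relies on wrapping signed arithmetic, so it
                 is only correct for compilers invoked with wrapping semantics,
                 e.g.:  gcc -fwrapv -c hash.c */
              int mix(int h, int c)
              {
                  return h * 31 + c;   /* allowed to wrap under -fwrapv; undefined in plain ISO C */
              }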

          Now don't get me wrong, specification conformance is a very useful property in many respects, as it enables easier program portability and high-quality arguments between compiler and application developers for example. But like all desirable system design criteria it must also be balanced against other desirable properties such as...
          • Being able to try out innovations without going through the full overhead of a standardization committee
          • Being able to ignore points of the specification which turned out to have unforeseen harmful consequences in practice
          This is why, to the best of my knowledge, there is no fully standard-conformant C or Fortran compiler in existence. All of those that I know of provide operating modes and extensions which violate the specification of their input languages. And in comparison to some widespread spec violations (e.g. how Intel compilers process IEEE-754 math in partial fast-math mode by default), changing the behavior of integer overflow certainly wouldn't be a huge deviation.
          You are mixing up two different things here.

          First, compilers can support options or modes that violate the standards - that is to say, modes in which you can give it a conforming C program and the generated results don't match the expectations based on the standards. Most compilers have such non-conforming modes to some extent, and usually are non-conforming in their default modes. (For example, "int asm = 123;" is allowed in standard C, as "asm" is a valid identifier. But gcc treats "asm" as a keyword by default.)

          With such modes, you don't have a conforming compiler. gcc needs "-std=c11 -Wpedantic", or similar flags, to operate in a conforming mode. (Even then it is not quite perfect - close, but not exact.) Compilers are free to have conforming modes, and non-conforming modes. They should make it clear what you are getting, however.
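
          A tiny illustration of the difference (a sketch; the file name is made up, and the behaviour described is gcc's as I understand it):

              /* asm_id.c - "asm" is an ordinary identifier in strict ISO C, so this
                 file is accepted under:  gcc -std=c11 -Wpedantic -c asm_id.c
                 In gcc's default GNU C mode, "asm" is a keyword and the same line
                 is rejected as a syntax error. */
              int asm = 123;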

          Secondly, compilers are free to add any semantics that are not defined by the standards. They can raise any undefined behaviour to defined, or unspecified. They can raise any unspecified behaviour to specified. They must raise any implementation defined behaviour to defined (and documented).

          This means any compiler is entirely free to define integer overflow behaviour in any way it wants.

          (And IEEE-754 maths conformance is not required by the C standards.)

          Comment


          • #65
            Originally posted by DavidBrown View Post
            And there, I think, we will end this particular discussion.
            Then you should know that -1 is also represented as an infinite amount of 1s and the math itself "just works". You'd also know that the more zeros it has on the end, the closer it is to 0. And guess what happens when the power of 2 is so high it won't fit in X amount of bits? It might as well not be there, since it's so high it becomes irrelevant to what you need (lower bits).

            Damn, sounds like two's complement. "Magic".
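
            A small worked illustration of the point, for anyone following along (a sketch of mine, not part of the original post):

                  ...11111111
                +           1
                -------------
                  ...00000000

            The carry propagates forever, so the sum is 0 - which means the all-ones expansion behaves exactly like -1. Truncate it to 8 bits and you get 11111111 (0xFF), exactly the two's complement encoding of -1 in an 8-bit integer.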

            Comment


            • #66
              @DavidBrown

              Thanks for the very detailed and thoughtful answer. But here we're reaching a level of detail that, as I mentioned in a previous post, I don't have enough spare time to sustainably engage in as a hobby. So let's agree to disagree for now.

              Comment


              • #67
                Originally posted by HadrienG View Post
                @DavidBrown

                Thanks for the very detailed and thoughtful answer. But here we're reaching a level of detail that, as I mentioned in a previous post, I don't have enough spare time to sustainably engage in as a hobby. So let's agree to disagree for now.
                Fair enough. It's been thought-provoking, but it is time consuming.

                This is not really the ideal place to discuss this kind of thing - there are relatively few people who have the interest, experience and understanding for it. If you want to discuss more, you can try Usenet groups comp.lang.c and comp.lang.c++. There are more people who get involved in such threads, so you can see things from a wider viewpoint and hear more opinions.

                Comment
