I'm not convinced that using anything other than the native width is more efficient (FOR CALCULATION!). Yes, there is a cost/benefit analysis in other regards that is also very important, something the compiler can't even know about. The data has to move in and out of the CPU over a bus. If the bus is narrower than the native size of the CPU, some magic will need to happen. If it stays internal to the CPU (registers), then the native size is the most efficient in any case.
Using bigger ints, however, has the disadvantage that they use more memory, and that can potentially slow things down again (more data to copy into memory, etc.).
I do fully agree with you here, and we should use u8, u16, etc. as they fit. And yes, the PROGRAMMER needs to know that his data fits into a certain variable. I never said the compiler should check for 'valid overflowing'; it's a dumb technique, but it does get used. Anyway, I only wanted to say that the compiler can be made smart enough to automatically replace u8 (or uint8, or whatever) with whatever it sees fit to be the fastest / least memory requirement (based on a switch, if you wish). I as a developer know it will fit into a u8; the compiler can optimize it to whatever it wants. Even in the case of the slower 16-bit integers, you can employ the fast integer types and benefit from well-thought-out typedefs for specific architectures, ruling out potential slowdowns.
The compiler is a smart guy, but it is no magician: it will never risk anything, and it is not an artificial intelligence. There may be constant improvements in this area, but limiting the integer size requires you to know that the integer will _never_ overflow.
How would a compiler predict that when it has to optimize a stdin parser?
We will see 128-bit data types soon. You know why? It already exists. The first result for uint128 on Google? http://msdn.microsoft.com/en-us/library/cc230384.aspx They even use a valid example: storing an IPv6 address. Yes, you could store it in a struct, but that's beside the point. Actually, a 128-bit int could potentially be better optimized, so I can see it making sense when used in routers. (I know it's defined as two 64-bit values for now, but the data type is there.)

PS: I don't think we will see 128-bit addresses soon, because 64-bit addresses can span virtual memory with a maximum size of 16 exbibytes, which is about 17 billion gibibytes.
But then I may only sound like Bill Gates, who allegedly stated this in 1981:
But there's actually more valid usage for it today: http://lxr.free-electrons.com/source.../b128ops.h#L54
Yes, it's in the kernel source already. Right now the only sensible use is in cryptographic routines, and it makes sense: if your encryption routine makes use of 128-bit values, why do magic with structs and unions to work around a shortcoming?