Originally posted by phoronix: Linux Kernel Moving Ahead With Going From C89 To C11 Code
Originally posted by kylew77: In C/C++ programming classes in university I was always taught to declare all variables at the top of the function.
: )
That said, most big open source projects & companies have a coding style they want you to use. So, you'd probably end up having to do it the other way anyway, and then you can see how you like it.
I take any style manual with a grain of salt. I usually agree with most of it, but I also enjoy the chance to have a spirited debate (even if only in my own mind) on areas where I disagree. It turns out that a lot of style points aren't absolutes, but rather tradeoffs where you could debate both sides and a small change in the set of underlying assumptions could tip the balance one way or another. Or, sometimes, a set of recommendations is mutually interdependent - change one, and you see the arguments for the others fall away.
Here's the style manual maintained by some members of the ISO C++ committee. If you blindly follow it, you won't go too far wrong. I probably agree with ~95% of it, though I say that without having read the whole thing.
Originally posted by kpedersen:
Yeah, I do get that, and a lot of it is personal preference. However, I do also like being able to scan the top of the function and see quite quickly what needs to be free'd. (This is only in C, mind. In C++, this is less of an issue.)
Originally posted by kpedersen: In this case I suppose I could return errno from the failed fopen but often I simply want to abstract the error return codes with a different scheme. Probably beyond the scope of these examples.
Originally posted by kpedersen: I don't quite get this bit (I do get a warning from the compiler suggesting that something is off). If you effectively do this you run into problems:
Code:
    goto cleanup;

    FILE *myfile = NULL;

cleanup:
    /* myfile will not be NULL, it will be pointing to random memory */
    if(myfile)
        fclose(myfile); /* This will try to fclose garbage memory */
Code:
    int status = FAILURE;   // FAILURE/SUCCESS: placeholder status codes

    FILE *yourfile = fopen( "yourfile.txt", "rb+" );
    if (!yourfile)
        goto cleanup0;      // nothing opened yet, so nothing to close

    FILE *myfile = fopen( "myfile.txt", "rb+" );
    if (!myfile)
        goto cleanup1;      // only yourfile needs closing

    // do some stuff with yourfile and myfile.
    status = SUCCESS;

    fclose( myfile );
cleanup1:
    fclose( yourfile );
cleanup0:
    return status;
Originally posted by kpedersen: Early exit is ideal and I tend to avoid error codes in my C++ sections. I would throw an exception at this point, i.e. if a file failed to open. The "free" memory management with RAII is great, but I do also want to notify the calling code of failure. Exceptions are very useful for actual errors.
Last time I benchmarked it (admittedly, more than a decade ago!), exception handling was surprisingly slow. Something on the order of 100k/sec, IIRC. More surprisingly, I found that the stack unwinding appeared to acquire some global mutex, because performance when multiple threads were throwing exceptions was much lower.
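Roughly the kind of micro-benchmark I have in mind, as a sketch (the iteration count, clock choice, and single-threaded setup here are arbitrary stand-ins, not the original test):
Code:
// Sketch of a throw/catch throughput micro-benchmark (not the original test).
// Build with optimizations, e.g.: g++ -O2 ex_bench.cpp
#include <chrono>
#include <cstdio>
#include <stdexcept>

int main()
{
    const long iterations = 1000000;    // arbitrary choice
    volatile long caught = 0;           // discourage the optimizer from removing the loop

    const auto start = std::chrono::steady_clock::now();
    for (long i = 0; i < iterations; ++i)
    {
        try
        {
            throw std::runtime_error("bench");
        }
        catch (const std::exception &)
        {
            caught = caught + 1;
        }
    }
    const auto stop = std::chrono::steady_clock::now();

    const double seconds = std::chrono::duration<double>(stop - start).count();
    std::printf("%ld throws in %.3f s (~%.0f throws/sec)\n",
                (long) caught, seconds, caught / seconds);
    return 0;
}
A multi-threaded variant would just run the same loop from several std::thread instances and compare the aggregate rate.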
Originally posted by kpedersen: I appreciate the discussion
: )
C11 does take some of the sting out of it, though.
Last edited by coder; 07 March 2022, 11:32 PM.
Originally posted by coder: You really ought to return errno.
Originally posted by coder: Should only be garbage after the scope is exited. As long as you're still within the scope where it was declared, then it shouldn't matter where in that scope it's declared.
Code:
    goto cleanup;

    FILE *myfile = NULL;

cleanup:
    /* myfile will not be NULL, it will be pointing to random memory */
    if(myfile)
        fclose(myfile); /* This will try to fclose garbage memory */
Originally posted by coder: RAII is useful even without exceptions, because if you hit some error, you can just return it right there.
Originally posted by coder: However, there are rare cases where it's justified to have some local try/catch, which could potentially mirror your goto cleanup; approach. That said, if I see some code where basically every function has at least one try/catch block, I know someone isn't using exceptions very effectively.
Anyway, not really sure what any of my points are anymore. I appreciate the discussion
Originally posted by kpedersen:
Code:
FILE *fileA = NULL;
FILE *fileB = NULL;
int rc = 0;

fileA = fopen("somepath", "r");
if(!fileA)
{
    rc = 1;
    goto cleanup;
}

fileB = fopen("somepath2", "r");
if(!fileB)
{
    rc = 1;
    goto cleanup;
}

cleanup:
    if(fileA) fclose(fileA);
    if(fileB) fclose(fileB);

    return rc;
Originally posted by kpedersen: If the variable isn't declared and set to NULL at the top, it could be garbage memory by the time the cleanup happens, so the if(fileA) check could return true and you get a use-before-assignment error.
I take your point about having the init section match the cleanup, though.
Originally posted by kpedersen: Again this isn't really possible in C++ (because variable declaration executes code in some cases); but then again, this goto approach is really never valid in C++.
Originally posted by kpedersen: RAII and exceptions solve the problem (in a more elegant way).
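Right, and RAII is useful even without exceptions, because if you hit some error you can just return it right there. Something like this rough sketch of your fileA/fileB example (same placeholder paths as above; returning an int status is just one possible error scheme):
Code:
// RAII sketch of the fileA/fileB example (placeholder paths, illustrative only).
#include <cstdio>
#include <fstream>

int process_files()
{
    std::ifstream fileA("somepath");
    if (!fileA)
        return 1;           // nothing to clean up yet

    std::ifstream fileB("somepath2");
    if (!fileB)
        return 1;           // fileA is closed by its destructor on this return

    // ... do some stuff with fileA and fileB ...

    return 0;               // both streams are closed by their destructors
}

int main()
{
    std::printf("process_files() returned %d\n", process_files());
}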
However, there are rare cases where it's justified to have some local try/catch, which could potentially mirror your goto cleanup; approach. That said, if I see some code where basically every function has at least one try/catch block, I know someone isn't using exceptions very effectively.
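For instance, the kind of local try/catch I'd consider justified sits at a boundary where you want to contain or translate a failure, not act as a general cleanup mechanism. A rough sketch (assuming C++17 for std::optional; LoadConfig, Config, and ConfigError are made-up names for the example):
Code:
// Sketch only: LoadConfig(), Config, and ConfigError are hypothetical names.
#include <cstdio>
#include <fstream>
#include <iterator>
#include <optional>
#include <stdexcept>
#include <string>

struct Config { std::string contents; };

struct ConfigError : std::runtime_error
{
    using std::runtime_error::runtime_error;
};

// RAII members clean themselves up, so no try/catch is needed just for cleanup;
// the catch here only exists to translate the failure at this boundary.
std::optional<Config> LoadConfig(const std::string &path)
{
    try
    {
        std::ifstream in(path);
        if (!in)
            throw ConfigError("cannot open " + path);

        Config cfg;
        cfg.contents.assign(std::istreambuf_iterator<char>(in),
                            std::istreambuf_iterator<char>());
        return cfg;
    }
    catch (const std::exception &)
    {
        return std::nullopt;    // report "no config" rather than leaking the exception
    }
}

int main()
{
    if (const auto cfg = LoadConfig("example.conf"))    // hypothetical file name
        std::printf("loaded %zu bytes\n", cfg->contents.size());
    else
        std::printf("no config\n");
}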
Originally posted by coder: When I look at that, I see something needlessly verbose. An easy way to improve the information density of your program is to use C99-style:
Now for an approach that isn't to everyone's taste (though it is used quite a lot in Linux / BSD drivers):
Code:
FILE *fileA = NULL;
FILE *fileB = NULL;
int rc = 0;

fileA = fopen("somepath", "r");
if(!fileA)
{
    rc = 1;
    goto cleanup;
}

fileB = fopen("somepath2", "r");
if(!fileB)
{
    rc = 1;
    goto cleanup;
}

cleanup:
    if(fileA) fclose(fileA);
    if(fileB) fclose(fileB);

    return rc;
Again this isn't really possible in C++ (because variable declaration executes code in some cases); but then again, this goto approach is really never valid in C++. RAII and exceptions solve the problem (in a more elegant way).
Last edited by kpedersen; 06 March 2022, 08:27 AM.
Originally posted by kpedersen: Whereas in C, I would tend to do the following:
Code:
FILE *fileA = NULL;
FILE *fileB = NULL;

fileA = fopen("somepath", "r");
if(!fileA)
    ...

fileB = fopen("somepath2", "r");
if(!fileB)
    ...
When I look at that, I see something needlessly verbose. An easy way to improve the information density of your program is to use C99-style:
Code:
FILE *fileA = fopen("somepath", "r");
if(!fileA)
    ...

FILE *fileB = fopen("somepath2", "r");
if(!fileB)
    ...
Originally posted by coder: Huh? If you're in the scope where they're defined, then they're typically going to get executed anyhow. Delaying execution doesn't really have intrinsic benefits.
Code:
std::ifstream fileA;
std::ifstream fileB;

fileA.open("somepath");
if(fileA.is_open())
    ...

fileB.open("somepath2");
if(fileB.is_open())
    ...
Whereas in C, I would tend to do the following:
Code:
FILE *fileA = NULL;
FILE *fileB = NULL;

fileA = fopen("somepath", "r");
if(!fileA)
    ...

fileB = fopen("somepath2", "r");
if(!fileB)
    ...
Originally posted by kpedersen: I suppose you don't really get a choice with C++; you tend to not want to execute the constructor code of non-PODs until absolutely needed.
Now, I guess where you have a point is if the constructor of one object has a data-dependency that can only be satisfied by the results of some other bit of code. So, that can push you to define something later in the scope, whether you want to or not.
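To make that concrete with a made-up example (Buffer, parse_size(), and the header argument are purely illustrative names):
Code:
// Made-up illustration of a constructor data-dependency forcing a later definition.
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical type whose constructor needs a size that is only known at runtime.
class Buffer
{
public:
    explicit Buffer(std::size_t size) : data_(size) {}
    std::size_t size() const { return data_.size(); }

private:
    std::vector<char> data_;
};

// Pretend this extracts a length field from a header string.
std::size_t parse_size(const std::string &header)
{
    return header.size() * 2;
}

void handle(const std::string &header)
{
    // Buffer can't be defined at the top of the function:
    // its constructor argument only exists after parse_size() has run.
    const std::size_t n = parse_size(header);
    Buffer buf(n);

    // ... use buf ...
    (void) buf.size();
}

int main()
{
    handle("example-header");
}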
Originally posted by kpedersen: Even the order of variables matters, because declaring them in many cases executes code (i.e. constructors), and some babysitting of RAII again depends on that order.
Originally posted by kpedersen: In C, having all variables at the top makes it easy to audit for e.g. memory leaks,
For auditing, the benefit of defining variables at the point of use is that you end up with fewer overall lines of code, which helps because you can simply fit more on the screen without the code seeming crowded.
Anyway, the best approach to combating leaks is some combination of static analysis and running runtime leak-checkers on automated tests.
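As a trivial example on the runtime side, a deliberately leaky program like the sketch below is the kind of thing those checkers flag on a test run (the AddressSanitizer and Valgrind invocations in the comments are the usual ones, quoted from memory):
Code:
// Deliberate leak, to show what a runtime leak checker reports.
//
// AddressSanitizer (GCC/Clang):  g++ -g -fsanitize=address leak.cpp && ./a.out
// Valgrind:                      g++ -g leak.cpp && valgrind --leak-check=full ./a.out
//
// On Linux, both should report the 64-byte allocation below as leaked at exit.
#include <cstdio>

int main()
{
    char *buf = new char[64];       // never freed
    std::snprintf(buf, 64, "hello");
    std::printf("%s\n", buf);
    return 0;                       // leak: missing delete[] buf;
}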