
Linux Kernel Moving Ahead With Going From C89 To C11 Code


  • cyring
    replied
    Originally posted by phoronix View Post
    Phoronix: Linux Kernel Moving Ahead With Going From C89 To C11 Code

    It looks like for the Linux 5.18 kernel cycle coming up it could begin allowing modern C11 code to be accepted rather than the current Linux kernel codebase being limited to the C89 standard...

    https://www.phoronix.com/scan.php?pa...nel-C89-To-C11
Moving forward is a good thing, but I fear they will need tremendous non-regression testing!

  • coder
    replied
    Originally posted by kylew77 View Post
    In C/C++ programming classes in university I was always taught to declare all variables at the top of the function.
    Well, now that you're a free-thinking adult, you can decide the tradeoffs for yourself.
    : )

    That said, most big open source projects & companies have a coding style they want you to use. So, you'd probably end up having to do it the other way and you can see how you like that.

    I take any style manual with a grain of salt. I usually agree with most of it, but I also enjoy the chance to have a spirited debate (even if only in my own mind) on areas where I disagree. It turns out that a lot of style points aren't absolutes, but rather tradeoffs where you could debate both sides and a small change in the set of underlying assumptions could tip the balance one way or another. Or, sometimes, a set of recommendations is mutually interdependent - change one, and you see the arguments for the others fall away.

    Here's the style manual maintained by some members of the ISO C++ committee. If you blindly follow it, you won't go too far wrong. I probably agree with ~95% of it, though I say that without having read the whole thing.

  • kylew77
    replied
    Originally posted by kpedersen View Post

    Yeah I do get that and a lot of it is personal preference. However I do like also being able to scan the top of the function and see quite quickly what needs to be free'd. (This is only in C mind. In C++, this is less of an issue).
    In C/C++ programming classes in university I was always taught to declare all variables at the top of the function.

  • coder
    replied
    Originally posted by kpedersen View Post
    In this case I suppose I could return errno from the failed fopen but often I simply want to abstract the error return codes with a different scheme. Probably beyond the scope of these examples.
    Yeah, I get that. Some libraries want to use their own error codes. However, if you don't really have a good reason not to, then I'd probably stick with errno. Maybe also define some high-numbered values for errors specific to your library.
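A minimal sketch of that suggestion, assuming a hypothetical library (the `mylib_` and `MYLIB_ERR_*` names are invented for illustration): system failures pass `errno` straight through, and library-specific conditions get values well above any `errno` constant so the two ranges can never collide.

```cpp
#include <cerrno>
#include <cstdio>

/* Hypothetical error scheme: reuse errno values where they fit, and
 * reserve a high range for library-specific conditions. */
enum {
    MYLIB_ERR_BASE = 10000,            /* well above any errno value */
    MYLIB_ERR_BAD_HEADER = MYLIB_ERR_BASE,
    MYLIB_ERR_TRUNCATED,
};

/* Returns 0 on success, an errno value on a system failure, or a
 * MYLIB_ERR_* value on a library-level failure. */
int mylib_open(const char *path)
{
    std::FILE *f = std::fopen(path, "rb");
    if (!f)
        return errno;                  /* e.g. ENOENT, EACCES, ... */

    /* ... validate the file header here; a parse failure would
     * return MYLIB_ERR_BAD_HEADER instead. */
    std::fclose(f);
    return 0;
}
```

A caller can then test against `errno` constants and `MYLIB_ERR_*` values with a single `switch`.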

    Originally posted by kpedersen View Post
    I don't quite get this bit (I do get a warning from the compiler suggesting that something is off). If you effectively do this you run into problems:

    Code:
  goto cleanup;
  FILE *myfile = NULL;
cleanup:
  /* myfile will not be NULL, it will be pointing to random memory */
  if(myfile) fclose(myfile); /* This will try to fclose garbage memory */
    The issue being that you JMP to the cleanup label without initializing myfile. Meaning that your later cleanup logic is working on unassigned pointers. It is much cleaner in my opinion to just NULL-assign them all at the top before any goto.
    Oh, for sure. Yeah, in cases like that, what I've seen people do is:

    Code:
int status = FAILURE;
FILE *yourfile = fopen( "yourfile.txt", "rb+" );
if (!yourfile) goto cleanup0;

FILE *myfile = fopen( "myfile.txt", "rb+" );
if (!myfile) goto cleanup1;

// do some stuff with yourfile and myfile.
status = SUCCESS;

fclose( myfile );

cleanup1:
fclose( yourfile );

cleanup0:
return status;
    Originally posted by kpedersen View Post
    Early exit is ideal and I tend to avoid error codes in my C++ sections. I would throw an exception at this point i.e if a file failed to open. The "free" memory management with RAII is great but I do also want to notify the calling code of failure. Exceptions are very useful for actual errors.
    Yeah, I agree. However, there are some functions which are likely to fail, and that's where exceptions are best avoided. Things like DB queries that might have zero or one result. Maybe if you get a result, you need to do some additional processing, but it's nice that you can have an early return, if not.

Last time I benchmarked it (admittedly, more than a decade ago), exception handling was surprisingly slow. Something on the order of 100k/sec, IIRC. More surprisingly, I found that the stack unwinding appeared to acquire some global mutex, because performance when multiple threads were throwing exceptions was much lower.

    Originally posted by kpedersen View Post
    I appreciate the discussion
    Absolutely. I sure don't mind occasionally visiting the land of plain C, but I probably wouldn't want to live there!
    : )

    C11 does take some of the sting out of it, though.

    Last edited by coder; 07 March 2022, 11:32 PM.

  • kpedersen
    replied
    Originally posted by coder View Post
    You really ought to return errno.
    In this case I suppose I could return errno from the failed fopen but often I simply want to abstract the error return codes with a different scheme. Probably beyond the scope of these examples.

    Originally posted by coder View Post
    Should only be garbage after the scope is exited. As long as you're still within the scope where it was declared, then it shouldn't matter where in that scope it's declared.
    I don't quite get this bit (I do get a warning from the compiler suggesting that something is off). If you effectively do this you run into problems:

    Code:
      goto cleanup;
      FILE *myfile = NULL;
    cleanup:
      /* myfile will not be NULL, it will be pointing to random memory */
      if(myfile) fclose(myfile); /* This will try to fclose garbage memory */
    The issue being that you JMP to the cleanup label without initializing myfile. Meaning that your later cleanup logic is working on unassigned pointers. It is much cleaner in my opinion to just NULL-assign them all at the top before any goto.

    Originally posted by coder View Post
    RAII is useful even without exceptions, because if you hit some error, you can just return it right there.
    Early exit is ideal and I tend to avoid error codes in my C++ sections. I would throw an exception at this point i.e if a file failed to open. The "free" memory management with RAII is great but I do also want to notify the calling code of failure. Exceptions are very useful for actual errors.

    Originally posted by coder View Post
    However, there are rare cases where it's justified to have some local try/catch, which could potentially mirror your goto cleanup; approach. However, if I see some code where basically every function has at least one try/catch block, I know someone isn't using exceptions very effectively.
Absolutely. The goto stuff in C++ can be handled more effectively by early exit or by throwing an exception, and nothing more. Nothing in that function warrants actually catching and handling the exception. Letting it propagate, unwind the stack, and be dealt with elsewhere is the only real option.

    Anyway, not really sure what any of my points are anymore. I appreciate the discussion

  • coder
    replied
    Originally posted by kpedersen View Post
    Code:
    FILE *fileA = NULL;
    FILE *fileB = NULL;
    int rc = 0;
    
    fileA = fopen("somepath", "r");
    if(!fileA) { rc = 1; goto cleanup; }
    
    fileB = fopen("somepath2", "r");
    if(!fileB) { rc = 1; goto cleanup; }
    
    cleanup:
    if(fileA) fclose(fileA);
    if(fileB) fclose(fileB);
    
    return rc;
You really ought to return errno. Or, I see some people return -errno, leaving non-negative values available to communicate other things. BTW, errno is now a thread-local, so there's no race condition using it in multi-threaded programs. You just have to make sure to read it before calling any other functions that set it, in that thread.
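A short sketch of that -errno convention (the one the Linux kernel itself uses internally): negative values carry the error, and non-negative values carry a useful result such as a byte count. The function name is invented for illustration.

```cpp
#include <cerrno>
#include <cstdio>

// Read up to 'cap' bytes from the start of 'path' into 'buf'.
// Returns the byte count (>= 0) on success, or -errno on failure.
long read_first_bytes(const char *path, char *buf, long cap)
{
    std::FILE *f = std::fopen(path, "rb");
    if (!f)
        return -errno;                        // e.g. -ENOENT

    long n = (long)std::fread(buf, 1, (size_t)cap, f);
    std::fclose(f);
    return n;
}
```

The caller checks `if (n < 0)` and can recover the original errno as `-n`, while the success path gets the count for free.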

    Originally posted by kpedersen View Post
    if the variable isn't declared and set to NULL at the top, it could be garbage memory by the time the cleanup happens so the if(fileA) check could return true and you get a use-before-assignment error.
    Should only be garbage after the scope is exited. As long as you're still within the scope where it was declared, then it shouldn't matter where in that scope it's declared.

    I take your point about having the init section match the cleanup, though.

    Originally posted by kpedersen View Post
    Again this isn't really possible in C++ (because variable declaration executes code in some cases); but then again, this goto approach is really never valid in C++.
    In C++, I'd rather say it isn't always possible.

    Originally posted by kpedersen View Post
    RAII and exceptions solve the problem (in a more elegant way).
    RAII is useful even without exceptions, because if you hit some error, you can just return it right there. Your objects' destructors will get invoked for you. It means having minimal overhead for error handling, letting you focus on the typical execution path.
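A minimal sketch of that point, using standard streams (the function and file names are invented): on every error path we just `return`, and the `ifstream`/`ofstream` destructors close whatever was already opened, so there is no hand-written cleanup section to keep in sync.

```cpp
#include <fstream>
#include <string>

// Copy the first line of 'src' into 'dst'. Early returns need no
// cleanup code: stream destructors close the files on every path.
bool copy_first_line(const std::string &src, const std::string &dst)
{
    std::ifstream in(src);
    if (!in)
        return false;          // nothing opened yet, nothing to undo

    std::ofstream out(dst);
    if (!out)
        return false;          // 'in' is closed automatically here

    std::string line;
    std::getline(in, line);
    out << line << '\n';
    return out.good();         // both streams closed on return
}
```

Compare that with the goto version above: the error handling shrinks to bare `return` statements, which is exactly the "minimal overhead" being described.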

    However, there are rare cases where it's justified to have some local try/catch, which could potentially mirror your goto cleanup; approach. However, if I see some code where basically every function has at least one try/catch block, I know someone isn't using exceptions very effectively.

  • kpedersen
    replied
    Originally posted by coder View Post
    When I look at that, I see something needlessly verbose. An easy way to improve the information density of your program is to use C99-style:
    Yeah I do get that and a lot of it is personal preference. However I do like also being able to scan the top of the function and see quite quickly what needs to be free'd. (This is only in C mind. In C++, this is less of an issue).

    Now for a reason that isn't to everyone's taste (though is used quite a lot in Linux / BSD drivers):

    Code:
      FILE *fileA = NULL;
      FILE *fileB = NULL;
      int rc = 0;
    
      fileA = fopen("somepath", "r");
      if(!fileA) { rc = 1; goto cleanup; }
    
      fileB = fopen("somepath2", "r");
      if(!fileB) { rc = 1; goto cleanup; }
    
    cleanup:
      if(fileA) fclose(fileA);
      if(fileB) fclose(fileB);
    
      return rc;
    If you have a lot of allocations (including resources as shown above) in a function, you end up needing to duplicate cleanup code each time you early exit. This goto approach has some merit but importantly, if the variable isn't declared and set to NULL at the top, it could be garbage memory by the time the cleanup happens so the if(fileA) check could return true and you get a use-before-assignment error.

    Again this isn't really possible in C++ (because variable declaration executes code in some cases); but then again, this goto approach is really never valid in C++. RAII and exceptions solve the problem (in a more elegant way).
    Last edited by kpedersen; 06 March 2022, 08:27 AM.

  • coder
    replied
    Originally posted by kpedersen View Post
    Whereas in C, I would tend to do the following:

    Code:
    FILE *fileA = NULL;
    FILE *fileB = NULL;
    
    fileA = fopen("somepath", "r");
    if(!fileA) ...
    
    fileB = fopen("somepath2", "r");
    if(!fileB) ...
    Declaring the pointers at the top.
    When I look at that, I see something needlessly verbose. An easy way to improve the information density of your program is to use C99-style:

    Code:
    FILE *fileA = fopen("somepath", "r");
    if(!fileA) ...
    
    FILE *fileB = fopen("somepath2", "r");
    if(!fileB) ...

  • kpedersen
    replied
    Originally posted by coder View Post
    Huh? If you're in the scope where they're defined, then they're typically going to get executed anyhow. Delaying execution doesn't really have intrinsic benefits.
    Something like this would be a bit weird:

    Code:
    std::ifstream fileA;
    std::ifstream fileB;
    
    fileA.open("somepath");
    if(fileA.is_open()) ...
    
    fileB.open("somepath2");
    if(fileB.is_open()) ...
    You would generally create the ifstream variables as you open them (passing the path into the constructor). Otherwise it feels like a slightly janky two stage construction.

    Whereas in C, I would tend to do the following:

    Code:
    FILE *fileA = NULL;
    FILE *fileB = NULL;
    
    fileA = fopen("somepath", "r");
    if(!fileA) ...
    
    fileB = fopen("somepath2", "r");
    if(!fileB) ...
    Declaring the pointers at the top.

  • coder
    replied
    Originally posted by kpedersen View Post
    I suppose you don't really get a choice with C++, you tend to not want to execute the constructor code of non-PODs until absolutely needed.
    Huh? If you're in the scope where they're defined, then they're typically going to get executed anyhow. Delaying execution doesn't really have intrinsic benefits.

    Now, I guess where you have a point is if the constructor of one object has a data-dependency that can only be satisfied by the results of some other bit of code. So, that can push you to define something later in the scope, whether you want to or not.

    Originally posted by kpedersen View Post
    Even order of variables matter because declaring them in many cases executes code (i.e Constructors) and some babysitting of RAII again needs the order.
    I think I know what you're saying, but I think the relevant point is that destructors will execute in the opposite order as that of the variables' definitions. So, if there are any order-dependencies on cleanup, such as one object referring to another, then you need to be mindful of that.
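A tiny demonstration of that ordering rule, with an invented `Tracer` type that logs its own lifetime: the object defined first is destroyed last, so dependent objects should be defined after the things they refer to.

```cpp
#include <string>
#include <vector>

// Logs construction and destruction so the ordering is observable.
struct Tracer {
    std::vector<std::string> &log;
    std::string name;
    Tracer(std::vector<std::string> &l, std::string n)
        : log(l), name(std::move(n)) { log.push_back("ctor " + name); }
    ~Tracer() { log.push_back("dtor " + name); }
};

std::vector<std::string> run()
{
    std::vector<std::string> log;
    {
        Tracer a(log, "a");   // defined first, destroyed last
        Tracer b(log, "b");   // defined last, destroyed first
    }                         // scope exit: ~b runs, then ~a
    return log;
}
```

So if `b` held a pointer into `a`, this reverse order is exactly what you want: `b` goes away while `a` is still alive.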

    Originally posted by kpedersen View Post
    In C having all variables at the top makes it easy to audit for i.e memory leaks,
    No, I don't follow. Especially since pointer variables don't always own the memory they point to. What you really care about is matching up allocations with deallocations, and C doesn't tie either of those to where the variable is defined. So, IMO, it's pretty much unrelated.

    For auditing, the benefit of defining variables at the point of use is that you end up with fewer overall lines of code. And that helps auditing, because you can simply fit more on the screen, without the code seeming crowded.

    Anyway, the best approach to combating leaks is some combination of static analysis and running runtime leak-checkers on automated tests.
