Originally posted by coder
I want the programmer to use the correct language constructs for the task in hand. Then you do not need backstops - they add nothing but confusion and totally untestable code. You cannot rely on garbage collection handling the release of critical resources at an appropriate time. Therefore, you must use something else - something that you can rely on. And once you have that, garbage collection is no longer relevant.
You're failing to distinguish between what the language actually implements and the prescribed best practices. Take the example of a Python "with" statement: it simply builds atop the object's existing semantics. It closes the file or releases the lock because that's what the object's cleanup (its "__exit__" method) does. So you're wrong to say that Python doesn't use garbage collection for those things; rather, it provides a better alternative mechanism.
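A minimal sketch of that point: the "with" statement invokes the object's own cleanup protocol at a deterministic point, independent of whenever the garbage collector happens to run.

```python
# "with" drives the object's cleanup protocol (__enter__/__exit__)
# deterministically, rather than waiting for garbage collection.
import io

buf = io.StringIO()
with buf:
    buf.write("hello")

print(buf.closed)  # True: closed by __exit__, not by the collector
```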
It is possible that you are conflating deterministic destructors with the term "garbage collection". That would explain a lot of our disagreement here.
Garbage collection is asynchronous and non-deterministic; it reclaims memory from objects once there are no longer any live references to them. There are various garbage collection algorithms (with different advantages and disadvantages), but they typically involve scanning working memory for object references. By working in the background (often in their own thread), they save the main working thread(s) some effort. But they don't always catch everything, and they may need to lock memory areas.
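A concrete illustration of that non-determinism in CPython (a sketch; CPython frees most objects immediately via reference counting, but reference cycles sit unreclaimed until the cyclic collector gets around to them):

```python
# A reference cycle's refcounts never reach zero, so the objects wait
# for the cyclic garbage collector rather than being freed immediately.
import gc

class Node:
    def __init__(self):
        self.other = None

gc.disable()                 # stop automatic cycle collection
a, b = Node(), Node()
a.other, b.other = b, a      # create a reference cycle
del a, b                     # no live references remain, yet not freed
collected = gc.collect()     # explicit sweep reclaims the cycle
gc.enable()
print(collected >= 2)        # True: the two Nodes (plus their dicts)
```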
Destructors in a language like C++ run at precisely specified points in the code: they run when the object's lifetime ends. For local objects, that is the end of their scope. It is absolutely fine to use such synchronous destructors for releasing resources; indeed, it is the preferred method, since it is automatic and the destructor is guaranteed to run regardless of exceptions.

Python does not have such synchronous cleanup for its normal "__del__" destructor methods. But it does have it in the "__exit__" methods of objects that can be used in "with" statements. Thus in Python, you use "with" statements for resources, and do not rely on garbage collection. In C++, where destructors are synchronous, you use RAII ("Resource Acquisition Is Initialisation") for all resources, including memory. (There are techniques available if you want asynchronous resource release.)
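The exception guarantee can be seen directly. A minimal sketch (the Resource class is illustrative): a context manager's "__exit__" runs when the block ends, even when the block raises, mirroring C++ destructor timing.

```python
# __exit__ runs at a precisely specified point - the end of the "with"
# block - even if the block raises an exception.
class Resource:
    def __init__(self):
        self.released = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.released = True      # deterministic cleanup
        return False              # propagate any exception

r = Resource()
try:
    with r:
        raise RuntimeError("failure inside the block")
except RuntimeError:
    pass

print(r.released)  # True: cleanup ran despite the exception
```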
The question you need to ask is: what if you didn't catch it? For an arbitrary mutex, would it be better to have a dangling-lock bug that you might never hit in testing, and only discover once the software is in the hands of customers? Or would you rather the lock get freed eventually? I know some people prefer the more catastrophic failure, but that presumes very good test coverage, which often isn't the case.
No, the way to handle locks and critical resources is to have clearly defined conventions for handling them. If you are programming in Python, you know all lock acquisition must take place within a "with" statement. If you are using Java, use a "try/finally" block. For C++, use RAII. Use the standard, clear and safe methods available for the language in question. Write the code in a clear manner, such as keeping the containing block small so that it is obviously correct.
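The Python convention in practice, as a small sketch: every acquisition sits inside a "with" statement, so the release happens on every exit path and nothing is left dangling.

```python
# Lock acquisition confined to a "with" statement: the lock is released
# when the block exits, whether normally or via an exception.
import threading

counter = 0
lock = threading.Lock()

def increment():
    global counter
    with lock:                 # acquired here, released on block exit
        counter += 1

threads = [threading.Thread(target=increment) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)        # 10
print(lock.locked())  # False: nothing left dangling
```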
While debugging, you might make use of hooks in your garbage collection that check for lost locks - but they should not release the lock. They should yell at the developer, with any information they can give to help find the bug.
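One way to build such a debugging hook (a sketch; "CheckedLock" and its messages are illustrative, not a standard API): attach a finalizer that warns about a still-held lock when the wrapper is collected, without releasing the lock on the developer's behalf.

```python
# A lock wrapper whose finalizer yells about a lock that was never
# released - it reports the bug but deliberately does NOT release.
import threading
import warnings
import weakref

class CheckedLock:
    def __init__(self, name):
        self.name = name
        self._lock = threading.Lock()
        # The callback must not reference self, or it would keep the
        # object alive; it captures only the inner lock and the name.
        weakref.finalize(self, CheckedLock._check, self._lock, name)

    def acquire(self):
        self._lock.acquire()

    def release(self):
        self._lock.release()

    @staticmethod
    def _check(lock, name):
        if lock.locked():
            warnings.warn(f"lock {name!r} was still held at collection")

lk = CheckedLock("db")
lk.acquire()   # bug: never released
del lk         # finalizer fires and warns; the lock itself stays held
```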
What about file descriptors, where the program leaks more and more fds the longer it runs, until operations that use file descriptors start failing at random? Is that a preferable outcome?
I think you're being too idealistic.