Rebuilding Fedora Under GCC 6 Has Uncovered An Assortment Of Problems

  • codensity
    replied
    Originally posted by JeanPaul145:

    If you truly believe that, then you're a lot less knowledgeable than you think you are. Even though a compiler can prevent things like resource/memory leaks through static analysis, this does NOT mean that a compiler can be written that generates bug-free code. In fact, while proving that a given program is bug-free is possible (though tedious to the point of not being worth it), proving that an arbitrary program is bug-free is not possible.

    Also, a language like C is a *LOT* less complex than something like C++, and it's a lot easier to write more portable code in C, to boot. Anyone with actual experience writing cross-platform code in C++ will tell you the same. Put bluntly, C++ introduces so much pain (long compilation times; the ABI breaks with every new update of the compiler you're using; ABIs between compilers are not even close to being compatible; language features that are complex to the point of coaxing programmers into doing the Wrong Thing; that same language complexity generally also means that each team member speaks a slightly different C++ dialect; etc.) and takes away so little of the pain endured when working with C (mostly the Makefile build system, the lack of a proper package manager for C libs, and raw pointer use everywhere) that using C++ is not sanely justifiable for serious development work. Even MS (one of the biggest C++ users) agreed with this, otherwise they wouldn't have bothered developing C# in the first place.
    Perhaps we're speaking about different things. These days you actually can prove the correctness of even pre-existing C programs, which has been a focus of much work in separation logic and step-indexing (though it will probably be less painful to just start from scratch and avoid C). However, I agree that no tool can check the specification itself, which is perhaps your concern. Still, I would argue that a machine-checked proof that an implementation meets its specification is a huge win. For various reasons it's generally much harder to screw up a spec than an implementation.

    Modern languages like C# and Java (though I suppose these two aren't too modern anymore) don't actually provide much static protection and, while far simpler than C++, are still quite complicated (Java's type system, for instance, was originally thought to be sound but turned out not to be). Additionally, while dynamic checks for something like bounds help security, they don't always help squash bugs (i.e. some backtraces are more useful than others). Static bounds checking, on the other hand, helps uncover bugs, since the types act as a trail of evidence leading to the location where the bound became unsafe or where we no longer had enough information about the value. The problem with backtraces is that I only get information about an error in some extremely specific program state, but I might have no idea where that state came from in the first place.

    Unrelatedly, while I like C++ for some tasks, I hope that no one argues for it by citing its supposed safety.
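    A minimal Rust sketch of that "trail of evidence" idea (the names `Idx` and `get` are hypothetical, and a real design would also tie the bound to a specific slice): the only place an out-of-range index can enter the program is the single checked constructor, so any failure points back there rather than to an arbitrary crash site deep in a backtrace.

```rust
// A validated index: if a value of type `Idx` exists, it passed the check in
// `Idx::new`. The type carries that evidence through the rest of the program.
#[derive(Copy, Clone)]
struct Idx {
    i: usize,
}

impl Idx {
    // The single checked entry point: returns Some only when `i < len`.
    fn new(i: usize, len: usize) -> Option<Idx> {
        if i < len { Some(Idx { i }) } else { None }
    }
}

// Callers can only hand us an `Idx`, so a bad bound must have originated
// at `Idx::new`, never here.
fn get<T: Copy>(v: &[T], idx: Idx) -> T {
    v[idx.i]
}

fn main() {
    let v = [10, 20, 30];
    let idx = Idx::new(2, v.len()).expect("index out of bounds");
    assert_eq!(get(&v, idx), 30);
    // A bogus index is rejected at construction time, not at the use site.
    assert!(Idx::new(5, v.len()).is_none());
    println!("ok");
}
```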

  • JeanPaul145
    replied
    Originally posted by varikonniemi:
    You can only blame yourself. The amount of pain people have had to endure because coders insist on using languages that are over their heads (C) is infinite. There exist tools and practices to make formally correct code with no bugs. But since you insist on using a relic like C, we have to wait a while longer for truly functioning programs. Rust is the first step away from the nightmare, but it is only half a step.
    If you truly believe that, then you're a lot less knowledgeable than you think you are. Even though a compiler can prevent things like resource/memory leaks through static analysis, this does NOT mean that a compiler can be written that generates bug-free code. In fact, while proving that a given program is bug-free is possible (though tedious to the point of not being worth it), proving that an arbitrary program is bug-free is not possible.

    Also, a language like C is a *LOT* less complex than something like C++, and it's a lot easier to write more portable code in C, to boot. Anyone with actual experience writing cross-platform code in C++ will tell you the same. Put bluntly, C++ introduces so much pain (long compilation times; the ABI breaks with every new update of the compiler you're using; ABIs between compilers are not even close to being compatible; language features that are complex to the point of coaxing programmers into doing the Wrong Thing; that same language complexity generally also means that each team member speaks a slightly different C++ dialect; etc.) and takes away so little of the pain endured when working with C (mostly the Makefile build system, the lack of a proper package manager for C libs, and raw pointer use everywhere) that using C++ is not sanely justifiable for serious development work. Even MS (one of the biggest C++ users) agreed with this, otherwise they wouldn't have bothered developing C# in the first place.

  • codensity
    replied
    Originally posted by Ansla:
    There are no tools that can guarantee that a piece of code will do what you want just because it compiles. Code that compiles does something, even when written in ANSI C. The best that tools and practices can do is guarantee that your code does something other than crash.
    There are actually many tools that can guarantee a piece of code functions as specified (of course, if you mis-specify something, there's no recovering from simply saying the wrong thing). Probably the two most popular are Isabelle and Coq. Of course you'll have to make some assumptions about e.g. the correctness of the underlying operating system, your C compiler, libc, etc., though there's ongoing work to mitigate some of these risks, such as needing to trust GCC.
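    To make the idea concrete, here is a toy sketch in Lean 4 (a proof assistant in the same family as Isabelle and Coq) of what a machine-checked proof against a specification looks like; the function `double` and its spec are hypothetical illustrations, not anything from this thread:

```lean
-- A tiny "implementation": double a natural number by adding it to itself.
def double (n : Nat) : Nat := n + n

-- The specification: `double n` must equal `2 * n`, for every `n`.
-- If this theorem compiles, the proof has been checked by the kernel.
theorem double_spec (n : Nat) : double n = 2 * n := by
  unfold double
  omega   -- decision procedure for linear arithmetic closes the goal
```

    If the implementation were wrong (say, `n + 1`), no proof of `double_spec` could be constructed, so "it compiles" really does mean "it meets the spec" — modulo trusting the checker and the spec itself, exactly the caveat above.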

  • varikonniemi
    replied
    Originally posted by Ansla:
    There are no tools that can guarantee that a piece of code will do what you want just because it compiles. Code that compiles does something, even when written in ANSI C. The best that tools and practices can do is guarantee that your code does something other than crash.
    really?

  • Ansla
    replied
    Originally posted by varikonniemi:
    There exist tools and practices to make formally correct code with no bugs.
    There are no tools that can guarantee that a piece of code will do what you want just because it compiles. Code that compiles does something, even when written in ANSI C. The best that tools and practices can do is guarantee that your code does something other than crash.
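    A small Rust sketch of that last point (the `lookup` helper is hypothetical): the compiler happily accepts a lookup with a bogus index, because whether the index is valid is only known at run time; the most a checked API can do is turn the would-be crash into a recoverable `None`.

```rust
// The type checker accepts this function for any index whatsoever.
// Compiling proves nothing about whether a given lookup makes sense.
fn lookup(v: &[i32], i: usize) -> Option<i32> {
    // Checked access: an out-of-range index yields None instead of a crash.
    v.get(i).copied()
}

fn main() {
    let v = [1, 2, 3];
    assert_eq!(lookup(&v, 1), Some(2));
    // This compiled fine, but the index was bogus; all we are guaranteed
    // is "something other than a crash".
    assert_eq!(lookup(&v, 7), None);
    println!("ok");
}
```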

  • chithanh
    replied
    Originally posted by RahulSundaram:
    The same compiler that doesn't even support C99 completely? GCC and Clang are way ahead.
    To be fair, they are listening to complaints about the lack of C99 support, and have promised to implement in Visual Studio the C99 subset that FFmpeg needs to build.


  • codensity
    replied
    Originally posted by varikonniemi:
    You can only blame yourself. The amount of pain people have had to endure because coders insist on using languages that are over their heads (C) is infinite. There exist tools and practices to make formally correct code with no bugs. But since you insist on using a relic like C, we have to wait a while longer for truly functioning programs. Rust is the first step away from the nightmare, but it is only half a step.
    While I take your point, I think that's perhaps an oversimplification of today's situation. Verifying large pieces of software is a difficult and time-consuming process. We only really have a couple of examples of industrial formalization, CompCert and seL4 being the most widely known. These were huge projects by world experts, and I've always been slightly skeptical of seL4 (if/how they formalized C99) but am too lazy to dig through the Isabelle sources, and the publications seem light on detail. We also have some dependently typed languages gaining traction, such as Idris and Agda (where producing executables is difficult), but dependent types are even more experimental. Additionally, some applications do need low-level control. Here ATS seems interesting, but I haven't used it enough to comment.

    I do agree that C is problematic because it's nearly impossible to reason about correctness properties; however, it's easy to reason about performance (compared to, say, Haskell), which is a valuable trade-off for many people. Personally, I would prefer a buggy but fast web browser to a slow but correct one (that's a mess of Ltac no one can understand). I also agree that Rust is a step in the right direction here, but I'm not sure a "full step", such as to Coq or ATS, is the right thing to do at this point in time.

    I think moving away from C is taking forever because there has really not been a decent alternative for many applications. Everything was either too high-level or required too much expertise. Perhaps the lack of an alternative is itself due to widespread stubbornness; I don't know. I am, for instance, surprised we still see languages like Go popping up.

  • varikonniemi
    replied
    Originally posted by droidhacker:
    I think we all know that no matter how well debugged a very complex piece of software is, it remains close to impossible to entirely eliminate ALL bugs.
    You can only blame yourself. The amount of pain people have had to endure because coders insist on using languages that are over their heads (C) is infinite. There exist tools and practices to make formally correct code with no bugs. But since you insist on using a relic like C, we have to wait a while longer for truly functioning programs. Rust is the first step away from the nightmare, but it is only half a step.

  • smitty3268
    replied
    Originally posted by eydee:
    In the meantime, the current version of the MS compiler gladly compiles stuff written for Visual C++ 1.0. Or even Borland C++ about 20 years ago...
    No, it doesn't. Have you ever tried compiling a legacy Windows codebase from 20 years ago? I doubt it.

    I'm sure some stuff will still compile, but definitely not everything. Which is the same situation here.

  • RahulSundaram
    replied
    Originally posted by eydee:
    In the meantime, the current version of the MS compiler gladly compiles stuff written for Visual C++ 1.0. Or even Borland C++ about 20 years ago...
    The same compiler that doesn't even support C99 completely? GCC and Clang are way ahead.
