Fedora 29 Dropping GCC From Their Default Build Root Has Been Causing A Heated Debate


  • #11
    Originally posted by Candy View Post
    If only things were that easy. Years ago, people avoided using anything other than gcc 2.95.4 (and that was true on other operating systems as well) because they feared that the generated code might be faulty.

    So yes! There is a reason for sticking with one compiler: the one that has been proven to be the most stable, and the one that generates the most stable binaries.
    But surely, even with that taken into account, this change should be a welcome one, because now any package that DOES want to keep building with gcc CAN explicitly specify `BuildRequires: gcc` and be sure of which compiler it's getting?

    I just don't understand how making the compiler dependency explicit is a "retarded" decision, to quote another eloquent commenter up-thread. Surely if there's any chance of the default compiler changing, NOT having the explicit requirement would be far more problematic, right?



    • #12
      Originally posted by Candy View Post

      If only things were that easy. Years ago, people avoided using anything other than gcc 2.95.4 (and that was true on other operating systems as well) because they feared that the generated code might be faulty.
      I actually remember those years.
      So yes! There is a reason for sticking with one compiler: the one that has been proven to be the most stable, and the one that generates the most stable binaries.
      Actually, that is wrong: stick with one compiler and you end up with the possibility of bugs that take a long time to show up, let alone debug. Use two high-quality compilers and you are far more likely to find problem areas or compiler failures.
      Don't forget that we are not just talking about i386, i686 or x86_64 here. There are other architectures, like ARM or PowerPC, where correct code generation needs to be assured.
      I could argue that Clang is a stronger ARM compiler than GCC, but that really isn't the point here. Rather, we are in a world now where you have two to three good-quality compilers to choose from. I say good because none of them are perfect, and frankly that is the important point.
      So flipping between Clang, then GCC, then Clang again, etc. may end in different, and in the worst case broken, results. One wrong opcode, one wrong thing, can cause a lot of issues.
      You are far more likely to find problems in a code base by running it through two different compilers. This, frankly, is one reason people have suggested running code targeted for Linux and GCC through Clang and its tools. Mind you, these are suggestions from people far more capable than me. The reality is that GCC and Clang are not perfect, not even close really, but the two different development tracks mean differing abilities to find problems, or even different ways of compiling the same code.
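      As a hedged illustration of that point (not taken from the post itself), here is a tiny C++ program whose output is allowed to differ between conforming compilers, because the evaluation order of function arguments is unspecified by the standard. Building and running it with both GCC and Clang is the kind of cross-check being suggested; the function name is invented for the example.
      Code:
      #include <cstdio>

      // Each call bumps and returns a counter, so the visible output depends
      // on which argument the compiler chooses to evaluate first.
      static int counter() {
          static int n = 0;
          return ++n;
      }

      int main() {
          // The order in which the two counter() calls are evaluated is
          // unspecified, so "1 2" and "2 1" are both legal outputs.
          std::printf("%d %d\n", counter(), counter());
          return 0;
      }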



      • #13
        Originally posted by Candy View Post
        If only things were that easy. Years ago, people avoided using anything other than gcc 2.95.4 (and that was true on other operating systems as well) because they feared that the generated code might be faulty.

        So yes! There is a reason for sticking with one compiler: the one that has been proven to be the most stable, and the one that generates the most stable binaries.
        But even with that being the case, surely it's BETTER, then, that the compiler selection be made explicit, so that projects that care which compiler they're built with have the option to control that in their spec file. It's as simple as:
        Code:
        BuildRequires: gcc
        Any project that explicitly requires gcc can specify gcc, and it will know exactly what it's building with... right?

        That's why I'm just completely failing to grasp what makes this change such a "retarded" decision, as one eloquent commenter put it up-thread. Surely, if there is any chance of the default compiler being changed, it would be far worse to make that change implicitly and fail to give package maintainers the option to lock in whatever compiler they choose. What am I missing?



        • #14
          I don't really have an issue with the explicit requirement. It's unfortunate they didn't at least make it optional previously though.
          Originally posted by wizard69 View Post
          I could argue that Clang is a stronger ARM compiler than GCC, but that really isn't the point here. Rather, we are in a world now where you have two to three good-quality compilers to choose from. I say good because none of them are perfect, and frankly that is the important point.
          Hmm, my ARM work is all Cortex-M, and last I checked Clang (a) generated objectively worse code in plenty of places, and (b) support was very immature; maybe it's improved drastically in the last few months?
          Originally posted by wizard69
          In theory good C/C++ code shouldn't be compiler dependent, but we all know how that works.
          If you're writing a run-of-the-mill Linux desktop application, then sure; but there are plenty of domains where that isn't really true.



          • #15
            Originally posted by brrrrttttt View Post
            Hmm, my ARM work is all Cortex-M, and last I checked Clang (a) generated objectively worse code in plenty of places, and (b) support was very immature; maybe it's improved drastically in the last few months?
            I suspect that he is not objective, because Clang is an Apple project and he is an Apple fan.



            • #16
              Originally posted by AsuMagic View Post
              Why not something like "BuildRequires: cxx"?
              Because the status quo is that everything depends on GCC, not on some arbitrary C/C++ compiler. As such, adding a build dependency on GCC ensures that nothing actually changes as a result... everything that previously got compiled with GCC still does, unless the maintainer deliberately chooses to do otherwise.



              • #17
                Originally posted by stibium View Post

                Compiler-dependent code means either the compiler doesn't conform to standards (and honestly, when has GNU ever adhered to standards other than their own?) or the code relies on bugs in the compiler. I applaud this move on Redhat's part. The fewer buggy GNU packages in my distro the better.
                GCC adheres to the standards just fine; it's just that there exist GNU extensions for things that have not yet been formalized into a standard but that are useful. There are also things for which no standard exists (e.g. using typedef to create an opaque pointer and then writing "const pointer const name"), where GCC accepts the code but Clang throws an error (even though a warning might be more appropriate there).
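                A hedged sketch of the kind of construct being described (the type and function names are invented for illustration): an opaque pointer hidden behind a typedef, with the disputed double-const spelling shown only in a comment, since how a given compiler reacts to it depends on the compiler, the language mode, and the flags used.
                Code:
                // Forward-declare the struct so callers never see its layout.
                typedef struct widget widget;
                typedef widget *widget_handle;   // opaque handle type

                // The disputed spelling is something like:
                //     void render(const widget_handle const h);
                // where the qualifier is effectively duplicated once the typedef
                // is expanded; some compilers accept it, others reject it.
                void render(const widget_handle h);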

                That said, it is always good to run your code through more than one compiler. People should also take care to link with various C runtimes and versions of shared libraries (e.g. OpenSSL vs LibreSSL), as in https://www.linkedin.com/pulse/trial...-henrik-holst/



                • #18
                  Originally posted by FeRD_NYC View Post
                  I just don't understand how making the compiler dependency explicit is a "retarded" decision, to quote another eloquent commenter up-thread. Surely if there's any chance of the default compiler changing, NOT having the explicit requirement would be far more problematic, right?
                  No, and I will keep it brief (I don't want to argue about this again; I already did in a prior thread). I'll quote another relevant part:
                  Originally posted by stibium View Post
                  Compiler-dependent code means either the compiler doesn't conform to standards (and honestly, when has GNU ever adhered to standards other than their own?) or the code relies on bugs in the compiler. I applaud this move on Redhat's part. The fewer buggy GNU packages in my distro the better.
                  Indeed.

                  But the Linux userland violates the language specification, especially for C++. It is so dependent on the exact standard-library implementation of GCC, and on other GCC features, that mixing compilers can lead to many problems.

                  I've already argued with Linux fanboys in another thread, where they defended this practice of violating the ODR (One Definition Rule) when I complained about it. (It works in C because everything uses glibc; technically it's still illegal even there, it's just not a practical issue, but it still breaks "standards".) Both Windows and Mac OS avoid this, so it's not like everyone does it; only the Linux userland does. It happens because they're too lazy to make symmetrically designed APIs (i.e. an API that allocates an object must also provide a means of freeing/destroying that object) and other things like that. They also pass standard-library types around like hotcakes in library APIs, even though those types differ between GCC's and Clang's standard libraries, so the ODR is violated and crashes are likely.
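                  A hedged illustration of the standard-library-types-in-APIs point (the function names are invented, not taken from any real library): once std::string appears in an exported interface, caller and library must agree on that type's internal layout, which ties the interface to one particular C++ standard library.
                  Code:
                  #include <string>
                  #include <cstddef>

                  // Risky: a std::string crosses the shared-library boundary, so the
                  // caller's standard library must match the one the library was built
                  // against (libstdc++ vs libc++, or even different versions).
                  std::string make_greeting(const char *name);

                  // Safer: only plain C types cross the boundary; each side keeps its
                  // own standard library as a private implementation detail.
                  void make_greeting_c(const char *name, char *out, std::size_t out_size);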

                  Note that it's not really the kernel's fault whatsoever, so it's not Linux per se, just the userland libraries. Linux (the kernel) is still solid as ever.



                  • #19
                    Originally posted by Weasel View Post
                    No, and I will keep it brief (I don't want to argue about this again; I already did in a prior thread). I'll quote another relevant part: [...] Indeed.

                    But the Linux userland violates the language specification, especially for C++. It is so dependent on the exact standard-library implementation of GCC, and on other GCC features, that mixing compilers can lead to many problems.

                    I've already argued with Linux fanboys in another thread, where they defended this practice of violating the ODR (One Definition Rule) when I complained about it. (It works in C because everything uses glibc; technically it's still illegal even there, it's just not a practical issue, but it still breaks "standards".) Both Windows and Mac OS avoid this, so it's not like everyone does it; only the Linux userland does. It happens because they're too lazy to make symmetrically designed APIs (i.e. an API that allocates an object must also provide a means of freeing/destroying that object) and other things like that. They also pass standard-library types around like hotcakes in library APIs, even though those types differ between GCC's and Clang's standard libraries, so the ODR is violated and crashes are likely.

                    Note that it's not really the kernel's fault whatsoever, so it's not Linux per se, just the userland libraries. Linux (the kernel) is still solid as ever.
                    Where are those libraries of yours that create objects without providing a means of destroying them? The discussion you are referring to was about libraries allocating standard C types like strings, for which you need no function other than the standard C free(), unless you are on Windows of course.

                    And what differences between GCC and Clang are you seeing with the standard libraries (especially since both use the same standard library)? Unless you mean C++ objects here, but then I don't see any libs sharing C++ objects, so...



                    • #20
                      Originally posted by F.Ultra View Post
                      Where are those libraries of yours that create objects without providing a means of destroying them? The discussion you are referring to was about libraries allocating standard C types like strings, for which you need no function other than the standard C free(), unless you are on Windows of course.

                      And what differences between GCC and Clang are you seeing with the standard libraries (especially since both use the same standard library)? Unless you mean C++ objects here, but then I don't see any libs sharing C++ objects, so...
                      I don't know which libraries; it wasn't me who said that libraries should do that. I've only ever used sane libraries with symmetrical API designs, so I can't say (again, it wasn't me who said they do that, but the people defending them).

                      Strings are not a "C type", they're just a region of bytes as implied by const char*. That is, they're not a specific special type like C++ std::string or whatever. It's not the type itself that's the problem in this case (basic types are perfectly fine to use), it's the allocation. You simply don't allocate stuff in a library and expect something else to free it without going through your library.

                      malloc is a function, a function with a specific data structure (the structs used on the heap) and other implementation details like that. Why on earth would people think it's a good idea to allocate with malloc inside a library (using a specific function and its specific data structures) and then let *someone else* free it, when there's NO GUARANTEE from the language that it's the same implementation? In fact, the language doesn't define an implementation of malloc at all.

                      In most cases, you don't even control which standard library you link against with a given compiler, so you cannot ENFORCE that it's the exact same function. Thus, it is a violation of the language rules (ODR).
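                      A hedged sketch of the symmetric design being argued for (the API names are invented for illustration): the library that allocates a buffer is also the one that frees it, so the two calls are guaranteed to hit the same heap implementation regardless of which C runtime the caller happens to link against.
                      Code:
                      extern "C" {
                          // Asymmetric (the pattern being criticised): the caller is expected
                          // to free() this with whatever allocator it happens to be linked to.
                          char *lib_get_message(void);

                          // Symmetric: allocation and release both happen inside the library,
                          // so they always use the library's own malloc/free pair.
                          char *lib_get_message_sym(void);
                          void  lib_free_message(char *msg);
                      }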

                      "Oh but on Linux everyone compiled with GCC and it's a de facto standard and blabla" -- well that's... kind of the point... I was making, and why this decision is dumb until Linux userland becomes sane.


                      tl;dr Clang and GCC just happen to use the same C library, but not the same C++ library. "Just happen" doesn't mean the language is being respected; it's no different from relying on "specific compilers".

