C++ Doesn't Change The Speed Of GCC


  • #21
    Originally posted by schmalzler View Post
    Yes, it's fine, but C++ takes away a lot of work that may introduce errors if not done properly.
    This is probably the reason why programs written in languages like Java, C#, C++, Python, ... are so much more stable than programs written in C.


    Originally posted by schmalzler View Post
    Then take a better example: (everyday problem when using a C lib within C++):
    Code:
    void fun(char* c);               /* takes a pointer to mutable chars */
    
    void use_fun() {
      char const* str = "Hello const";
      fun(str);                      /* C: at best a warning; C++: hard error */
    }
    Is it good that this compiles fine in C (at best you get a warning)? What if fun() modifies the string?
    I think a warning is enough (you can also use -Werror). Programmers shouldn't ignore warnings.
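    As a quick sketch of that in practice (assuming the snippet above is saved as demo.c / demo.cpp; the flags are real GCC options, the file names are mine):
    Code:
    gcc -Wall -c demo.c           # C: compiles, warns about discarding 'const'
    gcc -Wall -Werror -c demo.c   # -Werror turns that warning into a build failure
    g++ -c demo.cpp               # C++: rejects the call outright, no flags needed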

    Comment


    • #22
      Doesn't C++ add considerable run-time overhead?

      Comment


      • #23
        Originally posted by Sergio View Post
        Doesn't C++ add considerable run-time overhead?
        Depends on features used.

        Comment


        • #24
          We're in an era when we get much more computing power in a very few watts (the days of 120+ W for 4 cores are long gone), and I think the biggest challenge for software is still to make programs multi-core aware. If GCC were able to use multiple cores internally (which is much harder, since multi-core programming is hard and compilers tend to be fairly serial in nature), I'd expect a bigger overall improvement in compilation time. Many times I've run into big generated wrapper code (for example from CORBA or a big ORM layer) where, when you change it, you still wait and wait, because everything is in one big file. I fully agree that "make -jX" is good enough, but I'm somewhat surprised by the myth that "C works faster because it's raw metal". On a similar note, since OpenCL is not well supported by the open-source stacks, it's unlikely it can be used inside compilers, but it would be great to be able to annotate functions and have GCC rewrite them in an OpenCL-friendly way.

          LLVM also proved, in my opinion, that C++ can be faster by design: simply by using a smart enough register allocator (their greedy allocator), which gives about 95% of the quality of the full graph-coloring allocator GCC uses, they free up CPU cycles to spend on optimizations. Sadly, I don't think GCC will offer an internal multi-core scanner+parser even two years from now, but I'm really glad that LTO does use multiple cores, and I would really love to see more programs give LTO a chance, if only for disk-space reasons: it seems many applications could see wins there. Also, I have not seen many profiled applications (I mean with PGO) upstream (of course because it is still hard to do, with the notable exception of Firefox).
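          For reference, a minimal sketch of what enabling both looks like with GCC (real flags; the file names are hypothetical):
          Code:
          gcc -O2 -flto -c a.c b.c               # compile with LTO information embedded
          gcc -O2 -flto a.o b.o -o app           # optimize across files at link time
          gcc -O2 -fprofile-generate p.c -o app  # PGO step 1: build an instrumented binary
          ./app                                  # PGO step 2: run a representative workload
          gcc -O2 -fprofile-use p.c -o app       # PGO step 3: rebuild using the profile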

          But I hope that GCC will continue at its current pace of development. Great job, GNU!

          Comment


          • #25
            Originally posted by Sergio View Post
            Doesn't C++ add considerable run-time overhead?
            Lol at "considerable". Hey, C++ isn't an interpreted language, you know.

            It adds no overhead if you stick to the C subset (that's basically what the article says), and probably less overhead when you use the advanced features than when you implement them by hand in C (which is something C programmers actually do).

            Comment


            • #26
              Originally posted by Sergio View Post
              Doesn't C++ add considerable run-time overhead?
              No, C++ doesn't add any run-time overhead. Though using certain C++ features might, simply using C++ doesn't add anything.

              Comment


              • #27
                Originally posted by Sergio View Post
                Doesn't C++ add considerable run-time overhead?
                By itself, the only thing it adds that might directly impact runtime speed is exception support, and even that depends on the compiler and ABI. Zero-overhead exception support is a thing, and GCC has supported it (as does the Itanium C++ ABI that most Linux C++ compilers use, IIRC). With that, there is a cost in binary size (the exception-handling tables and the extra unwinding code) and a cost when exceptions are actually thrown, but no runtime overhead in the common case where no exception is thrown. All of that overhead goes away when exceptions are simply turned off (-fno-exceptions in GCC, IIRC). IMO, you should disable exceptions no matter what project you're on; the C++ committee itself goofs and ships implicitly exception-unsafe library components, and exceptions don't belong in an unmanaged language. But then I may be biased as a game dev; some of our target platforms don't support exceptions anyway, so we have to write everything as if we have no choice and exceptions must be disabled.
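                For illustration, a minimal sketch of the return-code style you fall back on under -fno-exceptions (LoadResult and load_level are hypothetical names, not from any real codebase):
                Code:
                #include <cstdio>
                
                // Hypothetical example: with exceptions disabled, failure is
                // reported through return values instead of throw/catch.
                enum class LoadResult { Ok, FileMissing };
                
                LoadResult load_level(const char* path) {
                    std::FILE* f = std::fopen(path, "rb");
                    if (!f) return LoadResult::FileMissing;  // no throw, no unwinding machinery
                    // ... parse the file ...
                    std::fclose(f);
                    return LoadResult::Ok;
                }
                
                int main() {
                    if (load_level("level1.dat") != LoadResult::Ok)
                        std::puts("load failed");            // handled locally by the caller
                }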

                RTTI can also in theory add overhead by increasing the binary size, though it should otherwise have zero runtime impact. Again, RTTI can be turned off (-fno-rtti, I think). Not a bad idea: if you need the capability, a custom RTTI solution can be written without much work, one that only adds data to the classes that need it and that carries far more useful data than standard RTTI supports.
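                As a hedged sketch of what such a hand-rolled scheme might look like (TypeId, Object, and Enemy are illustrative names, not from any real engine):
                Code:
                #include <cstdio>
                
                // Minimal hand-rolled RTTI: only classes that opt in pay for it,
                // and TypeId can carry whatever extra data you want (name, size, ...).
                struct TypeId {
                    const char* name;
                };
                
                struct Object {
                    virtual const TypeId* type() const = 0;
                    virtual ~Object() = default;
                };
                
                struct Enemy : Object {
                    static const TypeId kType;
                    const TypeId* type() const override { return &kType; }
                };
                const TypeId Enemy::kType = { "Enemy" };
                
                int main() {
                    Enemy e;
                    Object* o = &e;
                    if (o->type() == &Enemy::kType)  // cheap pointer compare, no dynamic_cast
                        std::printf("it's an %s\n", o->type()->name);
                }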

                I believe GCC disables both of those features in its own build.

                Poorly constructed benchmarks also show some overhead, because the C++ IOStreams library adds overhead both at startup and (if used) at runtime compared to printf (in many implementations it's just a wrapper over printf). IOStreams is a nasty library in most respects. I'm unsure whether GCC does anything to avoid linking with it. It's in theory possible to pull in only a subset of the C++ runtime library (most of the standard library is templates anyway, but there are bits that must live in the runtime library), but I don't know whether GCC jumps through any of those hoops.

                Otherwise, C++ is a superset of C. i++ compiles to exactly the same code in C and C++. So does calling a function (absent old-style exception handling).

                If anything, C++ can produce superior code. Consider a templated quicksort (like std::sort, inlined all the way down, including the comparison operator) vs. C's qsort (which must invoke a function pointer for every comparison; no inlining possible). The same goes for containers: C versions often do everything via void* and a bunch of runtime-specified functions, while C++ can generate exact code for each particular element type.
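                To make that concrete, a small sketch of the two styles side by side (both valid C++; the qsort path pays for a function-pointer call per comparison, while std::sort can inline it):
                Code:
                #include <algorithm>
                #include <cstddef>
                #include <cstdlib>
                
                // C-style: qsort calls the comparator through a pointer for every
                // pair, so the compiler cannot inline the comparison.
                int cmp_int(const void* a, const void* b) {
                    int x = *static_cast<const int*>(a);
                    int y = *static_cast<const int*>(b);
                    return (x > y) - (x < y);
                }
                void sort_c(int* v, std::size_t n)   { std::qsort(v, n, sizeof(int), cmp_int); }
                
                // C++-style: the element type and comparison are known at compile
                // time, so std::sort is specialized and fully inlined for int.
                void sort_cpp(int* v, std::size_t n) { std::sort(v, v + n); }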

                Things like C++ virtual functions have overhead compared to non-virtual functions, but then you aren't forced to use them where you don't need them. When you do need them, in C you'd be using a function pointer (maybe stored in a static struct of pointers, just like a C++ vtable). In cases where it's more efficient to eschew the vtable and store the pointer right in the struct, you can do the exact same thing in C++.
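                A hedged sketch of that equivalence (the names are illustrative):
                Code:
                // C++: the compiler builds the vtable; each object carries one
                // hidden pointer to it.
                struct Widget {
                    virtual void draw() = 0;
                    virtual ~Widget() = default;
                };
                
                // C-style equivalent: a static struct of function pointers,
                // shared by all objects of the same "class".
                struct WidgetOps {
                    void (*draw)(void* self);
                };
                struct CWidget {
                    const WidgetOps* ops;  // same indirection as the hidden vtable pointer
                };
                
                // And when it's cheaper to skip the table entirely, both languages
                // let you store the function pointer right in the object:
                struct FlatWidget {
                    void (*draw)(FlatWidget* self);
                };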

                Some STL classes can have much worse overhead than a hand-written C equivalent. Not because they're poorly written, but because they have design constraints you might not care about. For instance, I always use a custom hash table instead of std::unordered_map, because the latter is specified such that inserting or removing an element must not invalidate any reference to a different element. It is therefore required to use a chained linked list (or tree) per bucket. This is done because it is assumed you might be storing large or expensive-to-copy elements in the hash table. In practice, every element I store is small and cheap to copy (or move), so I can use an open-addressing implementation that has orders-of-magnitude better performance.
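                A minimal sketch of the open-addressing idea (linear probing over a flat array; deliberately simplified, with no growth or erase, and FlatMap is my own name, not a standard type):
                Code:
                #include <cstddef>
                #include <functional>
                #include <optional>
                #include <vector>
                
                // Simplified linear-probing map for small, cheap-to-copy values.
                // No per-element allocation and cache-friendly probing; the
                // trade-off is that rehashing would invalidate references to
                // elements, which std::unordered_map is required to avoid.
                struct FlatMap {
                    struct Slot { int key = 0; int value = 0; bool used = false; };
                    std::vector<Slot> slots = std::vector<Slot>(64);  // power-of-two capacity
                
                    void insert(int key, int value) {
                        std::size_t i = std::hash<int>{}(key) & (slots.size() - 1);
                        while (slots[i].used && slots[i].key != key)
                            i = (i + 1) & (slots.size() - 1);  // probe the next slot
                        slots[i] = {key, value, true};
                    }
                
                    std::optional<int> find(int key) const {
                        std::size_t i = std::hash<int>{}(key) & (slots.size() - 1);
                        while (slots[i].used) {
                            if (slots[i].key == key) return slots[i].value;
                            i = (i + 1) & (slots.size() - 1);
                        }
                        return std::nullopt;  // hit an empty slot: key is absent
                    }
                };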

                C++ of course fully supports doing this. My custom containers are faster, fully support C++ algorithms, add support for ranges and a few other useful things I need, make use of a custom game-friendly allocator API, and so on.

                In short, there is no _good_ C++ programmer that is not also a good C programmer, because C++ is (almost) a pure superset of C, and anything you can do in C can be done in C++ with equivalent performance (barring bad exception implementations).

                It's somewhat like some Java benchmarks I've seen. Looking at the compiled x86 assembly from a C compiler versus an AOT compilation of the equivalent Java code, the Java code was much larger because of all the initialization and shutdown code; the actual math benchmark in the middle was literally identical to what the C compiler output. If the benchmark not only tests the runtime speed but also starts up a whole process (and hence is also testing the runtime startup, linker speed, disk cache I/O speed, and so on), then languages other than C can look pretty slow. In reality, there's nothing special about C for most code. And even where C is faster, your runtime is generally dominated by algorithm efficiency and use of concurrency, so any difference is generally negligible (which is why C#, Python, and JavaScript are so popular; outside of certain special niches, developers don't give a crap about the runtime speed differences).

                Comment
