Zapcc Caching C++ Compiler Open-Sourced


  • ypnos
    replied
    Originally posted by caligula View Post
    Um, shouldn't a person in your position care about compile times
    Yes. I don't know what this guy is doing, but when you develop C++ software you compile a lot while working on the code, and this is where zapcc excels: incremental changes lead to incremental builds. This matters especially as template programming becomes more and more prevalent in C++ codebases, and templates are always in headers and always have to be compiled. A precompiled header does jack shit for you when you are working on your templated code.

    Compilation times during development _are_ of utmost importance, because more often than not you are actively waiting for the compilation to finish. Even if it's only 10-30 seconds, or precisely because it is only 10-30 seconds.

    This is not the compiler you use to ship your binary (for this I would also prefer GCC over Clang). But guess how often the compiler has done its work before a product is shipped.
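    The point about templates living in headers can be shown with a toy sketch (hypothetical names, not from any real project): every .cpp file that includes such a header re-parses and re-instantiates the template on each rebuild, which is exactly the repeated work a caching compiler avoids, and which a precompiled header stops helping with the moment you edit the template itself.

    ```cpp
    #include <algorithm>
    #include <cassert>

    // In a real project this template would live in a header included by
    // many .cpp files; each translation unit re-parses and re-instantiates
    // it on every rebuild. Edit the template and the PCH is invalidated too.
    template <typename T>
    T clamp_to(T value, T lo, T hi) {
        return std::min(std::max(value, lo), hi);
    }

    int main() {
        assert(clamp_to(5, 0, 3) == 3);   // clamped to upper bound
        assert(clamp_to(-1, 0, 3) == 0);  // clamped to lower bound
        return 0;
    }
    ```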



  • caligula
    replied
    Originally posted by dimko View Post
    As someone who compiles on daily basis - I don't care about compiling times. I care about stability of executable and speed of given executable. So.... Pf...
    Um, shouldn't a person in your position care about compile times, rather than someone who compiles only one application a year? The changes here mostly affect the frontend, not code generation.



  • cb88
    replied
    Originally posted by coder View Post
    While I do care about compile times, they're not as important to me as the integrity of the output. As such, I'd only use this if the output were binary-identical to standard Clang or GCC.

    It'd be great to see someone build this directly into llvm. Then, you could even see benefits with things like dynamic recompilation of GPU shaders.

    I've previously used both the "ice cream compiler" distributed compilation framework and precompiled headers. Not using either, currently.
    Unless you've configured them for reproducible builds, or perhaps if the program is very simple, GCC and Clang probably don't even produce identical binaries to themselves on recompilation. It has certainly been an issue in the past that GCC binaries are not identical across rebuilds, at least not identical enough to md5sum them and get the same result (the executable code may be the same, but the data sections usually contain differences such as build dates that change them at the very least).
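    A minimal illustration of one such data-section difference: `__DATE__` and `__TIME__` expand to the build timestamp, so two otherwise-identical builds embed different bytes and won't checksum the same.

    ```cpp
    #include <cassert>
    #include <cstdio>

    // __DATE__ and __TIME__ expand to the moment of compilation, so two
    // rebuilds of this file embed different string constants in their data
    // sections -- one common reason rebuilt binaries don't md5sum
    // identically unless you configure for reproducible builds.
    static const char build_stamp[] = __DATE__ " " __TIME__;

    int main() {
        std::printf("built: %s\n", build_stamp);
        assert(build_stamp[0] != '\0');  // stamp is baked in at compile time
        return 0;
    }
    ```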
    Last edited by cb88; 18 June 2018, 12:54 AM.



  • computerquip
    replied
    Originally posted by dimko View Post
    As someone who compiles on daily basis - I don't care about compiling times. I care about stability of executable and speed of given executable. So.... Pf...
    You'll care when a compile takes a few hours to finish.



  • coder
    replied
    Originally posted by dimko View Post
    As someone who compiles on daily basis - I don't care about compiling times. I care about stability of executable and speed of given executable. So.... Pf...
    While I do care about compile times, they're not as important to me as the integrity of the output. As such, I'd only use this if the output were binary-identical to standard Clang or GCC.

    It'd be great to see someone build this directly into llvm. Then, you could even see benefits with things like dynamic recompilation of GPU shaders.

    I've previously used both the "ice cream compiler" distributed compilation framework and precompiled headers. Not using either, currently.



  • Turbine
    replied
    A lot of people are shooting in the dark about how it works. For what it's worth: our code base compiles from scratch faster with zapcc (flags, gcc vs. clang, and code bases may vary).

    One of the major bottlenecks it removes comes from its client/server architecture: it keeps a single instance of the compile server open, caching everything and eliminating the minimum per-invocation compile-time overhead. Say your program takes 3 seconds to recompile after any change; zapcc has been able to bring that down to a fraction of a second.
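    The client/server behaviour described above can be sketched in a few lines (a hypothetical toy model, not zapcc's actual implementation): a long-lived server keeps already-parsed headers in memory, so N translation units that include the same header pay the parsing cost once instead of N times.

    ```cpp
    #include <cassert>
    #include <string>
    #include <unordered_map>

    // Toy model of a caching compile server: headers parsed once are kept
    // in memory for the lifetime of the server process, so later
    // compilations of other translation units reuse the parsed result.
    class CompileServer {
    public:
        int parse_count = 0;

        // Returns a (fake) parsed representation of the header,
        // parsing it only on the first request.
        const std::string& parsed_header(const std::string& name) {
            auto it = cache_.find(name);
            if (it == cache_.end()) {
                ++parse_count;  // the expensive parse happens only here
                it = cache_.emplace(name, "ast-of-" + name).first;
            }
            return it->second;
        }

    private:
        std::unordered_map<std::string, std::string> cache_;
    };

    int main() {
        CompileServer server;
        for (int tu = 0; tu < 5; ++tu)
            server.parsed_header("big_templates.hpp");  // 5 TUs, 1 parse
        assert(server.parse_count == 1);
        return 0;
    }
    ```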



  • discordian
    replied
    Originally posted by StefanBruens View Post

    ccache works on the output of the preprocessor. If you change a globally included header, only the files which are affected by the change are recompiled. So ccache reduces compilation time even when e.g. make/ninja/... trigger a recompilation of the whole project.
    If 5 files depend on the same (changed) header, then all 5 compilation steps will read the file from disk and start parsing it from scratch. Zapcc is more like a server serving compilation clients: the header can be loaded and parsed *once* instead of 5 times.

    I wonder whether their changes could be integrated into clangd easily.



  • StefanBruens
    replied
    Originally posted by AsuMagic View Post
    AFAIK ccache only helps avoid recompiling files that don't need to be recompiled, which most build systems will avoid anyway. Zapcc does much more than that, even more than if you were using precompiled headers.
    Get real: when you work on a C++ project, especially with a legacy codebase, you won't only be working with .cpp files that require only one translation unit to be recompiled. On those projects, change a single header used by half the project and you're in for one minute of compilation on a good quad-core system, assuming it's a small project.
    Plus, zapcc makes fresh compiles of projects much faster. When you're using e.g. llvm-svn/mesa-git, compile times matter.

    Last time I tried, though, I found that LTO sadly wasn't working. That was for a 32-bit program; maybe it has been fixed since.
    ccache works on the output of the preprocessor. If you change a globally included header, only the files which are affected by the change are recompiled. So ccache reduces compilation time even when e.g. make/ninja/... trigger a recompilation of the whole project.
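    That description of ccache can be modeled in a few lines (a toy sketch, not ccache's actual code): the cache key is derived from the preprocessed translation unit, so editing a header only causes misses in the TUs whose preprocessed text actually changed.

    ```cpp
    #include <cassert>
    #include <functional>
    #include <string>
    #include <unordered_map>

    // Toy model of ccache's core idea: key the cached object file on a
    // hash of the *preprocessed* source. An edit to a widely included
    // header only invalidates the translation units whose preprocessed
    // text changed; unaffected TUs keep hitting the cache.
    struct ObjectCache {
        std::unordered_map<std::size_t, std::string> store;
        int misses = 0;

        std::string compile(const std::string& preprocessed_tu) {
            std::size_t key = std::hash<std::string>{}(preprocessed_tu);
            auto it = store.find(key);
            if (it != store.end()) return it->second;  // cache hit: no compile
            ++misses;                                  // cache miss: "compile"
            return store[key] = "obj(" + preprocessed_tu + ")";
        }
    };

    int main() {
        ObjectCache cache;
        cache.compile("int x = 1;");
        cache.compile("int x = 1;");  // identical preprocessed TU: hit
        cache.compile("int x = 2;");  // changed TU: recompiled
        assert(cache.misses == 2);
        return 0;
    }
    ```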



  • Weasel
    replied
    Originally posted by AsuMagic View Post
    Get real: when you work on a C++ project, especially with a legacy codebase, you won't only be working with .cpp files that require only one translation unit to be recompiled. On those projects, change a single header used by half the project and you're in for one minute of compilation on a good quad-core system, assuming it's a small project.
    https://xkcd.com/303/



  • AsuMagic
    replied
    AFAIK ccache only helps avoid recompiling files that don't need to be recompiled, which most build systems will avoid anyway. Zapcc does much more than that, even more than if you were using precompiled headers.
    Get real: when you work on a C++ project, especially with a legacy codebase, you won't only be working with .cpp files that require only one translation unit to be recompiled. On those projects, change a single header used by half the project and you're in for one minute of compilation on a good quad-core system, assuming it's a small project.
    Plus, zapcc makes fresh compiles of projects much faster. When you're using e.g. llvm-svn/mesa-git, compile times matter.

    Last time I tried, though, I found that LTO sadly wasn't working. That was for a 32-bit program; maybe it has been fixed since.

