The Speed Of LLVM's LLD Linker Continues Looking Good

  • #11
    Originally posted by lowflyer View Post

    I agree with you on the general idea. But when I see that a chromium debug build is linked in under 45 seconds, I have to question your calculations with the man-hours. Man-seconds would be more appropriate.
    It started at 20 minutes. If you have 8 GB of memory, it takes 4 hours of swapping on a completely thrashed machine just to _link_ Chrome.



    • #12
      Originally posted by carewolf View Post

      It started at 20 minutes. If you have 8 GB of memory, it takes 4 hours of swapping on a completely thrashed machine just to _link_ Chrome.
      I don't know about you, but to me the solution is obvious: un-thrash that machine and give it a decent amount of RAM. 64 GB is not *that* expensive nowadays.

      IMHO your numbers seem completely "out-of-reality" compared to the numbers stated in the article. I would seriously consider searching for the flaw in your setup rather than waiting for a "faster linker":
      HTML Code:
      ============  ===========  ======  ========  ======
      Program       Output size  GNU ld  GNU gold  LLD
      ffmpeg dbg    92 MiB       1.59s   1.15s     0.78s
      mysqld dbg    158 MiB      7.09s   2.49s     1.31s
      clang dbg     1.55 GiB     86.76s  21.93s    8.38s
      chromium dbg  1.57 GiB     N/A     40.86s    12.69s
      ============  ===========  ======  ========  ======



      • #13
        Originally posted by lowflyer View Post

        I don't know about you, but to me the solution is obvious: un-thrash that machine and give it a decent amount of RAM. 64 GB is not *that* expensive nowadays.

        IMHO your numbers seem completely "out-of-reality" compared to the numbers stated in the article. I would seriously consider searching for the flaw in your setup rather than waiting for a "faster linker":
        HTML Code:
        ============  ===========  ======  ========  ======
        Program       Output size  GNU ld  GNU gold  LLD
        ffmpeg dbg    92 MiB       1.59s   1.15s     0.78s
        mysqld dbg    158 MiB      7.09s   2.49s     1.31s
        clang dbg     1.55 GiB     86.76s  21.93s    8.38s
        chromium dbg  1.57 GiB     N/A     40.86s    12.69s
        ============  ===========  ======  ========  ======
        I found the flaw: it was 8 GB of memory on the automated VM. We upped it to 16 GB and that solved it just fine (with 12 GB it took 45 minutes, btw).



        • #14
          Originally posted by lowflyer View Post

          I agree with you on the general idea. But when I see that a chromium debug build is linked in under 45 seconds, I have to question your calculations with the man-hours. Man-seconds would be more appropriate.
          Three things your knee-jerk response didn't take into account (and no problem, I forgive you for that):

          1) Not everyone has the hardware budget of Google, especially an average OSS contributor. Have you personally tried to link any large projects? Have you managed any large team projects? Have they complained about link time when you've pressured them on a deadline?

          2) Linking is a single-core job, best performed by an overclocked i3 or i5, but most IT managers don't know that and would throw money at Broadwell-E server hardware to speed up their developers' RTT, lol, so even when a budget is available it may not be spent correctly.

          3) You're right, it's man-seconds per link. How many times will you link today? 10? 20? 200? It depends on the task. If you're checking patches, bisecting them for regressions, and making small fixes as you go, 200 links a day is easy, provided each takes only a few seconds.

          And these delays can cascade when, inevitably, one developer, or a team, is waiting on another.
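          The back-of-envelope arithmetic here is easy to sketch. The per-link times below are illustrative, loosely based on the gold vs. LLD chromium debug numbers quoted earlier in the thread, not measurements:

          ```shell
          # Illustrative numbers only: time lost per day waiting on the final link.
          links_per_day=200
          slow=40    # seconds per link, e.g. gold on a chromium debug link
          fast=13    # seconds per link, e.g. lld on the same link
          saved=$(( links_per_day * (slow - fast) ))
          echo "${saved} seconds saved per day (~$(( saved / 60 )) minutes)"
          ```

          That is roughly an hour and a half per developer per day, before counting any cascading waits between developers.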



          • #15
            Originally posted by linuxgeex View Post
            Three things your knee-jerk response didn't take into account (and no problem, I forgive you for that):
            No need to get personal. I'm trying to beat some common sense into the general whining about linker performance.

            Originally posted by linuxgeex View Post
            1) Not everyone has the hardware budget of Google, especially an average OSS contributor. Have you personally tried to link any large projects? Have you managed any large team projects? Have they complained about link time when you've pressured them on a deadline?
            Where do you draw the line between a small and a large project? The article refers to Google Chrome. Is Chrome large enough to make your point, or do you have another (possibly hypothetical) project in mind that is impossible with "average OSS" equipment? Have you yourself linked/managed a large project?

            When developers start complaining about linking time while discussing deadlines, there's a really serious disconnect. But it's not related to the linker.

            Originally posted by linuxgeex View Post
            2) Linking is a single-core job, best performed by an overclocked i3 or i5, but most IT managers don't know that and would throw money at Broadwell-E server hardware to speed up their developers' RTT, lol, so even when a budget is available it may not be spent correctly.
            No, linking is not a single-core job anymore. The article above is about the LLD linker. The GNU gold linker is moving in the same direction.
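            For what it's worth, a rough way to compare linkers on the same inputs is to time only the link step. This sketch assumes a Unix-like system with cc; the gold/lld invocations are left commented out because those linkers may not be installed, and flag spellings vary by version:

            ```shell
            # Compile once, then time only the final link, not the compile.
            cat > hello.c <<'EOF'
            int main(void) { return 0; }
            EOF
            cc -c hello.c -o hello.o
            cc hello.o -o hello     # wrap with `time` to measure just this step
            # time cc -fuse-ld=gold -Wl,--threads hello.o -o hello  # gold, multi-threaded
            # time cc -fuse-ld=lld hello.o -o hello                 # lld, threaded by default
            ./hello && echo "linked OK"
            ```

            On a real project you would substitute the project's own object files, where the difference between linkers actually becomes visible.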

            Sometimes developers need to make their needs known. If a company constantly fails to provide reasonable equipment, it's time to move on. Decent computer hardware is so cheap nowadays that neither this nor the Google hardware budget is an argument anymore. Even for OSS projects.

            Originally posted by linuxgeex View Post
            3) You're right, it's man-seconds per link. How many times will you link today? 10? 20? 200? It depends on the task. If you're checking patches, bisecting them for regressions, and making small fixes as you go, 200 links a day is easy, provided each takes only a few seconds.

            And these delays can cascade when, inevitably, one developer, or a team, is waiting on another.
            200 links a day - yep, sounds reasonable. But if one developer has to wait for another, it's not the linker's fault.

            If you talk about checking patches and bisecting, then there are other factors that are equally or even more important, like startup time or data-loading time. If linking time becomes a problem, there's a lot a developer can do about it. (Do you really *always* *have to* link *the complete project* just to test feature X?) I usually tell my developers to be creative in solving these issues, and they usually are. Isn't creativity one of the most valuable assets of a software developer?
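            The "don't relink the complete project" idea can be sketched like this; every file and function name here is hypothetical:

            ```shell
            # Link a tiny test driver against only the object file you changed,
            # instead of relinking the whole application.
            cat > mylib.c <<'EOF'
            int times_two(int x) { return 2 * x; }
            EOF
            cat > test_mylib.c <<'EOF'
            int times_two(int x);
            int main(void) { return times_two(21) == 42 ? 0 : 1; }
            EOF
            cc -c mylib.c -o mylib.o
            cc test_mylib.c mylib.o -o test_mylib   # a sub-second link, not a full relink
            ./test_mylib && echo "feature OK"
            ```

            Whether this is practical depends on how cleanly the feature under test is separated from the rest of the application.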





            • #16
              Seems lld isn't so good: I'm struggling to build LLVM & Clang using clang with lld, while it works fine with gold.



              • #17
                Originally posted by lowflyer View Post
                Where do you draw the line between a small and a large project? The article refers to Google Chrome. Is Chrome large enough to make your point, or do you have another (possibly hypothetical) project in mind that is impossible with "average OSS" equipment? Have you yourself linked/managed a large project?

                I have worked on large projects, >1M lines of source, and I have waited for final links in the tens of seconds when my testing and editing took less time than the final link. I'm not going to name names, but if you yourself were performing final links and running the resulting executable over and over, you would immediately stop defending your position, because every second counts, deadline or not.

                Originally posted by lowflyer View Post
                When developers start complaining about linking time while discussing deadlines, there's a really serious disconnect. But it's not related to the linker.
                Spoken like someone who has never done real work in their life.

                Originally posted by lowflyer View Post
                No, linking is not a single-core job anymore. The article above is about the LLD linker. The GNU gold linker is moving in the same direction.
                I am not aware of a linker that performs multi-threaded final links. The article above speaks to interleaving linking with compiling. I am not talking about compiling more than 0.1% of a project, so compiling is not part of the argument. I am talking purely about the time for the final link to an executable.

                Originally posted by lowflyer View Post
                Sometimes developers need to make their needs known. If a company constantly fails to provide reasonable equipment, it's time to move on. Decent computer hardware is so cheap nowadays that neither this nor the Google hardware budget is an argument anymore. Even for OSS projects.
                Sometimes the HR manager and Development team lead need to sit down for a moment and think about whether the cost of an equipment change is less than the cost of labor from not making an equipment change. If you want your developers to cry and beg for it you are doing something wrong. But I thought I made it very clear that I was talking about average OSS developers, working on their own personal time, on their own personal equipment, with a small-scale budget.

                Originally posted by lowflyer View Post
                200 links a day - yep, sounds reasonable. But if one developer has to wait for another, it's not the linker's fault.
                Correct. It's not a linker's fault that any large project is going to run into situations where there are dependencies in manpower. Avoiding those is a team-management challenge. But everything that saves man-hours helps mitigate that issue. Surely, unless you are purely pretending to manage a team of developers, you can appreciate that simple fact.

                Originally posted by lowflyer View Post
                If you talk about checking patches and bisecting, then there are other factors that are equally or even more important, like startup time or data-loading time. If linking time becomes a problem, there's a lot a developer can do about it. (Do you really *always* *have to* link *the complete project* just to test feature X?) I usually tell my developers to be creative in solving these issues, and they usually are. Isn't creativity one of the most valuable assets of a software developer?
                Startup time for what? The linker? GNU make? gcc? gas? ranlib? Data loading time? Have you heard of RAM and SSDs? Startup time for the linked executable itself? Okay, there I agree that developers have full control over bypassing any setup steps not required to run their test case... but if their linker is taking longer than the time they'd save optimising for their test case, do you think they're going to bother?

                I honestly think you don't have a clue how heavy a task linking a large executable is.

                Even if you make zero changes whatsoever to your project, try timing your final link. You're in for some very educational results.

                Recently, for fun, I did some work on qbittorrent, which is by no means a large application:

                Code:
                sid@shiney:0.0~/src/qbittorrent-3.3.1$ rm /home/sid/src/qbittorrent-3.3.1/src/qbittorrent
                sid@shiney:0.0~/src/qbittorrent-3.3.1$ time make
                cd src/ && ( test -e Makefile || /usr/lib/x86_64-linux-gnu/qt5/bin/qmake /home/sid/src/qbittorrent-3.3.1/src/src.pro -o Makefile ) && make -f Makefile 
                make[1]: Entering directory '/home/sid/src/qbittorrent-3.3.1/src'
                linking qbittorrent
                make[1]: Leaving directory '/home/sid/src/qbittorrent-3.3.1/src'
                
                real    0m9.072s
                user    0m6.560s
                sys    0m1.288s
                sid@shiney:0.0~/src/qbittorrent-3.3.1$ rm /home/sid/src/qbittorrent-3.3.1/src/qbittorrent
                sid@shiney:0.0~/src/qbittorrent-3.3.1$ time make
                cd src/ && ( test -e Makefile || /usr/lib/x86_64-linux-gnu/qt5/bin/qmake /home/sid/src/qbittorrent-3.3.1/src/src.pro -o Makefile ) && make -f Makefile 
                make[1]: Entering directory '/home/sid/src/qbittorrent-3.3.1/src'
                linking qbittorrent
                make[1]: Leaving directory '/home/sid/src/qbittorrent-3.3.1/src'
                
                real    0m8.868s
                user    0m6.592s
                sys    0m0.992s
                sid@shiney:0.0~/src/qbittorrent-3.3.1$
                That's 9 seconds I'm twiddling my thumbs waiting to test a one-line change to the code, and, I stress, this is not a huge project: it's only 62,646 lines of code!
                Last edited by linuxgeex; 16 March 2017, 01:27 AM.



                • #18
                  Originally posted by linuxgeex View Post
                  I have worked on large projects, >1M lines of source, and I have waited for final links in the tens of seconds when my testing and editing took less time than the final link. <snip />
                  Originally posted by linuxgeex View Post
                  Spoken like someone who has never done real work in their life.
                  Originally posted by linuxgeex View Post
                  I honestly think you don't have a clue how heavy a task linking a large executable is.
                  Originally posted by linuxgeex View Post
                  That's 9 seconds I'm twiddling my thumbs waiting to test a one-line change to the code, and I stress, this is not a huge project, it's only 62,646 lines of code!
                  Ok, ok, ok, ok...

                  I give in.

                  You, linuxgeex, are the one who knows, because you have worked on large projects (which are projects with more than 1M lines of code). These are things I can't possibly know, because my projects are not large projects (they have fewer than 1M lines of code).

                  I will fully take your message on board and try to learn from your expertise. This is what I've got so far:
                  1. Linking tasks are heavy. Always. Even if they take only 9 seconds.
                  2. The linking quality is a non-issue. Linking is always so trivial that it is absolutely not possible that there may be errors. Even the dumbest linker will always get it right.
                  3. Top-notch developers like you (who work on large >1M LOC projects) are able to keep up an edit-compile-test routine in a revolver-like manner for hours. I will have to learn a lot here - I'm nowhere near your performance!
                  4. The linker time is the only real metric that we should use. In the future I will disregard all other times. The most useless time is probably my thinking time to come up with a meaningful edit.
                  5. One of the best arguments to defeat tight project schedules is the linker time. Thank you for this valuable tip.



                  • #19
                    Originally posted by lowflyer View Post
                    Ok, ok, ok, ok...

                    I give in.
                    Actually you misunderstand me. I don't want you to "give in". I think you actually care about this subject and you would honestly like to learn something.

                    So I encourage you to answer the questions I put to you about where the startup time comes from and whether developers have more or less control over that than they do over final link time. And I encourage you to go to the effort of timing the final link time of a middling C++ application, and try making trivial changes to that application and testing them.

                    I think you're going to come to a new conclusion: that the people who have put in engineering man-years of effort to make a faster linker have done so for a very good reason, and they deserve a metric ton of hero cookies. Perhaps you will begin to feel gratitude towards those people, and perhaps you will gain some insight into the plight of your developers, and possibly you can bring them LLD and they will love you for it.



                    • #20
                      Originally posted by lowflyer View Post
                      Ok, ok, ok, ok...

                      I give in.
                      You misunderstand me. I don't want you to give in. I think you give a damn and that's a good thing.

                      Several teams of engineers have invested man-years of labor working to make faster linkers. I think they deserve a metric ton of hero cookies.

                      Please, take the time to answer my questions re: where you feel that time is squandered in the edit-compile-test cycle, and please give some consideration to my assertion that programmers have zero control over how long the final link takes, versus startup time, which you assert takes so much longer that link time is irrelevant. In my own experience I am able to code a few jumps which take me near-instantly to my test case, and startup time is in the low milliseconds, but I've never been able to achieve such a time savings with a linker, so for me the linker is always the one thing I am waiting on. Maybe your developer team works differently; maybe you could suggest this technique to them.

                      I've also asked you to try actually timing your final link and considering whether you feel waiting for that 100x a day is a good use of your developer man-hours. LLD might save you some labour $$$ and/or help get your product to market sooner. Maybe your developers will think you're a superhero when you bring them LLD. Maybe they'll think you're a slave driver, if they enjoyed the little breaks, lol...

