
ESR Switches To Threadripper But His GCC SVN-To-Git Conversion Could Still Take Months


  • #1

    Phoronix: ESR Switches To Threadripper But His GCC SVN-To-Git Conversion Could Still Take Months

    It looks like the saga of converting the GNU Compiler Collection (GCC) source tree from SVN to Git isn't over yet and could still take months until completion...


  • #2
    Isn't this a one-time conversion? Or are they going to have to do this repeatedly? If it's one time -- or even fewer than ten -- it seems like a good solution (read: cheaper) to throw a large cloud VM at it and just let it brute-force things...
    All opinions are my own not those of my employer if you know who they are.



    • #3
      Originally posted by Ericg View Post
      Isn't this a one-time conversion? Or are they going to have to do this repeatedly? If it's one time -- or even fewer than ten -- it seems like a good solution (read: cheaper) to throw a large cloud VM at it and just let it brute-force things...
      It's one-time, once they transition to that Git workflow.
      Michael Larabel
      https://www.michaellarabel.com/



      • #4
        Originally posted by ESR
        I just took delivery of a newer, faster surgical machine - an AMD Threadripper cranking 4.2Ghz on 64 hardware threads (thank you, System76 for the donation). I upgraded specifically to tackle the GCC problem.
        Oh good, someone upgraded ESR's tools again because he blamed his constant failures on them. Wonderful.

        Back at the end of 2014 he got a free custom computer from Tek Syndicate explicitly for "repository surgery". At least part of it was funded with donations. Specs:
        • 3.5GHz Xeon E5-1650v3 (6-core Haswell with HT)
        • 32GB PC4-2133 DDR4 ECC RAM
        • 512GB M.2 PCIe SSD
        • 3TB HGST NAS drive
        • 1000W PSU
        It's not a monster by today's standards, though it would've been a really strong, expensive contender at the time. It's also still significantly better than what most people start with. There's a video on YouTube about it (titled "Build: Linux Workstation for Eric S. Raymond | Meet To Mega Therion"). I don't suggest you watch it, as it's beyond tedious, but the info's there if you want to verify for yourself.

        Originally posted by ESR
        Early benchmarks on repocutter-in-Go suggest I'll get at least a 15x speedup relative to the Python version of reposurgeon, possibly rather more.
        Is that before or after he got another free high-end workstation? I think I remember this "benchmark" number from the last time he dodged responsibility, so I'm guessing before.

        Originally posted by ESR
        The main source of uncertainty here is how long it will take me to finish the Go translation of reposurgeon. Once that's done I don't expect a finished conversion to take more than a couple of weeks to produce. The good news is the translation is now over 90% done; the bad news is that the part still pending includes the part you really care about, the Subversion dump interpreter.
        Why on earth is he (re-)writing a tool that will still take him several weeks to get usable output from? Also, it's good that he did all the easy work. Normally that's not something to complain about, but it's been nearly a year with no progress and all he's done is whine, get free shit, do the easy stuff, and take credit.
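        For anyone wondering why the dump interpreter is the hard part: a Subversion dump is a stream of revision records with length-prefixed property and content sections, and the interpreter has to walk those records one by one. A minimal, hypothetical Go sketch (not reposurgeon's actual code) of the very first step -- spotting revision boundaries in a dump stream -- looks like this:

        ```go
        package main

        import (
        	"bufio"
        	"fmt"
        	"strings"
        )

        // countRevisions scans a Subversion dump stream and counts
        // "Revision-number:" headers, the records a converter must interpret.
        // Caveat: a real interpreter can't line-scan like this -- file bodies can
        // contain text that looks like a header, so it must skip content sections
        // by their declared Content-length. This is only an illustrative sketch.
        func countRevisions(dump string) int {
        	count := 0
        	sc := bufio.NewScanner(strings.NewReader(dump))
        	for sc.Scan() {
        		if strings.HasPrefix(sc.Text(), "Revision-number: ") {
        			count++
        		}
        	}
        	return count
        }

        func main() {
        	sample := "SVN-fs-dump-format-version: 2\n\n" +
        		"Revision-number: 0\nProp-content-length: 56\nContent-length: 56\n\n" +
        		"Revision-number: 1\n"
        	fmt.Println(countRevisions(sample)) // prints 2
        }
        ```

        The caveat in the comment is the whole point: honoring Content-length framing, property parsing, and node actions is what makes a correct interpreter genuinely fiddly, even if the format itself is simple.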

        Originally posted by ESR
        I realize this may not be the best place to ask, but the most effective way to speed things up would be to second me somebody with Go skills to help with finishing and debugging the translation. The blocker is not Go knowledge per se, it's the intrinsic complexity of the code I'm translating. Even with my skills and domain expertise this is a tough job.

        I will further add that a really motivated expert C programmer would do for the help. Go is an easy enough transition for a C expert that it would be practical to learn the language while assisting with this.
        And now he needs help, of course. Not because he doesn't understand Go, but because the code he wrote is so complex. Last time this came up, people popped into the reposurgeon repo and found a bunch of borderline-unmaintainable spaghetti code. It's not a difficult problem or a complex solution, but a bad implementation, with most of reposurgeon living in a single 637KB "reposurgeon" file of roughly 14K lines.

        Good news, though: he's writing his new Go replacements in the master branch of the same Git repository, because of course that makes sense. Keep in mind, he's really good at Go. That's why he's only been working on porting this shit code for 9 months. The first commit on the Go version of repocutter is from 2018-08-23. It's okay, though, it was his computer slowing him down, so he got a free one just to figure out he still needs help.

        Please, no one give him any more free stuff just because he whines. We have kernel developers contributing far more useful code who are stuck on far worse systems than the ones he gets to upgrade from. Also, please don't assign him any more important blocking tasks. And for goodness' sake, don't accept his opinions. Less than 20% of what he says has any value.



        • #5
          Wow man, that was pretty harsh criticism.



          • #6
            By October, he'll have upgraded to a full Epyc cabinet so he can start working on the machine learning program that will write reposurgeon for him.



            • #7
              Perhaps he could improve the performance of his machine by switching to Xfce like the Swedish people?



              • #8
                An elegant solution that takes longer to solve a problem is almost always preferable to an unwieldy but fast one, given that time itself is not your primary constraint, and in this case it certainly is not. As long as his algorithm processes commits much faster than they are being added, what would it matter if it took years to run? He could then move on to solving other problems while he waited on the result. Instead, little progress has been made and he has an unwieldy codebase. One would also question whether Go is even a good language for writing code that must be fast and memory-efficient, as it isn't known for this. The fact that he has written an inelegant program in Go is also quite shameful, because elegance is one of the things Go is known for.



                • #9
                  Originally posted by Terrablit View Post
                   Please, no one give him any more free stuff just because he whines. We have kernel developers contributing far more useful code who are stuck on far worse systems than the ones he gets to upgrade from. Also, please don't assign him any more important blocking tasks. And for goodness' sake, don't accept his opinions. Less than 20% of what he says has any value.
                  I can't agree with this vigorously enough. Anyone who thinks this is an exaggeration or hatchet job against ESR -- I highly recommend finding out for yourself and reading some of his code. It's truly some of the most awful, bug-ridden spaghetti you'll ever see.

                  If you've only ever read some of his (many) grandiose self-congratulations before and have had no contact with his code, you might have assumed he's a bit of an egomaniac who is just slightly exaggerating his abilities. The reality couldn't be further from the truth -- he's a pariah with a god complex. 90% of his OSS contributions have been adding as many self-attributions as possible to other people's codebases. Probably 50% of the terminfo sources in ncurses are just ESR giving credit to himself and typing out his own name as many times as possible.



                  • #10
                    Originally posted by cb88 View Post
                    An elegant solution that takes longer to solve a problem is almost always preferable to an unwieldy but fast one, given that time itself is not your primary constraint, and in this case it certainly is not. As long as his algorithm processes commits much faster than they are being added, what would it matter if it took years to run? He could then move on to solving other problems while he waited on the result. Instead, little progress has been made and he has an unwieldy codebase. One would also question whether Go is even a good language for writing code that must be fast and memory-efficient, as it isn't known for this. The fact that he has written an inelegant program in Go is also quite shameful, because elegance is one of the things Go is known for.
                    See, that's just you looking at the whole debacle through the eyes of a logically minded person. To ESR, this is just an opportunity to heap congratulations on himself, whilst also twisting the truth to get as much free stuff as possible from people who are unaware of his track record.

                    If you think ESR has ever done any good for the open source community, you might just be a victim of his propaganda and self-marketing. His code is uniformly awful and his social/collaboration skills are non-existent.

                    It's no coincidence that he leaves a very bitter taste behind wherever he inserts himself. The large (and ever growing) body of people who publicly criticise him don't do so out of jealousy or for no reason at all.

