Mesa CI Optimization Could Provide Big Bandwidth Savings


  • Mesa CI Optimization Could Provide Big Bandwidth Savings

    Phoronix: Mesa CI Optimization Could Provide Big Bandwidth Savings

    You may recall that earlier this year it was reported that X.Org/FreeDesktop.org may have to cut CI services for developers over the cloud expenses associated with the continuous integration service for the likes of Mesa, the X.Org Server, and other components. CI usage was leading to so much bandwidth consumption that the X.Org Foundation is facing a potential ~$70k USD cloud bill this year, largely from its continuous integration setup...

    http://www.phoronix.com/scan.php?pag...I-Cost-Savings

  • #2
    Premature optimization is the root of all evil. Except when it saves you $70K.



    • #3
      About time.

      What bothers me is: why is this change in the Mesa repository? If I were in charge of their CI setup, I'd begin by reconfiguring the builders. Make it so that you technically can't even run something that would burn through money. Set up caches for everything known, and restrict everything else.
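
      As a sketch of the kind of builder restriction being suggested here (illustrative only, not Mesa's actual configuration; the job name and cache paths are hypothetical), a GitLab CI job can carry a compiler cache between runs and a hard timeout so a runaway job can't burn through money:

      ```yaml
      # Hypothetical .gitlab-ci.yml fragment, not Mesa's real config.
      build:
        stage: build
        cache:
          key: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
          paths:
            - ccache/          # compiler cache survives between jobs
        variables:
          CCACHE_DIR: "$CI_PROJECT_DIR/ccache"
        timeout: 30m           # hard cap: a stuck job can't run up costs
        script:
          - ccache --zero-stats
          - meson setup build/
          - ninja -C build/
          - ccache --show-stats
      ```

      With warm caches, subsequent builds mostly recompile only what changed, which cuts both compute time and the data pulled per run.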



      • #4
        This makes me wonder how long it'll be until clouds become cost-prohibitive and whether we'll start seeing companies and projects go back to self-hosting. $70K covers a lot of hardware and electricity costs.



        • #5
          Originally posted by intelfx View Post
          If I was in charge of their CI setup
          It's open source. You could contribute.....but you're not.

          Gotta love those people who say "WhY DoN'T tHeY jUsT ..... ?", but then contribute nothing of value.

          The armchair expert strikes again.



          • #6
            Originally posted by intelfx View Post
            About time.

            What bothers me is: why is this change in the Mesa repository? If I were in charge of their CI setup, I'd begin by reconfiguring the builders. Make it so that you technically can't even run something that would burn through money. Set up caches for everything known, and restrict everything else.
            I'm a bit surprised they racked up $70k... I would have thought they'd have anticipated or measured costs early on, or prior to the transition while evaluating it.

            They probably could have used a self-hosted CI service with cloud integrations and put it on an affordable bare-metal rental server like Hetzner offers; it wouldn't cost so much then. I'm guessing a lot of unnecessary builds were being run or something... I didn't read beyond the article to see if there was a breakdown of how they managed a $70k bill.



            • #7
              Originally posted by JustinTurdeau View Post

              It's open source. You could contribute.....but you're not.

              Gotta love those people who say "WhY DoN'T tHeY jUsT ..... ?", but then contribute nothing of value.

              The armchair expert strikes again.
              If they can afford the risk of running up $70k in CI costs, they could have afforded to pay someone much less to assist, if they lacked the proper domain experience, rather than expecting voluntary contributions.

              FWIW, I contribute to other projects, some where budget is a real up-front concern for how things are approached, which unfortunately means more effort and takes longer than throwing money at it and hoping for the best (not saying that's what happened here; I don't know the details that led up to that bill).



              • #8
                Originally posted by JustinTurdeau View Post
                It's open source. You could contribute.....but you're not.

                Gotta love those people who say "WhY DoN'T tHeY jUsT ..... ?", but then contribute nothing of value.

                The armchair expert strikes again.
                I hate this argument. Just because something is open source, that doesn't mean you're able to contribute. Even when something directly affects you, that doesn't mean you have the skills, knowledge, or authority to change something, let alone in a reasonable amount of time. Most of the time when someone proposes an idea in this sort of context, they just found out what the problem was, so even if they were going to contribute, there's a lot to catch up on. In this particular situation, it seems the decision has already been made, so at this point any armchair suggestions are more of a "what if" conjecture, since it is too late to take a different route.
                Also, making suggestions is a contribution of value. Sure, not everyone has a firm grasp of how everything works and therefore not everything they have to say is meaningful, but just because something is open source, that doesn't make it immune to criticism.

                Besides, consider the contributors to the Linux kernel. They don't want to be armchair experts, they actually want to make a difference. So they do - they write their patches and submit them. Then Linus sees their work, and if it isn't up to his standards, it's denied, sometimes (not so much anymore) in a discouraging manner. Just because something is open source doesn't mean you're going to get what you want or that your actual work and contributions will be accepted.

                The point of a comments section is for conversation and ideas. Perhaps intelfx's idea is ok but needs work. A comments section is a good place to spark interest, where perhaps if there is time to make a difference, one could actually be made, or encouraged. All I see you doing is shutting down any chance of a different and possibly better approach. And you accuse him of contributing nothing of value?
                Last edited by schmidtbag; 07-03-2020, 10:07 AM.



                • #9
                  Originally posted by JustinTurdeau View Post
                  It's open source. You could contribute.....but you're not.
                  Projects don't let random people design and set up their CI infrastructure; what the fuck are you talking about? At best, you can add scripts for tests and such if you're just a contributor.



                  • #10
                    I believe the expectation was that when GitLab offered to host instances of the Mesa/fd.o repositories as an open source project, a lot of people thought hosting and bandwidth were fully covered. It seems GitLab instead applied a credit for however much bandwidth an average project uses.

                    Mesa and associated projects ran a lot of automated CI builds, and I'm guessing they shipped full repository clones per build and sent a lot of artifacts around to hardware for integration testing, so that credit got consumed quickly.

                    One of the first things done once the problem was recognized was to disable the per-push CI and instead only run build tests as part of a pre-merge workflow. Some additional work went into reducing the data transfers to the hardware used for integration testing.

                    I'm guessing they've now also smartened up the pipeline so it no longer starts from an empty repo, but instead pulls the commit into an existing clone. That saves bandwidth and should be fine; it might introduce some mild complications into the build logic, but probably nothing too bad.
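
                    That kind of setup can be sketched in GitLab CI terms (a hypothetical .gitlab-ci.yml fragment, not Mesa's actual config): GIT_DEPTH requests a shallow clone instead of full history, GIT_STRATEGY: fetch reuses an existing checkout and only fetches new commits, and a merge-request rule restricts builds to pre-merge pipelines rather than every push.

                    ```yaml
                    # Hypothetical sketch; job name and build commands are illustrative.
                    variables:
                      GIT_DEPTH: "20"        # shallow clone: last 20 commits, not full history
                      GIT_STRATEGY: fetch    # reuse the runner's existing checkout if present

                    build-test:
                      stage: build
                      rules:
                        # run only for merge requests, not on every push
                        - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
                      script:
                        - meson setup build/
                        - ninja -C build/
                    ```

                    Together those two knobs address exactly the costs described above: shallow fetches avoid shipping the full repository per build, and pre-merge-only pipelines cut the sheer number of builds run.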


