Mesa CI Optimization Could Provide Big Bandwidth Savings


  • #11
    Originally posted by skeevy420 View Post
    This makes me wonder how long it'll be until clouds become cost prohibitive and if we'll start seeing companies and projects go back to self-hosting. 70K covers a lot of hardware & electricity costs.
    This is a "we configured it like shit" situation where someone just migrated stuff from a local system to the cloud without changing all assumptions that were made in the system architecture, like that for example on a LAN you don't give a shit about data caps so you can just have your servers constantly sending stuff to each other. If moving to the cloud you need to move around things or join multiple "servers" into a single larger one to avoid that situation.

    It's not a normal price for what they are doing.



    • #12
      They did this to themselves. Nobody asked them to run their own GitLab instance. They did it out of pride.
      That $70k/year would be $0 if they had just used gitlab.com from the beginning.



      • #13
        They made the mistake of not closely monitoring their costs and not realising they needed someone to set the whole thing up properly. This should've been flagged in week one. I'm sure they won't make those mistakes again.

        Being a Linux expert is not qualification enough for tuning potentially expensive cloud deployments.



        • #14
          Originally posted by paulpach View Post
          They did it out of pride.
          No, they did it out of commitment to open source. gitlab.com runs a build with a bunch of proprietary extensions.



          • #15
            Originally posted by polarathene View Post
            I'm a bit surprised they racked up 70k... would have thought they'd have anticipated/measured costs early on or prior to transition while evaluating it.
            The 70k figure is based on a projection of the year's cost, made back in February (before work started to identify where the excessive egress was coming from and reduce it). So yeah, egress (and related costs) were higher than expected, but it was caught early and work started to address the situation.
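
            For what it's worth, that kind of straight-line projection is trivial to reproduce. A sketch in Python, with placeholder figures standing in for the real billing data:

                # Straight-line annual projection from partial-year billing.
                # Spend and day count are placeholders, not the real bill.
                spend_so_far_usd = 8_800   # hypothetical spend, Jan 1 - Feb 15
                days_elapsed = 46

                projected_annual = spend_so_far_usd / days_elapsed * 365
                print(f"Projected annual cost: ${projected_annual:,.0f}")  # ~$69,826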



            • #16
              Wasting resources on cloud stuff is actually quite common. Not at this scale, but it all adds up when small projects waste a few resources here and there. I've worked in a few places where they only started optimizing CI, or even the websites, once it all became almost unbearable. That's not premature optimization; they only wake up when the overhead is wasting, say, 80 or 90% of the budget. IMHO that's already too late.

              Especially with small open source projects, it annoys me that they complain about their small budget and hosting costs but refuse to try any cheaper plans. For example, if you run a WordPress site with almost no audience and a load average close to 0.05, why not host it yourself or use one of the free services instead of starting with the $25 plan? Same thing if you have a personal Git server with almost no users and no load: why not pick a cheap $10 VPS instead of the $25 minimum service tier? And after choosing the hardware, you can easily waste space and memory by running bloated distros and putting every service in a dedicated container that shares no libraries.



              • #17
                Originally posted by caligula View Post
                Wasting resources on cloud stuff is actually quite common. Not at this scale, but it all adds up when small projects waste a few resources here and there. I've worked in a few places where they only started optimizing CI, or even the websites, once it all became almost unbearable. That's not premature optimization; they only wake up when the overhead is wasting, say, 80 or 90% of the budget. IMHO that's already too late.

                Especially with small open source projects, it annoys me that they complain about their small budget and hosting costs but refuse to try any cheaper plans. For example, if you run a WordPress site with almost no audience and a load average close to 0.05, why not host it yourself or use one of the free services instead of starting with the $25 plan? Same thing if you have a personal Git server with almost no users and no load: why not pick a cheap $10 VPS instead of the $25 minimum service tier? And after choosing the hardware, you can easily waste space and memory by running bloated distros and putting every service in a dedicated container that shares no libraries.
                fd.o's volunteer sysadmin is overworked, and cloud hosting offloads him. The alternative is hiring a sysadmin (not to mention paying for hardware, etc.).

                So yeah, it may be easy to self-host a low-traffic blog, but that is not what fd.o is.



                • #18
                  Originally posted by JustinTurdeau View Post
                  Premature optimization is the root of all evil. Except when it saves you $70K.
                  How is fixing an existing issue in the CI environment premature optimization?



                  • #19
                    You explicitly don't want to cache the repository. You want your build agent to be a clean environment each time in order to ensure reproducible builds. Caching it is a rookie DevOps mistake that will bite them within a year (it always does), when some transitive dependency gets dropped or modified but the old copy hangs around.
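
                    A minimal sketch of what "clean each time" can look like, assuming a plain git CLI on the agent; the repository URL is a placeholder. A fresh shallow clone per build keeps the checkout clean without pulling the full history:

                        import subprocess
                        import tempfile

                        REPO_URL = "https://gitlab.example.org/project/repo.git"  # placeholder

                        def clean_checkout(ref: str) -> None:
                            # Fresh temporary directory per build: nothing survives
                            # from the previous run, so a dropped or modified
                            # dependency can't silently hang around.
                            with tempfile.TemporaryDirectory() as workdir:
                                subprocess.run(
                                    ["git", "clone", "--depth", "1", "--branch", ref,
                                     REPO_URL, workdir],
                                    check=True,
                                )
                                # ... run the actual build steps against workdir ...

                        clean_checkout("master")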

                    If you use a modern SCM, you can use webhooks to notify Jenkins of events. The most sensible approach is to send webhook notifications for merges into the master/release branches and when a pull request is opened (or an open pull request is modified). That will massively reduce bandwidth costs.
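
                    As an illustration only, a stripped-down receiver in Python (Flask and requests assumed available, and GitLab-style webhook payloads assumed) that forwards just master pushes and merge-request events to a Jenkins remote build-trigger URL; the endpoint, job name, and token are placeholders:

                        from flask import Flask, request
                        import requests

                        app = Flask(__name__)

                        # Placeholder trigger URL and token; Jenkins exposes this
                        # shape of URL when "Trigger builds remotely" is enabled.
                        JENKINS_TRIGGER = "https://jenkins.example.org/job/ci/build"
                        TRIGGER_TOKEN = "changeme"

                        @app.route("/webhook", methods=["POST"])
                        def webhook():
                            event = request.get_json(silent=True) or {}
                            kind = event.get("object_kind")  # set by GitLab payloads

                            # Build only on pushes to master and on merge-request
                            # events; pushes to topic branches are ignored, so the
                            # CI fleet is not re-fetching the tree for every push.
                            is_master_push = (kind == "push"
                                              and event.get("ref") == "refs/heads/master")
                            is_merge_request = kind == "merge_request"

                            if is_master_push or is_merge_request:
                                requests.post(JENKINS_TRIGGER, params={"token": TRIGGER_TOKEN})
                                return "build triggered", 200
                            return "ignored", 200

                        if __name__ == "__main__":
                            app.run(port=8080)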



                    • #20
                      Originally posted by JustinTurdeau View Post
                      Premature optimization is the root of all evil. Except when it saves you $70K.
                      By that point, it's overdue optimization.

