Red Hat Tries To Address Criticism Over Their Source Repository Changes


  • Originally posted by mSparks View Post
    Cloud pricing scales perfectly:
    For the computing power of a $15 Raspberry Pi, you will pay about $15 per month to host it in the cloud.
    For the computing power of a $15,000 AMD EPYC server, you will pay about $15,000 per month to host it in the cloud.

    For some use cases it can come out about equal once you add in power consumption and network traffic costs (e.g. a very large amount of network traffic on a small amount of compute).

    But I think you would cry if you costed out how much it would take to host/expand that self-hosted setup in the cloud.

    For us GCP would have been like $35 million a year, and AWS was like double or triple that. Most of which is now served by a $200-a-month DigitalOcean droplet or two, plus locally hosted server hardware for all the heavy stuff.
    Of course, nothing prevents you from using, say, dedicated cloud hosting with some fairly beefy setup. This is the perfect use case, but I still fail to see why Red Hat would add value compared to even something like Arch (in fact I'd argue Arch is just better).
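
    As a back-of-the-envelope sketch of the break-even implied by the figures quoted above (the self-hosting running-cost number is purely an assumption):

    # breakeven.py - when does buying the hardware beat renting the same compute?
    hardware_cost = 15_000      # one-off server purchase (USD), figure quoted above
    cloud_monthly = 15_000      # equivalent cloud compute (USD/month), figure quoted above
    selfhost_monthly = 500      # assumed power/colo/network running costs (USD/month)

    breakeven_months = hardware_cost / (cloud_monthly - selfhost_monthly)
    print(f"self-hosting pays for itself after ~{breakeven_months:.1f} months")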



    • Originally posted by Almindor View Post

      Of course, nothing prevents you from using, say, dedicated cloud hosting with some fairly beefy setup. This is the perfect use case, but I still fail to see why Red Hat would add value compared to even something like Arch (in fact I'd argue Arch is just better).
      For me, the "value added" of the RHEL clones was that they were the "best" place to get a fully patched system with an old enough glibc that builds don't run into glibc backward compatibility issues when distributing binaries.

      glibc is forward compatible, so you can run code built against 2.18 on a 2.31 system, but not backward compatible: you can't run code compiled against glibc 2.31 on a 2.18 system.
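
      A minimal sketch of how to check that for a given binary, assuming binutils' objdump is installed (the binary path is just an example):

      # check_glibc.py - which glibc symbol versions does this binary require?
      import re
      import subprocess
      import sys

      binary = sys.argv[1] if len(sys.argv) > 1 else "./mytool"  # example path

      # `objdump -T` dumps the dynamic symbol table, with version tags such as
      # GLIBC_2.18 or GLIBC_2.31 next to each imported symbol.
      out = subprocess.run(["objdump", "-T", binary],
                           capture_output=True, text=True, check=True).stdout
      versions = sorted(set(re.findall(r"GLIBC_\d+\.\d+(?:\.\d+)?", out)),
                        key=lambda v: [int(x) for x in v.split("_")[1].split(".")])
      print("glibc symbol versions referenced:", ", ".join(versions) or "none")
      if versions:
          # the binary only runs on hosts whose glibc is at least the newest one listed
          print("needs host glibc >=", versions[-1].split("_")[1])

      In other words: build on the oldest glibc you intend to support and the binary runs on anything newer, which is exactly what a long-lived RHEL-clone build box buys you.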

      Beyond that, I don't really know why anyone cares that much either.

      I've been running a ton of Discord bots on a Pi plugged in behind the TV, which is Debian-based IIRC (not sure, because it's not the default Raspbian). That's been bulletproof for like 2 years now, probably better uptime than Facebook and Twitter.



      • Originally posted by Almindor View Post

        Pretty much everything you put here has been solved by Docker (and others) for 10 or so years now, maybe 6 years "production grade" at this point. Hence my question: who would buy such a thing now?
        Docker only splits the OS that the hardware sees from the OS that the developers see. That's a good thing, but it doesn't solve the "stable but maintained" problem: you can use any old Docker image forever and for free, but once you want to update it for a security or bug fix, then you have a problem.
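
        As a rough illustration of that maintenance gap, here's a sketch that checks whether a pinned base image has drifted behind its upstream tag, i.e. whether a rebuild is due to pick up fixes (the image name is just an example, and it assumes the docker CLI plus a locally present copy of the image):

        # image_drift.py - has our pinned base image fallen behind its upstream tag?
        import subprocess

        IMAGE = "debian:bookworm-slim"  # example base image; assumed to be present locally

        def image_id(ref: str) -> str:
            # `docker inspect --format {{.Id}}` prints the content-addressable image ID
            out = subprocess.run(["docker", "inspect", "--format", "{{.Id}}", ref],
                                 capture_output=True, text=True, check=True)
            return out.stdout.strip()

        local_id = image_id(IMAGE)                           # the copy we last built against
        subprocess.run(["docker", "pull", "-q", IMAGE], check=True)   # refresh the tag
        if image_id(IMAGE) != local_id:
            print("base image moved upstream: rebuild and redeploy to pick up the fixes")
        else:
            print("still on the latest base image")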



        • Originally posted by fitzie View Post

          Docker only splits the OS that the hardware sees from the OS that the developers see. That's a good thing, but it doesn't solve the "stable but maintained" problem: you can use any old Docker image forever and for free, but once you want to update it for a security or bug fix, then you have a problem.
          You never run things for too long in modern setups. Most companies, especially ones with finance involved, require a maximum of 30 days' uptime for their servers, to ensure updates are applied and to somewhat mitigate possible exposure if someone manages to get a backdoor in.

          The days of "2 years uptime" as a flex are over. Anything over 3 months is considered a security vulnerability now. There are exceptions of course where you WANT long uptime, such as DB hosts, but for most service-level stuff you want to cycle things.

          I can understand that self-hosting requires stability on the first hw/os/hypervisor layer, and I guess having commercial support there could have value, but it seems more like a back-in-the-2000s "to placate the CTO" kind of thing.



          • Originally posted by Almindor View Post

            You never run things for too long in modern setups. Most companies, especially ones with finance involved, require a maximum of 30 days' uptime for their servers, to ensure updates are applied and to somewhat mitigate possible exposure if someone manages to get a backdoor in.

            The days of "2 years uptime" as a flex are over. Anything over 3 months is considered a security vulnerability now. There are exceptions of course where you WANT long uptime, such as DB hosts, but for most service-level stuff you want to cycle things.

            I can understand that self-hosting requires stability on the first hw/os/hypervisor layer, and I guess having commercial support there could have value, but it seems more like a back-in-the-2000s "to placate the CTO" kind of thing.
            I'll just point out that uptime for a service =/= uptime for servers. With appropriate redundancy you can provide a service continuously whilst restarting individual servers. It does require some discipline and an understanding of the necessary architecture, but it is possible. This is obvious, but a lot of people who should know better confuse the two.
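
            A sketch of the idea, with hypothetical drain/restart/health-check helpers standing in for whatever load balancer and tooling you actually use:

            # rolling_restart.py - the service stays up while every server gets cycled
            import time

            SERVERS = ["app-01", "app-02", "app-03"]  # example pool behind a load balancer

            def drain(host):     # hypothetical: stop routing new traffic to this host
                print(f"draining {host}")

            def restart(host):   # hypothetical: apply updates and reboot (ssh, Ansible, ...)
                print(f"patching and rebooting {host}")

            def healthy(host):   # hypothetical: poll the host's health-check endpoint
                print(f"health-checking {host}")
                return True

            def restore(host):   # hypothetical: put the host back into rotation
                print(f"re-enabling {host}")

            for host in SERVERS:         # one node at a time, so capacity never drops to zero
                drain(host)
                restart(host)
                while not healthy(host): # wait until it passes its health check again
                    time.sleep(5)
                restore(host)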



            • Originally posted by Almindor View Post

              You never run things for too long in modern setups. Most companies, especially ones with finance involved, require a maximum of 30 days' uptime for their servers, to ensure updates are applied and to somewhat mitigate possible exposure if someone manages to get a backdoor in.

              The days of "2 years uptime" as a flex are over. Anything over 3 months is considered a security vulnerability now. There are exceptions of course where you WANT long uptime, such as DB hosts, but for most service-level stuff you want to cycle things.

              I can understand that self-hosting requires stability on the first hw/os/hypervisor layer, and I guess having commercial support there could have value, but it seems more like a back-in-the-2000s "to placate the CTO" kind of thing.
              I run a Ceph cluster at home. I reboot the nodes all the time and apply security fixes, but upgrading it to a major release every six months would be a total disaster. For software development it is the same story: ABI changes are a huge deal, and ABI stability is rarely something upstream pays much attention to. There's plenty of demand for a long-supported release, which is why everyone is grabbing copies/clones of RHEL.



              • Title should be: "Red Hat Fails To Address Criticism Over Their Source Repository Changes"
