Red Hat Tries To Address Criticism Over Their Source Repository Changes
Originally posted by Almindor
Of course, and nothing prevents you from using, say, dedicated cloud hosting with a fairly beefy setup. That is the perfect use case, but I still fail to see where Red Hat adds value compared to something like Arch (in fact, I'd argue Arch is just better).
glibc is backward compatible: you can run code built against 2.18 on a 2.31 system. It is not forward compatible, though: you can't run code compiled against glibc 2.31 on a 2.18 system.
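The glibc version a system provides can be queried at runtime. A minimal sketch in Python, assuming a glibc-based Linux host (`gnu_get_libc_version` is a real glibc entry point, but it does not exist on musl or other libcs):

```python
import ctypes

# Load the C library and ask which glibc version is running.
# gnu_get_libc_version() returns a static string like "2.31".
libc = ctypes.CDLL("libc.so.6")
libc.gnu_get_libc_version.restype = ctypes.c_char_p

version = libc.gnu_get_libc_version().decode()
print(f"running against glibc {version}")
```

A binary built against this version or older will load; one built against a newer glibc fails at startup with a `version GLIBC_x.y not found` error.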
Beyond that, I don't really know why anyone cares that much either.
I've been running a ton of Discord bots on a Pi plugged in behind the TV, which is Debian-based IIRC (not sure, because it's not the default Raspbian). That's been bulletproof for like 2 years now, probably better uptime than Facebook and Twitter.
Originally posted by Almindor
Pretty much everything you put here has been solved by Docker (and others) for 10 or so years now, maybe 6 of those years "production grade" at this point. Hence my question: who would buy such a thing now?
Originally posted by fitzie
Docker only splits the OS that the hardware sees from the OS that the developers see. That's a good thing, but it doesn't solve the stable-but-maintained problem: you can use any old Docker image forever and for free, but once you want to update it for a security or bug fix, you have a problem.
The days of "2 years uptime" as a flex are over. Anything over 3 months is considered a security liability now. There are exceptions, of course, where you WANT long uptime, such as DB hosts, but for most service-level stuff you want to cycle things.
I can understand that self-hosting requires stability at the first hardware/OS/hypervisor layer, and I guess having commercial support there could have value, but it seems more like a 2000s "placate the CTO" kind of thing.
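The cycling policy described above is easy to monitor. A minimal sketch, assuming a Linux host (`/proc/uptime` holds seconds since boot; the 90-day threshold is illustrative, taken from the "3 months" figure in the post):

```python
# Read uptime in seconds from /proc/uptime (Linux-specific; the first
# field is seconds since boot) and flag hosts past the cycling window.
with open("/proc/uptime") as f:
    uptime_seconds = float(f.read().split()[0])

uptime_days = uptime_seconds / 86400  # seconds per day
needs_recycling = uptime_days > 90    # illustrative 3-month ceiling

print(f"uptime: {uptime_days:.1f} days, recycle due: {needs_recycling}")
```

In practice a check like this would feed a fleet dashboard or alerting rule rather than be run by hand.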
Originally posted by Almindor
You never run things for too long in modern setups. Most companies, especially ones with finance involved, require a maximum of 30 days' uptime for their servers, to ensure updates are applied and to somewhat mitigate the exposure if someone managed to get a backdoor in.