Maybe I should be glad I don't always compile the latest and greatest kernel for all the relatively new 32 bit only X86 hardware I have under maintenance.
Linux's 32-Bit Kernel Has Been Buggy Since Being Mitigated For Meltdown
It's remarkable how many people in the comments are confusing 64-bit processing with 64-bit memory addressing. 32-bit programs can use 64-bit memory addressing; 64-bit processing is about enabling the CPU to do larger calculations, store larger numbers in its registers, and improve performance by enabling new optimisations.
- Likes 1
Originally posted by skeevy420
So do I; that doesn't change the need to use the best hardware possible in critical environments like hospitals, where every second matters.
- Likes 1
Most Linux developers are employed by companies tied to the cloud or server markets. They do not care about desktop users. All they care about is recent enterprise server hardware, made within the past four years, that would fill some data center. These sorts of people rule the roost. Ask Con Kolivas about it. Want to help Linux's desktop users? Linux's corporate developers will tell you to get lost.
They do not care about desktop users, home users, schools, libraries, third-world countries, and so on. They do not care about the e-waste problem; they are part of it. They use a server for maybe a few years and then toss it into a landfill or a third-world scrap heap where people get lead poisoning from it. Maybe these companies should take some environmental and social responsibility for once, and donate used server hardware, with Linux installed, to schools and libraries rather than toss it all into landfills.
- Likes 1
Originally posted by Spazturtle
It's remarkable how many people in the comments are confusing 64-bit processing with 64-bit memory addressing. 32-bit programs can use 64-bit memory addressing; 64-bit processing is about enabling the CPU to do larger calculations, store larger numbers in its registers, and improve performance by enabling new optimisations.
Let's debunk a few myths. 32-bit CPUs are still perfectly fine and work well for many applications. Support for 32-bit legacy programs on 64-bit CPUs does not add overhead, because the 32-bit and 64-bit modes share the same CPU circuitry and largely the same instruction set, meaning there is virtually no cost to supporting 32-bit.
Supporting 32-bit is important because of the large number of applications that only run in 32-bit mode. Many companies, for instance, only have a license for the 32-bit version of an application, and many desktop users play 32-bit games for which there will never be a 64-bit release.
- Likes 1
Originally posted by Neraxa
...
On top of that, these 3 gigs include the disk cache, if you thought this wasn't bad enough. Also, PAE incurs kernel CPU overhead.
Last edited by xpue; 30 July 2019, 11:54 AM.
Originally posted by xpue
32-bit does NOT allow a specific process to access more than 3 GB of RAM, however. More specifically, many programs run out of address space, not even of actual RAM.
On top of that, these 3 gigs include the disk cache, if you thought this wasn't bad enough. Also, PAE incurs kernel CPU overhead.
- Likes 1
This whole thread has been very interesting. But I think a lot of you are missing the point of why it's possible for a significant bug to go unnoticed in a 32-bit Linux kernel: there aren't enough users to support it.
A quick disclaimer: I'm the maintainer of the Liquorix kernel. I just removed support for 32-bit in the 5.2 kernel due to a report that my 32-bit kernels don't work because of some super obscure bug [1]. What this tells me is that no one is testing 32-bit anymore beyond, maybe, checking that it boots in a VM.
Plus, if someone is using a 32-bit OS, what else are they not upgrading? The feedback loop from the introduction of a 32-bit-specific bug to its detection is approaching over a year. This means no one cares anymore, and the users who do care aren't willing to beta test by upgrading early and often, only when they're forced to by contracts, EOL software, client pressure, etc. Then when the bug is finally reported it's always extraordinary, like rampant memory corruption, something that should have been caught within days of release.
Also, I looked at the stats for downloads of 32-bit kernels versus 64-bit. For Liquorix it's 1 out of every 250, on average. To put that in perspective, building a 32-bit kernel for both PAE and non-PAE takes twice the storage and twice the CPU power, but almost no one uses the result. The only way you could feasibly support this configuration is if your test hardware, process, and time are subsidized by an organization with spare time, hardware, and people to go around. For most people in open source, that means 32-bit is only being compiled because it can be compiled, not because the compiled code works reliably, or at all.
Yep, 32-bit as the OS is dead, and all these announcements that 32-bit images (and even packages) are going away are symptoms that we've supported it for too long.
[1] https://techpatterns.com/forums/about2734.html
- Likes 2
Originally posted by Neraxa
64-bit CPUs don't even use 64-bit addressing. They use 40- or 48-bit addresses. No consumer or business computer has enough RAM to fill even 40 bits.