So you would need a special 64-bit kernel with a 32-bit userland and a modified compiler. The advantage, of course, would be that pointers (and longs) are half the size, so twice as many of them fit in the same amount of cache and RAM.
Sure, you need a special kernel. I don't know if it will ever make it into the official kernel, though. And you need all libraries and the appropriate applications compiled for this ABI. One can argue that if you need to recompile anyhow, you might as well recompile for x86_64. But (1) not all applications are 64-bit safe at the source level, and (2) the developers of x32 claim a hefty performance advantage over x86_64, sometimes up to 30%.
Originally Posted by ChrisXY
In any case, given only 1GB of RAM or less, I would run 32-bit Linux, especially with a Core 2 processor. The Core 2 microarchitecture has an optimization (macro-op fusion, I believe, where a compare and the following conditional jump are fused into one operation) that only works in 32-bit mode. I don't know whether that means the OS must run in 32-bit mode or whether 32-bit tasks under a 64-bit kernel still get to use it, but for 64-bit tasks it is certainly not there. I think Nehalem was the first Intel microarchitecture to extend macro-op fusion to 64-bit mode. Certainly, Core 2 is a bit outdated nowadays, but there are still plenty of such machines running, and those are more likely to have less than 2GB anyhow. This should not apply to AMD processors, which have always had a superior 64-bit implementation, although today they are lagging behind Intel in all departments (except, maybe, Atom vs Brazos).
But given 2GB of RAM I would run a 64-bit kernel even with a Core 2 processor. There is a peculiarity in how a 32-bit Linux kernel handles memory that might cause a small performance hit, although some people claim it's negligible. The kernel keeps physical memory mapped into the kernel part of the address space, and with the default 3G/1G split there isn't enough virtual address space to map all of it; the remainder becomes HIGHMEM, which the kernel has to map in and out on demand. That already applies at 1GB but gets more severe the more memory you have. Now, I'm not a Linux kernel expert and I don't know all the ramifications of having a big HIGHMEM area, but I just don't want to go there. I've been happily using 64-bit Linux for more than 3 years already, but then, I have 8GB of memory, so it's really a no-brainer for me. And I like the idea of 64-bit anyhow.

There's an additional consideration for floating-point-heavy applications. x86_64 uses SSE2 for floating point by default. Of course, 32-bit applications can also use it if it's present, but they have to be compiled appropriately (runtime detection requires specially written applications/libraries). Since most 32-bit distributions are compiled for lesser architectures, they don't automatically use SSE2 even when it's present. I guess this advantage is mostly theoretical, but with a 64-bit OS and applications I don't have to worry about such things (and I do worry - I'm a geek).
There is, of course, the problem of some applications being 32-bit only. The ones I'm using are Skype, Acrobat Reader and WINE. For those I have a 32-bit root jail managed with the schroot package. I don't want to pollute my 64-bit environment with 32-bit crap, and a root jail lets me keep it isolated. And when I reinstall my Linux, I simply copy over the root directory of the jail and the schroot configuration, and there I have it. There was also a problem with Skype under Ubuntu 11.10: it depends on a 32-bit package that is not present on 64-bit installs (they officially support 10.04 LTS). That may be fixed by now, but I really don't want to spend too much time just to make Skype work.
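For reference, a minimal schroot setup for such a jail looks roughly like this. The chroot name, path, suite and user are made-up examples; the configuration keys are schroot's own:

```
# Create the jail once (Debian/Ubuntu host, run as root):
#   debootstrap --arch=i386 precise /srv/chroot/precise32
#
# Then add an entry to /etc/schroot/schroot.conf:
[precise32]
description=32-bit jail
type=directory
directory=/srv/chroot/precise32
personality=linux32
users=youruser
root-users=youruser
```

After that, `schroot -c precise32` drops you into the 32-bit environment, and the whole jail can be backed up by copying its directory plus this config entry.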
As several people have noted, in this discussion and previous related ones, it would also be interesting to see how a 32-bit userspace performs under a 64-bit kernel. This can easily be tested with a root jail - you can have a complete distribution in it (without the kernel, obviously) and do whatever you like without affecting the main installation. But Michael seems to be impervious to such pleas.
Last edited by kobblestown; 03-02-2012 at 04:40 AM.
True, but the performance you gain from macro-op fusion is negligible compared to what you can gain from having 8 additional general-purpose registers.
Originally Posted by kobblestown
So how do you switch?
I've read there is no way to upgrade. People just recommend making a copy of /home and reinstalling the system.
Maybe that's OK for some users, but I find that recommendation a bit dangerous because:
- I have settings for things like Apache and MySQL
- There is the /var folder where my www lives
- I changed settings I no longer remember to lower the latency of the sound card.
- I made some symlinks to fix libraries that didn't work by default.
- Where were those USB settings to make my Android phone visible?
- I made my /tmp folder a ram disk.
- I changed system settings to make the SSD faster.
And much more. Basically, two years of tweaking my system. I should have kept a log...
What would you do?
The reason to move to 64-bit is that programs like Darktable seem to be much more stable in 64-bit, and video rendering would apparently be faster.
Not sure why you bothered with 32-bit to begin with, then. Anyway, if you have another hard drive, I'd suggest the easiest method would be to install everything on that and copy your settings and stuff over from your old setup. That way, if you forget how you set something up, it won't be a big deal because you can just reboot into your old setup. When you think you've got everything configured, just clone that hard drive onto the one with your old setup.
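Whichever way you migrate, it's worth snapshotting the hand-tuned state first so the tweaks above can be reconstructed. A minimal sketch, assuming a Debian/Ubuntu system; the backup path and the directory list are placeholders for your own:

```shell
BACKUP=/tmp/migration-backup   # placeholder; point this at a real external disk
mkdir -p "$BACKUP"

# Archive the config locations to be carried over (skip any that don't exist).
for d in /etc /var/www; do
    [ -e "$d" ] && tar -czf "$BACKUP/$(basename "$d").tar.gz" "$d" 2>/dev/null
done

# Record the installed package list so it can be replayed on the new system.
dpkg --get-selections > "$BACKUP/packages.txt" 2>/dev/null || true
ls "$BACKUP"
```

On the new install, `dpkg --set-selections < packages.txt` followed by `apt-get dselect-upgrade` replays the package list, and the tarballs serve as a reference for the settings you no longer remember.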
Originally Posted by hamoid
I'm not sure about Ubuntu, but with Debian I'm aware you can install a 64-bit kernel alongside a 32-bit system. I have no idea how that works when half the system would use 32-bit binaries.
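That mixed setup is easy to detect from a running system: `uname -m` reports the kernel's architecture, while `getconf LONG_BIT` reports the word size of the userland you're running in. A quick check (both commands are standard):

```shell
kernel_arch=$(uname -m)            # e.g. x86_64 for a 64-bit kernel, i686 for 32-bit
userland_bits=$(getconf LONG_BIT)  # 32 or 64, per the C library the shell runs on
echo "kernel: $kernel_arch, userland: $userland_bits-bit"
```

A 64-bit kernel over a 32-bit userland shows `x86_64` alongside `32` here.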
Thanks for the ideas. I have an external eSATA disk which is perfect for that.
It's also a good way to test whether 64-bit really works fine, and whether 12.04 runs OK on my laptop.
Why did I bother with the 32-bit version? Because at that time there was no reason to install the 64-bit version.
Flash was not working on 64-bit, I was not editing video, and Darktable did not exist.