This is largely correct in principle, but wrong in ways that make my brain hurt.
Originally Posted by snadrus
To clarify - the instructions themselves are not any shorter. Shorter instructions wouldn't allow more of them to run anyway, since the CPU is limited by the number of instructions it can execute, not by their size.
What is shorter are the memory addresses the program uses, i.e. the pointer values the instructions operate on. Anyone who's looked at C/C++ code has seen pointers all over the place, and all of them are half the size in x32 code. General test cases have shown roughly 10% savings overall for an average codebase, although of course some programs will be more affected than others.
The smaller pointers do allow more of the program to be cached on the chip in various places, which is what can give a speedup.
example - code like

mov rax, [rbx]  ; x86-64: loads an 8-byte pointer
mov eax, [ebx]  ; x32: the same kind of load moves only 4 bytes

The mov instruction itself is roughly the same size either way; it's the size of the pointer value it moves (and that sits in memory and in the caches) that changes. You have to address things a bit differently, but you get the point.
Last edited by smitty3268; 04-26-2013 at 09:57 PM.
So that means a catch-all system for typical desktop use would have something like this?
But I still don't see the point of x32. I mean, if you want to run 32-bit stuff on a 64-bit distro, you pull in the standard 32-bit libraries from the package manager (glibc, libstdc++, alsa, pulse, etc.), and everything is fine and dandy, since the combination of i686 and x86-64 libraries covers most, if not all, of what is needed for both 64-bit and 32-bit support on a 64-bit operating system.
I mean, even Windows has been doing this for more than 10 years with its WoW64 (Windows on Windows) implementation: a native x64 kernel with both the 64-bit Win32 stack and a stripped-down 32-bit Win32 stack (sans support for 8- and 16-bit applications). Why go through the extra trouble with x32?
Last edited by Sonadow; 04-27-2013 at 04:19 AM.
Use case?
And that's why you won't see much deployment on main-stream distros. Maintaining a whole 3rd set of libs & arch just isn't worth the performance gain.
Originally Posted by smitty3268
x32 just doesn't make any sense outside the academics and few very specific cases.
For general use, just stick to 64-bit for nearly everything, and maybe install 32-bit libs if you really need to run some legacy binary that isn't available in 64-bit (e.g. I need them to run Skype).
You'll gain much more than x32's 10% by having a few base libraries (libc & co.) compiled with support for all of your CPU's vector extensions. And that will be a lot less hassle to maintain than a whole separate set of arch libraries.
Don't expect a lot of x32 deployment in general-purpose, wide-audience distros (like Fedora, openSUSE, Ubuntu).
It will mostly be confined to Debian, Gentoo, etc. and used only by a small niche with very specific needs.
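The "rebuild with your CPU's vector extensions" idea mentioned above can be sketched as a build configuration (a hedged, illustrative fragment: the flags are standard GCC options, but the autotools-style library being rebuilt is hypothetical):

```shell
# Rebuild a hot library tuned for the build host's CPU.
# -march=native enables every instruction-set extension the host
# supports (SSE4, AVX, ...) instead of the generic x86-64 baseline.
CFLAGS="-O2 -march=native"
./configure CFLAGS="$CFLAGS"   # hypothetical autotools-style library
make
```

Note the trade-off: binaries built this way only run on CPUs at least as capable as the build host, which is exactly why distro packages stick to the generic baseline.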
Looking around a bit, it looks like Gentoo still doesn't have particularly good multilib support for x32, actually. It seems to cause build failures for people because the installation procedures make incorrect assumptions... That's a bummer.
Actually this is pretty cool. I don't know why so many people are complaining.
I, for instance, have a quite new laptop with an Intel Core i3, a decent AMD graphics card and 2GB of RAM. So what exactly do I need 64-bit for?
People argued that I should use 64-bit for the small performance gain (even though there are also some performance decreases, as already pointed out: larger pointers and thus a bigger memory footprint, etc.).
Now we have x32, which provides the performance of 64-bit and then some (smaller pointer size etc.), and the same people complain "that it is not worth the performance gain"!
C'mon, that's lame.
Every performance gain has a cost. The cost in this case is maintaining a whole third set of libraries on a system. That takes space, development hours, and QA hours, and the performance gains haven't been shown to be dramatic outside of a few synthetic benchmarks.
Originally Posted by Nuc!eoN
Yes, but 32-bit-only x86 hardware is becoming obsolete, and most Linux software is following suit. A distro could support only the handful of programs that must stay 64-bit (drivers and the ones I listed), then use x32 for the rest.
Originally Posted by locovaca
This sounds perfect for some old console emulators that seem to work best on x86 systems.
Maybe new hardware, but there are millions upon millions of existing systems that are not 64 bit and no distribution is going to drop x86 and alienate those users.
Originally Posted by snadrus
I remember reading that a significant niche could be in virtual machines. What if you're renting some Xen guest with 512MB of RAM, or anything under 3.5GB, with some "half a core" guarantee? Then that Linux server could just run pure x32 and you'd get a free performance gain, assuming all you had to do was choose a Debian x32 installer, or an x32 variant of Ubuntu 14.04, or something else. Even the provider would be happy with the small electricity savings, or with cramming one more VM onto the server.
Maybe ARMv8 will do the very same thing?
As for desktops, it'd be a pain in the ass (no nvidia x32 driver, no x32 Flash player, and what about just downloading and running a static build of some software?), so don't bother, unless maybe you can run a fully open source stack and install such a clean x32 system on an old computer.
I still run pure i686 on my desktop. (I have an x86 CPU, but "only" 3GB of RAM, up from 2GB. Also, if my computer dramatically fails I can still put the IDE hard drive in some piece-of-crap Pentium 4; or, if my PC doesn't fail, I can install onto a new hard drive and reuse the old OS in an old PC.)