improved greatly now. However, it's still mostly in a broken and unusable state for the things the readers of this forum would require. One big problem is that yasm doesn't yet produce x32 object files. This breaks most of the things that require an assembler for tuned code; think video codecs and media encoders/decoders. Other, more mundane projects have not adapted to the new format yet either. Gentoo is a rolling release, so I have observed its progress in my x32 chroot image, and it seems to keep improving, but I have not noticed any dramatic space savings (there are some) or relevant benchmarks (the things I would like to bench don't work yet). I suspect it will mostly be relevant to the embedded space for some time. However, x32 binaries can run alongside the already co-existing 32bit x86 and 64bit native applications, with the main cost being the space to store their own set of libraries. If you had to choose between an x32 application and an x86 one, I don't see why you would not choose x32, as long as it works. The question now is how long, if ever, it will take for x32 to be made to work...
I guess there is demand for different versions:
a) Limited hardware
There are still 32bit CPUs around (old machines, various ARM architectures, other special hardware).
Those DO REQUIRE 32bit code.
b) Heavy load 64bit applications
There are some 64bit applications (CAD, database, video editing, some games, servers, ....) which can really benefit from the 64bit setup. Hence they should or even MUST RUN in 64bit mode.
c) Rest and/or many desktop use cases
A large part of installations does not really benefit from 64bit pointers. They could, however, benefit from a large performance gain through the new extended instruction set.
For those, an enhanced / optimized 32bit system (x32 ABI) should deliver the best performance.
In the (hopefully near) future, when x32 has settled, there might be an intelligent installer:
When somebody chooses a 32bit installation on a 64bit architecture, the system automatically selects the x32 ABI version for best performance.
If there is any benefit at all in x32, outside a select few synthetic benchmarks and bandwidth-starved mobile architectures.
Various tests show that compiling kernels, heavy en- and decoding of media files, heavy video editing, large database applications and some games do actually benefit from the full 64bit setup.
Interestingly, those are the tests that are usually performed in benchmarks. (Also look at the Phoronix tests.)
On the other hand, tests have shown that many applications used in desktop environments (office apps, mail programs, browsers, etc., when not heavily utilizing x86_64 extensions or memory intensive) usually run faster in 32bit than in 64bit.
So there are many use cases where a 64bit system makes more sense and should therefore be chosen
(preferred over a 32bit version)!
On the other hand, there are also many use cases which should actually benefit from the reduced footprint of 32bit pointers (when allowed to use the new x86_64 instruction sets) and hence be chosen (preferred over a 64bit version)!
My point is: It depends on the use case!
However, since there are no testable x32 systems available yet, one has to wait until there are.
I hope Phoronix will then run benchmarks for different use cases,
so one can see which system makes more sense for which use case.
Last edited by rgloor; 10-15-2012 at 07:25 AM.
x32 fanatics should read this...
"Doubts" means... doubts, not something you should start from to declare something a myth.
"The new x32 ABI has proven to be faster." Not really; what we have right now are a few benchmarks, published by those who actually created the ABI. Of course you’d expect that those who spent time to set it up found it interesting and actually faster, but I honestly have doubts about the results.
Still, saying that the approach may be wrong doesn't imply that the approach is a myth.
"From one side it is theoretically correct that you’re going to have smaller data structures, which means you can make better use of the data cache (not of the instruction cache, be sure!) — but is this the correct approach?"
This was libc; note that x32 loses only in the exec field.
Code:
exec     data  rodata  relro  bss    overhead  allocated  filename
1239436  7456  341974  13056  17784  94924     1714630    /lib/libc.so.6
1259721  4560  316187  6896   12884  87782     1688030    x32/libc.so.6
I have read the article, as well as the comments.
Have you read the comments/discussion as well? Especially toward the end!
From what I read in various sources, there is a legitimate chance that x32 might be beneficial in some use cases. And my statement is:
- One has to wait for the real deal: A released distribution with x32 properly implemented (might be only at the 2nd iteration).
- Then run several benchmarks for several use cases.
And only then will we see which version is better suited for which use case. Period.
If that is fanaticism by your definition, then you may call me a fanatic.
The data is also coming from a synthetic test, not from actual overall system usage, and if you have any clue about benchmarks you know that the numbers can easily lie through their teeth!
And in your benchmark:
x86-64 : 114 %
x32 : 117 %
Awesome. A new ABI for that?
x32 is 9 years too late. For desktops or servers nobody will use x32 because x86-64 is already here.
And you forgot that proprietary vendors (Nvidia, AMD, Adobe, Microsoft) will certainly never support x32 because it's a waste of time.