Ubuntu 12.10: 32-bit vs. 64-bit Linux Performance

  • Ubuntu 12.10: 32-bit vs. 64-bit Linux Performance

    Phoronix: Ubuntu 12.10: 32-bit vs. 64-bit Linux Performance

    In past years on Phoronix there has been no shortage of 32-bit vs. 64-bit Linux benchmarks. Assuming you aren't working with a limited amount of RAM and under memory pressure, 64-bit distributions tend to be much faster than the 32-bit versions. However, some Linux users still wonder whether they should use the 32-bit or 64-bit version of their distribution, even on 64-bit hardware. So with that said, here are some more 32-bit vs. 64-bit benchmarks of Ubuntu 12.10 with the Linux 3.5 kernel.


  • #2
    Hi Michael, I just thought of an improvement to your graphs: instead of just writing "More is Better/Less is Better", you could make the "winning" graph a slightly different color, for instance brighter and more saturated. That would give the reader instant feedback on which one won a test (when the exact numbers aren't as important).



    • #3
      Is it theoretically possible for 32-bit to be superior to 64-bit for some workloads, and if so, what kinds? Or, if not, should we file bugs on software packages whose performance regresses when compiled for 64-bit?



      • #4
        64-bit in itself will only help an algorithm if data structures wider than 32 bits are used, or could be used, to increase performance.
        This is especially true for applications that need more than 4 GB of RAM (forget PAE, it is slower).

        Now what happened in the x86 world is that with the introduction of AMD64 (the x86-64 extension) the registers did not only get wider but also doubled in number:
        both the integer registers and the SIMD registers went from 8 to 16. The latter have recently even doubled in width again (AVX brings 256-bit registers instead of 128-bit).
        Apart from that, 64-bit mode offers new ways of addressing code relative to the instruction pointer (RIP-relative addressing).
        And maybe more I'm not aware of.

        Also there have been loads of additions to the x86 instruction set since AMD64 was introduced in 2003 (Intel implemented it a bit later). I don't know if some of these additions can only be used in 64-bit mode. If so, they add to the potential performance gain for 64-bit software.

        But if your algorithm just cannot be improved via wider registers, more registers or other voodoo, it is probably faster on a 32-bit CPU. This is because full 64-bit operations probably take more time to compute (transistor latency), and packing 32-bit (and often much smaller) numbers so that they do not waste 80% or so, on average, of the available 64-bit registers/computational units is pretty slow, if it is even attempted.

        Therefore I conclude that, if programmed wisely or at least put through a great compiler, most software should be faster on a modern 64-bit CPU. But there are algorithms that just don't scale. Maybe those could be exchanged for better ones, maybe not.
        Filing a bug could make the programmer think about optimizations. But don't spam with those ;-)
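
        To make the register-width point concrete, here is a rough C sketch (just an illustration, not something from the article's benchmarks): every step of this 64-bit FNV-1a hash operates on a uint64_t, so an x86-64 build can do each XOR and multiply in a single 64-bit register, while a 32-bit build has to emulate them with pairs of 32-bit registers.

            #include <stdint.h>
            #include <stddef.h>

            /* 64-bit FNV-1a hash: every operation works on a uint64_t value.
             * Compare the assembly from "gcc -O2 -m64 -S" and "gcc -O2 -m32 -S"
             * to see the 32-bit build emulating the 64-bit multiply and XOR. */
            uint64_t fnv1a64(const unsigned char *data, size_t len)
            {
                uint64_t hash = 0xcbf29ce484222325ULL;   /* FNV offset basis */
                for (size_t i = 0; i < len; i++) {
                    hash ^= data[i];
                    hash *= 0x100000001b3ULL;            /* FNV prime */
                }
                return hash;
            }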



        • #5
          Originally posted by enteon View Post

          Dear enteon,

          Very nice post with great info. Love it.

          My concern is that we (especially I) develop and write applications in high-level languages,
          so I generally don't concern myself with the number of registers, integer sizes, etc.
          Isn't that the concern of the compiler I am using?
          Shouldn't it optimize the binary for me, for both 32-bit and 64-bit?



          • #6
            Yeah sure, 64-bit is better than 32-bit in some respects... but the opposite is also true.

            In the end I will always prefer 32-bit because of compatibility issues with games... I had a bad enough experience with 64-bit in the recent past not to consider it again any time soon...



            • #7
              Originally posted by ayandon View Post
              My concern is that we (especially I) develop and write applications in high-level languages,
              so I generally don't concern myself with the number of registers, integer sizes, etc.
              Isn't that the concern of the compiler I am using?
              Shouldn't it optimize the binary for me, for both 32-bit and 64-bit?
              I'm no expert on this, but I'd say it depends on your language and the time available. In Java you have almost no chance of optimizing for different hardware; C, on the other hand, allows you to do that. Both use compilers that are supposed to optimize. But creating a good compiler is one of the most difficult things a programmer can do. Therefore you may not want to rely on the compiler for optimization, if you have the time and the time invested is less than the runtime saved.

              But there are still algorithms that no compiler can optimize. It's your responsibility to choose the best algorithm for the data you are to compute. No compiler can ever do that (perfectly).
              This includes (I believe, but I don't have experience with it) the use of x86 extensions, SSE units, AES units, even GPGPU (not x86, of course). You have to use these features explicitly; no compiler will add them automagically.

              But, just like good software engineering, pushing such things through management may be hard.
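
              For what it's worth, explicit use looks roughly like this: a minimal sketch in C using the standard SSE intrinsics header (the function name and the alignment/size assumptions are just for the example), adding two float arrays four elements at a time.

                  #include <xmmintrin.h>   /* SSE intrinsics */

                  /* Add two float arrays four elements at a time with explicit SSE.
                   * Assumes n is a multiple of 4 and the pointers are 16-byte aligned,
                   * purely to keep the sketch short. */
                  void add_arrays_sse(float *dst, const float *a, const float *b, int n)
                  {
                      for (int i = 0; i < n; i += 4) {
                          __m128 va = _mm_load_ps(&a[i]);
                          __m128 vb = _mm_load_ps(&b[i]);
                          _mm_store_ps(&dst[i], _mm_add_ps(va, vb));
                      }
                  }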



              Originally posted by AJSB View Post
              In the end I will always prefer 32-bit because of compatibility issues with games... I had a bad enough experience with 64-bit in the recent past not to consider it again any time soon...
              Of course, if your application can't do 64-bit then all hope is lost. But blame the producer, not the CPU.



              • #8
                Originally posted by AJSB View Post
                Yeah sure, 64-bit is better than 32-bit in some respects... but the opposite is also true.

                In the end I will always prefer 32-bit because of compatibility issues with games... I had a bad enough experience with 64-bit in the recent past not to consider it again any time soon...
                This has nothing to do with 32-bit vs. 64-bit but with your distro: x86_64 CPUs can natively execute 32-bit code, and the rest is your distro's job to get right.

                Originally posted by enteon View Post
                I'm no expert on this, but I'd say it depends on your language and the time available. In Java you have almost no chance of optimizing for different hardware; C, on the other hand, allows you to do that. Both use compilers that are supposed to optimize. But creating a good compiler is one of the most difficult things a programmer can do. Therefore you may not want to rely on the compiler for optimization, if you have the time and the time invested is less than the runtime saved.

                But there are still algorithms that no compiler can optimize. It's your responsibility to choose the best algorithm for the data you are to compute. No compiler can ever do that (perfectly).
                A compiler's job is not to replace your bad algorithm with a good one but to create good machine code.

                Originally posted by enteon View Post

                This includes (I believe, but I don't have experience with it) the use of x86 extensions, SSE units, AES units, even GPGPU (not x86, of course). You have to use these features explicitly; no compiler will add them automagically.
                Not quite true either... compilers do very well make use of SIMD extensions when producing code (auto-vectorization), and for x86_64 they are free to use SSE2 because it is part of the ABI (it runs on all x86_64 CPUs). GPGPU is a different story.

                As for languages like Java, JavaScript and .NET, it is the JIT compiler's job to produce efficient code. When working with such languages, just make sure that your code uses sane algorithms... you are not supposed to worry about machine-level stuff when using them.
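
                A small illustration of the auto-vectorization point (a sketch, assuming a reasonably recent GCC, nothing specific to these benchmarks): this plain scalar loop contains no intrinsics, yet building it for x86_64 with -O3 typically lets GCC vectorize it using the baseline SSE instructions (no extra flags needed, since SSE/SSE2 are part of the x86_64 ABI); a 32-bit i386 build would first have to opt in with something like -msse/-msse2.

                    /* saxpy.c: plain scalar C, no intrinsics.
                     * Build with "gcc -O3 -S saxpy.c" on x86_64 and inspect the
                     * assembly; GCC will typically auto-vectorize the loop with
                     * packed SSE instructions. */
                    void saxpy(float *restrict y, const float *restrict x, float a, int n)
                    {
                        for (int i = 0; i < n; i++)
                            y[i] = a * x[i] + y[i];
                    }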



                • #9
                  Well, thanks for supporting my conclusions (and for clarifying the automatic SIMDification).



                  • #10
                    Reading up on the X32 ABI gives you another data point: X32 exposes the AMD64 architectural extensions (e.g. more registers for the compiler to use) while staying with 32-bit pointers in most cases. If you don't need an address space that requires >32-bit pointers, it gives you the "best of both worlds".
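
                    A quick way to see the difference, assuming a toolchain and kernel with x32 support (the file name is arbitrary): the same trivial program reports 4-byte pointers when built with -m32 or -mx32 and 8-byte pointers with -m64, while the -mx32 build still gets the extra AMD64 registers.

                        /* abi_check.c */
                        #include <stdio.h>

                        int main(void)
                        {
                            /* gcc -m32  abi_check.c  -> sizeof(void*) = 4 (i386)
                             * gcc -mx32 abi_check.c  -> sizeof(void*) = 4 (x32, AMD64 registers)
                             * gcc -m64  abi_check.c  -> sizeof(void*) = 8 (x86_64) */
                            printf("sizeof(void*) = %zu, sizeof(long) = %zu\n",
                                   sizeof(void *), sizeof(long));
                            return 0;
                        }
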
                    Last edited by bridgman; 14 October 2012, 11:33 AM.

