32-bit vs. 64-bit Ubuntu 13.04 Linux Performance

• #21
Originally posted by dalingrin:
This is true; however, 64-bit ops require explicit use, and they are rarely used beyond cryptography applications.
I believe cryptography and some scientific workloads make heavy use of 64-bit adds and will benefit greatly from a 64-bit CPU/ALU. I also believe some ray-tracing apps use 64-bit instructions (correct me if I'm wrong).
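To illustrate what I mean, here's a minimal C sketch (my own example, not from the article or any benchmark): a 128-bit add of the kind crypto code performs constantly. Built for x86-64 this boils down to roughly an ADD/ADC pair, while an i386 build has to juggle four 32-bit limbs with carry propagation:

```c
#include <stdint.h>
#include <stdio.h>

/* Add two 128-bit numbers stored as (hi, lo) pairs of 64-bit limbs.
 * On a 64-bit CPU this compiles to roughly one ADD and one ADC;
 * a 32-bit build must do the same work across four 32-bit limbs. */
static void add128(uint64_t ahi, uint64_t alo,
                   uint64_t bhi, uint64_t blo,
                   uint64_t *rhi, uint64_t *rlo)
{
    uint64_t lo = alo + blo;
    uint64_t carry = (lo < alo);   /* unsigned wraparound = carry out */
    *rlo = lo;
    *rhi = ahi + bhi + carry;
}

int main(void)
{
    uint64_t hi, lo;
    add128(0x1, 0xFFFFFFFFFFFFFFFFull, 0x0, 0x1, &hi, &lo);
    printf("0x%016llx%016llx\n",
           (unsigned long long)hi, (unsigned long long)lo);
    return 0;
}
```

Build it with gcc -m64 and -m32 and compare the output of gcc -S to see the instruction-count difference for yourself.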



• #22
Originally posted by kobblestown:
Mike, on Linux you really want to use a 64-bit kernel even with physical RAM of 1GB (or larger, of course).
I must say that, at work, I even use 64-bit for virtual machines with 400-512 MB of RAM.

Originally posted by kobblestown:
Due to the way Linux partitions the virtual memory space, there's only 960MB in the kernel-reserved area to map physical memory.
I'm pretty sure it's the same for Windows, since I guess this is an x86 limitation.

Originally posted by schmidtbag:
That only happens if you use a discrete GPU. With an integrated GPU, sure, you could still get less than 4GB of usable system memory, but you don't actually lose access to the memory; it's just redistributed.
I have a PC (at work, still) with an Intel board (an i815 or 915 chipset, I don't remember) running 64-bit CentOS 6. The motherboard completely loses the top 768MB of the 4GB; CentOS only sees 3.2GB. I have checked the board's manual to be sure, and the POST indicates this too. And there is no discrete GPU; the IGP only has 8 or 16MB allocated to it.

Originally posted by vk512:
[...] now stay with 32-bit on my laptops. On the whole the fact that it consumes less RAM [...]
For sure! Since 32 bits limits you to 3 GB per process anyway (with Linux's default 3G/1G split).
Last edited by whitecat; 25 April 2013, 01:13 PM.



• #23
You need support for memory remapping to use the full 4 GB of RAM or more. If your BIOS does not enable it by default, look for an option or check whether a BIOS update solves the issue. Even when you don't need the speed bonus for some apps, you need 64-bit in order to boot via UEFI. Technically you can start a 32-bit kernel with a 64-bit UEFI as well, but with my boards I lost SMP and ACPI support - not even poweroff worked then. So better to boot only 64-bit kernels with UEFI.
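If you want to check from a running system how much RAM the kernel actually sees, something like this small C sketch (just an illustration, mine) reads MemTotal from /proc/meminfo; a large gap versus the installed amount points at exactly the remapping problem above:

```c
#include <stdio.h>
#include <string.h>

/* Print the kernel's view of total RAM from /proc/meminfo.
 * If this is well below the installed amount, the "missing"
 * RAM is likely stuck under the PCI hole without being remapped. */
int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[128];

    if (!f) { perror("/proc/meminfo"); return 1; }
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "MemTotal:", 9) == 0) {
            fputs(line, stdout);
            break;
        }
    }
    fclose(f);
    return 0;
}
```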



• #24
I don't bother to think too much about the benefits of 32-bit vs. 64-bit myself, actually. To me, the rules are simple:

Hardware supports x64 and drivers are ported over? Use a 64-bit OS.
Hardware supports x64 but drivers are not ported over? Use a 32-bit OS.
Hardware supports x64 and drivers are ported over, but you need to run 32-bit applications? Use a 64-bit OS and install the 32-bit libraries.
Mainboard firmware uses any UEFI implementation? Definitely use a 64-bit OS (a quick check for this follows below).
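For that last rule, an easy way to tell whether the running kernel came up via UEFI is to look for /sys/firmware/efi. A tiny sketch:

```c
#include <stdio.h>
#include <unistd.h>

/* If /sys/firmware/efi exists, the kernel was started through
 * UEFI firmware; otherwise it came up via legacy BIOS boot. */
int main(void)
{
    if (access("/sys/firmware/efi", F_OK) == 0)
        puts("booted via UEFI -> use a 64-bit OS");
    else
        puts("legacy BIOS boot");
    return 0;
}
```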



• #25
Originally posted by Kano:
You need support for memory remapping to use the full 4 GB of RAM or more. If your BIOS does not enable it by default, look for an option or check whether a BIOS update solves the issue.
I think I have seen something like this. I will toggle it and try!



• #26
Originally posted by whitecat:
I'm pretty sure it's the same for Windows, since I guess this is an x86 limitation.
I've read somewhere that Windows is different in that respect, but I don't remember the details. It's much clearer for Linux, where it's essentially a performance optimization. When userland calls into the kernel, the kernel might need to access something in physical memory. Because of that, the approach chosen in Linux is to map physical memory into the kernel-reserved region of each process's virtual address space, so the kernel has access to physical memory without a switch to some kernel-specific virtual address space. The problem is that the kernel has to share the address space with userland. This is known as the memory split. The default 3G/1G split leaves 3GB to user processes and 1GB to the kernel. The kernel needs some memory for its own working, which is subtracted from that 1GB to arrive at 960MB (IIRC) of mappable physical memory.

However, the kernel can be compiled with a 2G/2G split. Then there's more space for mapping physical memory in the kernel region - 960 + 1024 = 1984MB - but consequently less space for the user process. I think this is a better option for 32-bit desktop systems, but I'm not aware of any distribution that ships with such a kernel; at least, I think it needs a specially compiled kernel. Windows has this as a boot-time parameter, but maybe it selects a different kernel to boot, like it does for PAE.

There's an important point to make: such a memory split is not really mandatory. Some time ago there were kernel patches (I think they never made it into the mainline kernel) that implemented a 4G/4G split - but a different kind of split. The idea there is to use distinct virtual memory spaces for userland and the kernel. User processes can use the entire 4GB allowed by the i386 architecture, and when a call into the kernel is made, the processor switches to the separate kernel virtual memory space. The problem is that this invalidates all cached page-address translations and flushes the processor's TLBs, which is a big hit on performance - but it allows almost 4GB of physical memory to be mapped in the kernel address space. It's a compromise whose outcome depends on the workload, although in most cases the 3G/1G split seems to win. I think recent Intel processors tag TLB entries with a process identifier, so the TLB flush is no longer necessary; that means a 4G/4G split should be cheaper to implement today. But why bother? Amd64 solves all these problems rather neatly, while bringing better performance into the mix.
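To make the split visible, here's a rough C probe (my own sketch, not anything definitive): it reserves PROT_NONE address space until mmap fails. On an i386 kernel with the default split it stops near 3GB no matter how much RAM is installed; on amd64 (or for a 32-bit process under a 64-bit kernel) the limit simply isn't there, so the probe caps itself:

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

/* Probe how much virtual address space one process can reserve.
 * PROT_NONE reservations consume no RAM, only address space.
 * On an i386 kernel with the default 3G/1G split this stops near
 * 3GB; with a 2G/2G split, near 2GB. On amd64 it would keep going
 * for a very long time, so we stop at 16GB to prove the point. */
int main(void)
{
    const size_t chunk_mb = 64;          /* 64 MB per reservation */
    const size_t cap_mb = 16 * 1024;     /* bail out at 16 GB     */
    size_t total_mb = 0;

    while (total_mb < cap_mb) {
        void *p = mmap(NULL, chunk_mb << 20, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            break;
        total_mb += chunk_mb;
    }
    printf("reserved %zu MB of address space%s\n",
           total_mb, total_mb >= cap_mb ? " (capped)" : "");
    return 0;
}
```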



• #27
Are we really still having this argument?

Look, this argument is such a dead horse. The sentence was handed down in 2003, the horse was dead by 2006, and even the *worms* should be done with it in 2013.

If you're not totally pressed for RAM (like, 512MB or less), then 64-bit is the way to go. Not because of any dramatic performance difference one way or another, but because it's a LOT 'cleaner' of an architecture.

That said, there are other benchmarks that show dramatic improvements on x64, like OpenSSL, certain file systems, and some compression algorithms.

32-bit software has worked well on 64-bit Linux for a LONG time. At the very least, people should be using 64-bit kernels with a 32-bit userland, just to avoid the messy legacy memory model. I'm looking forward to x32, because then there will really be no excuses, and the few cases where 64 bits are bloaty overkill will be gone.
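For anyone who hasn't played with x32 yet, here's a trivial sketch of where the "bloat" comes from: build the same source with gcc's -m32, -mx32, and -m64 flags and compare. The first two give 4-byte pointers and longs (so less cache pressure), but -mx32 still gets the full amd64 register set and instruction set:

```c
#include <stdio.h>

/* Pointer and long width is what bloats 64-bit data structures.
 * gcc -m32  prog.c  -> 4-byte pointers, i386 ISA
 * gcc -mx32 prog.c  -> 4-byte pointers, amd64 ISA (x32 ABI)
 * gcc -m64  prog.c  -> 8-byte pointers, amd64 ISA            */
int main(void)
{
    printf("sizeof(void *) = %zu, sizeof(long) = %zu\n",
           sizeof(void *), sizeof(long));
    return 0;
}
```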
