Our Last Time Benchmarking Ubuntu 32-bit vs. 64-bit

  • #51
    Originally posted by QuImUfu View Post
    I hope they keep 32-bit support in Lubuntu for a while and do not artificially limit my Pentium 4 PC to old software, as Google Chrome did by no longer providing 32-bit builds (luckily, Firefox still provides 32-bit versions). In that case I'd sadly need to switch distro.
    You can at least use Chromium in 32-bit if you want the Chrome flavor of browser.



    • #52
      Originally posted by duby229 View Post

      I just want to point out that there is basically zero performance gain from those additional GPRs. Most 32-bit binaries are compiled for i686. Most 64-bit binaries at least have SSE2 optimizations, and that's exactly what you are seeing, I'm completely sure of it. A natively compiled 32-bit binary will perform very similarly to a natively compiled 64-bit binary. We live in an age where 16 GB of RAM is common and we are still stuck at 2 bytes' worth of GPRs... I think it's a damn shame. (And they can for damn sure no longer claim it's a transistor budget problem.)
      Utter nonsense. The additional GPRs exist ONLY in 64-bit mode, so they don't concern 32-bit binaries AT ALL (whether compiled for i686 or any other x86 target). On the other hand, SSE2 optimizations affect only floating-point operations, and no, "most" 64-bit binaries don't take advantage of them. While modern compilers have some auto-vectorizing capabilities, SIMD optimization still largely requires manual coding. This of course applies only to binaries that actually perform FPU operations, which things like the kernel, filesystem drivers, the network stack, compilers, libc etc. don't. All binaries, on the other hand, do benefit from the 8 extra GPRs.

      Incidentally, there is no such thing as "2 bytes' worth" of GPRs. For your information, these GPRs are not 1 bit each, they are 64 bits each, so, if you will, that makes 128 bytes (more in reality, since we are talking only about logical GPRs; Intel cores, for example, have 128 physical GPRs). It has absolutely nothing to do with how many GB of RAM you have. In fact, 16 GPRs is in many ways an "ideal" number for a CISC machine. 8 was far too few, especially with no native PIC support, but more would get into the territory where the disadvantages outweigh the benefits.
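
      To make that last point concrete, here is a minimal sketch (hypothetical file name regs.c) of the kind of code where the extra GPRs show up. Compiling the same function for both targets and diffing the assembly reveals the difference in spilling; the exact output naturally depends on the compiler version and how it reassociates the arithmetic:

      Code:
          /*
           * Compare the generated assembly:
           *   gcc -O2 -m32 -march=i686 -S regs.c -o regs32.s   (8 GPRs: expect spills to the stack)
           *   gcc -O2 -m64 -S regs.c -o regs64.s               (16 GPRs: temporaries can stay in r8d-r15d)
           */
          int mix(int a, int b, int c, int d, int e, int f, int g, int h)
          {
              /* Eight inputs plus eight temporaries are live at the same time,
               * more than the 8 architectural GPRs of 32-bit x86 can hold
               * (and two of those, %esp and %ebp, are usually spoken for). */
              int t0 = a * 3  + b;
              int t1 = b * 5  + c;
              int t2 = c * 7  + d;
              int t3 = d * 11 + e;
              int t4 = e * 13 + f;
              int t5 = f * 17 + g;
              int t6 = g * 19 + h;
              int t7 = h * 23 + a;
              return (t0 ^ t1) + (t2 ^ t3) + (t4 ^ t5) + (t6 ^ t7);
          }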



      • #53
        Originally posted by Adarion View Post
        While I agree on the matter that 32-bit-x86-only software is a bit of a burden (Steam, formerly browser plugins), there are still a lot of x86_32 machines out there, machines that are good enough for the job. And in some cases they are power-efficient enough to call it sustainable if these boards stay in service[*]. Use cases where 512 to 1024 MiB RAM is just enough. But of course we want to run a patched software stack on them!
        Moreover, in terms of performance differences: the question is how many optimizations the 32-bit Ubuntu build has seen, which compiler flags were used, etc., and how many went into the 64-bit Ubuntu. I highly doubt it's all down to the sheer 32/64 difference. In the beginning, when AMD introduced the amd64 arch, it was clear that it was 100% backward compatible (which was an awesome move) and that the performance difference should be marginal.

        Glad to be on Gentoo, again, where I have the choice to still run my older machines as well as my modern architectures with >4 GiB RAM.

        [*] E.g. the Geode series (Cyrix MediaGX, NS, AMD GX/LX), some VIA C3, C7... maybe Celerons, Atoms. I'm not talking about P4 heating plates.
        There are a lot of things a compiler can do at the same optimisation level with a lot more registers. I don't think the performance disparity is about optimisation flags. I think it's about a decade of 64-bit software finally taking proper advantage of 64-bit hardware, and 64-bit hardware improving to be better 64-bit hardware over that decade. If you had compared a 64-bit distro to a 32-bit distro 10 years ago, on that hardware, with those compilers, written by those authors, then yes, there was a much smaller performance delta. Actually, in the early days of x86-64 the best performance frequently came from running i686-optimised binaries on an x86_64 kernel, and there's still a handful of applications for which that's true, Apache being one of them.
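
        And one way to see the "sheer 32/64 difference" without touching a single optimisation flag is floating point (a sketch, hypothetical file name sum.c): the x86-64 ABI guarantees SSE2, so a stock 64-bit build does FP maths in SSE registers, while a stock i686 build falls back to the x87 stack FPU:

        Code:
            /*
             * Same source, same optimisation level, different default FP code:
             *   gcc -O2 -m32 -march=i686 -S sum.c   (x87 stack FPU: fldl/faddp)
             *   gcc -O2 -m64 -S sum.c               (scalar SSE2: addsd)
             * Vectorising the reduction into packed addpd would additionally
             * need something like -O3 -ffast-math, which echoes the earlier
             * point that SIMD rarely comes for free.
             */
            double sum(const double *v, int n)
            {
                double s = 0.0;
                for (int i = 0; i < n; i++)
                    s += v[i];
                return s;
            }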



        • #54
          For all the people who are upset about the distros managing 32-bit and 64-bit repos, the impact on the quality of the distros, the wasted resources, the waste of volunteers' quality of life... I have this to add:

          Running 32-bit on 64-bit sucks. I think we shouldn't merely drop 32-bit; I think we should drop support for linux32 within x86_64 arch distros. Instead we should write a transpiler from x86 to x64 so that we can run those 32-bit binaries natively against the same 64-bit shared libraries that are already loaded and managed properly by our package managers.

          If I never again have to run ldd to figure out what 32-bit libraries are missing, then google each one to see what package it's in, then download them from different distros because they're too old to be in the current one, then run alien to create local packages to install, blah blah blah... well, that will be the day I say dpkg has finally become friendlier than Windows DLL Hell, and the whole linux-is-better-than-windows-because-we-have-copy-on-write-shared-executables/libraries mantra will finally be true. At the moment it is a half-truth, because we have such a fragmented stack that even 2 programs using the same library can be running 5 different versions of it (I'm looking at you, FFmpeg), aka no copy-on-write-shared-anything.
          Last edited by linuxgeex; 02 October 2017, 08:51 AM.



          • #55
            Originally posted by sdack View Post
            No, you don't understand what 32-bit is. I understand what it is. It's an addressing mode: it describes the amount of memory a CPU can address as well as the width of its registers. And that's all. It has nothing to do with light bulbs whatsoever. Once you get that right, it will no longer be a question of intelligence for you; you will have intelligence.
            So you don't understand. Now let me try to explain the fact: 32-bit is dead.



            • #56
              I wonder, are there any 32-bit-only x86 CPUs still available? When were they last available? I know Intel was making some 32-bit-only Atoms until, what, 2014? There's some embedded company making 32-bit-only Vortex86 x86 SOCs; are they still alive? Is AMD still making any 32-bit-only parts? AFAIK all VIA offerings are 64-bit by now and have been for a while.

              For what it's worth, I support Ubuntu dropping 32-bit support. Anything still using 32-bit x86 CPUs is either specialized/embedded hardware, or extremely old PCs that could be replaced by 50% of a Raspberry Pi. Specialized hardware doesn't need to run the latest Ubuntu. People using PCs that old can easily replace them with a Pi or similar.



              • #57
                Originally posted by Azrael5 View Post
                So you don't understand. Now let me try to explain the fact: 32-bit is dead.
                Nothing's dead here. You need to shut up.
                Last edited by sdack; 02 October 2017, 08:49 AM.



                • #58
                  Originally posted by coder111 View Post
                  I wonder, are there any 32-bit-only x86 CPUs still available? When were they last available? I know Intel was making some 32-bit-only Atoms until, what, 2014? There's some embedded company making 32-bit-only Vortex86 x86 SOCs; are they still alive? Is AMD still making any 32-bit-only parts? AFAIK all VIA offerings are 64-bit by now and have been for a while.

                  For what it's worth, I support Ubuntu dropping 32-bit support. Anything still using 32-bit x86 CPUs is either specialized/embedded hardware, or extremely old PCs that could be replaced by 50% of a Raspberry Pi. Specialized hardware doesn't need to run the latest Ubuntu. People using PCs that old can easily replace them with a Pi or similar.
                  For starters, Intel is still making new x86_32 platforms, various new SOC boards are coming out, and Microsoft has started supporting x86 SOC boards with IoT Core; well, at least the MinnowBoard for now.

                  I really have no issue with Debian and Ubuntu dropping 32-bit. Their mission statements are about producing the most usable free software platform, not the most compatible or the most archaic. That's up to BSD, lol. There will always be embedded and micro distros thriving on older hardware, e.g. Porteus, Alpine, DSL, SliTaz, Puppy, eLive. Some of these are based on Debian or Ubuntu and may end up sharing a forked repo of Debian with a smaller package library in the long run, but they're not all going to just disappear. It may be challenging to keep getting your hands on a fully functional and patched/updated version of Firefox or Google Chrome for a 32-bit Linux distro in another 10 years, though...
                  Last edited by linuxgeex; 02 October 2017, 09:15 AM.



                  • #59
                    Originally posted by jacob View Post

                    Utter nonsense. The additional GPRs exist ONLY in 64-bit mode, so they don't concern 32-bit binaries AT ALL (whether compiled for i686 or any other x86 target). On the other hand, SSE2 optimizations affect only floating-point operations, and no, "most" 64-bit binaries don't take advantage of them. While modern compilers have some auto-vectorizing capabilities, SIMD optimization still largely requires manual coding. This of course applies only to binaries that actually perform FPU operations, which things like the kernel, filesystem drivers, the network stack, compilers, libc etc. don't. All binaries, on the other hand, do benefit from the 8 extra GPRs.

                    Incidentally, there is no such thing as "2 bytes' worth" of GPRs. For your information, these GPRs are not 1 bit each, they are 64 bits each, so, if you will, that makes 128 bytes (more in reality, since we are talking only about logical GPRs; Intel cores, for example, have 128 physical GPRs). It has absolutely nothing to do with how many GB of RAM you have. In fact, 16 GPRs is in many ways an "ideal" number for a CISC machine. 8 was far too few, especially with no native PIC support, but more would get into the territory where the disadvantages outweigh the benefits.
                    Let me ask your opinion on something: how many people are still coding in assembly? And how many of them -should- be? It's true that I misunderstand some technical concepts, and I'll admit that. But there is no possible way in any hell that you could ever convince me that 16 GPRs is enough. Especially when we live in an age with many GBs of RAM and -modern compilers-....

                    EDIT: Saying 16 GPRs is enough is exactly like saying 640K ought to be enough. It was a joke then just as much as it is now. 16 GPRs is only enough when you have to manually comprehend the assembly you wrote. And chances are you shouldn't have written it anyway.
                    Last edited by duby229; 02 October 2017, 11:59 AM.



                    • #60
                      Originally posted by linuxgeex View Post
                      For all the people who are upset about the distros managing 32-bit and 64-bit repos, the impact on the quality of the distros, the wasted resources, the waste of volunteers' quality of life... I have this to add:

                      Running 32-bit on 64-bit sucks. I think we shouldn't merely drop 32-bit; I think we should drop support for linux32 within x86_64 arch distros. Instead we should write a transpiler from x86 to x64 so that we can run those 32-bit binaries natively against the same 64-bit shared libraries that are already loaded and managed properly by our package managers.

                      If I never again have to run ldd to figure out what 32-bit libraries are missing, then google each one to see what package it's in, then download them from different distros because they're too old to be in the current one, then run alien to create local packages to install, blah blah blah... well, that will be the day I say dpkg has finally become friendlier than Windows DLL Hell, and the whole linux-is-better-than-windows-because-we-have-copy-on-write-shared-executables/libraries mantra will finally be true. At the moment it is a half-truth, because we have such a fragmented stack that even 2 programs using the same library can be running 5 different versions of it (I'm looking at you, FFmpeg), aka no copy-on-write-shared-anything.
                      You just described WoW64. It's not what you think it is.

