Ubuntu 17.10 Will Drop The 32-bit Desktop ISO

  • monnier
    replied
    Originally posted by sa666666 View Post
    Oh God, no. Let's just make a move to 64-bit entirely, and just drop the 32-bit stuff. What you're suggesting is a sideways move at best, not an upgrade
    For many, perhaps most, users there is no visible difference between running a 64-bit and a 32-bit system. The kernel benefits from being 64-bit because it may have to manage more than 4 GB of RAM, but most applications don't use more than 2 GB of RAM, in which case the user will most likely be unaffected by the difference. Admittedly, for some applications the amd64 instruction set is slightly more efficient (mostly thanks to its 16 general-purpose registers), but for other applications the i386 instruction set is slightly more efficient (pointers take up half the space, so many more of them fit in the cache).
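
    To make the cache argument concrete, here is a minimal C sketch (the sizes in the comments are typical for GCC on Linux, not guaranteed by the C standard):

    Code:
    /* A pointer-heavy node is roughly half the size under ILP32
     * (i386/x32) compared to LP64 (amd64). */
    #include <stdio.h>

    struct node {
        struct node *left;
        struct node *right;
        int          key;
    };

    int main(void) {
        /* i386/x32: 4-byte pointers -> sizeof(struct node) == 12.
         * amd64:    8-byte pointers -> 8 + 8 + 4, padded to 24.
         * A 64-byte cache line holds 5 nodes on ILP32 but only 2 on LP64. */
        printf("sizeof(void *)      = %zu\n", sizeof(void *));
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }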

    Most of my machines have been following Debian testing for the last 14 years and are hence still on i386 (though with an amd64 kernel), because that was my only option back then. I do have a machine running the amd64 version of Debian testing, but it's not as if I could tell the difference.

    The only application I use that ever gets anywhere near the 2 GB limit is Firefox, and by the time it gets that big it is dog slow anyway.

    Sort of like upgrading from IPv4 to v6, and people keep tacking stuff onto v4, or watering down v6. Can we just make a move already? This extreme backwards compatibility is making software development much more complicated than it has to be.
    Most code written nowadays couldn't care less whether it's running on amd64, i386, armhf, aarch64, mipsel, or any other architecture. Dropping i386 won't make much difference, if any, so it's a very different situation from the IPv4/IPv6 transition. Do you have concrete data to justify your claim of "making software development much more complicated"? AFAIK, the only real cost for Ubuntu is the size of the repositories, the extra build infrastructure, the added testing, etc.
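
    For example, code written against size_t and the <stdint.h> fixed-width types is architecture-agnostic by construction; a trivial sketch:

    Code:
    /* The same source builds and behaves identically on amd64, i386,
     * armhf, aarch64, mipsel, ... */
    #include <stdint.h>
    #include <stddef.h>

    uint32_t checksum(const uint8_t *buf, size_t len) {
        uint32_t sum = 0;                 /* exactly 32 bits everywhere */
        for (size_t i = 0; i < len; i++)  /* size_t matches the pointer width */
            sum = (sum << 1) ^ buf[i];
        return sum;
    }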

    Don't get me wrong: I think Ubuntu's decision makes a lot of sense. It spares their users a choice that is no longer needed: unless a user really knows she needs i386, she won't suffer from choosing amd64.

  • bastiaaw
    replied
    Hmmm... this would be a reason for me to abandon Ubuntu for the *second time*. The first time was the Mir enforcement, so I went back to Gentoo on my personal machine; compiling hasn't taken too long these last couple of years.

    I do use my old computers, though, for my kids to play with: games, internet, homework, etc. I see no reason to stop using them while they still work. Ubuntu GNOME is what I run on them. Yes, bloated games don't run perfectly, but that's the whole point: I don't want my kids playing them.

    Something else: what about third-world countries? Do you think they all have 64-bit computers?

  • AndyChow
    replied
    Originally posted by chithanh View Post
    I don't quite understand your request. Of course 32-bit (ILP32) is both theoretically and practically faster than 64-bit (LP64) in pointer-heavy code and in memory-bandwidth-limited situations [1]. And x86-64 is not the only architecture with such an ABI: ARM has made a 32-bit variant of its 64-bit architecture (aarch64-ilp32) [2], and it shows many of the same performance characteristics as x32 vs. x86-64.

    [1] N. Rauschmayr et al., Evaluation of x32-ABI in the Context of LHC Applications, ICCS 2013 https://doi.org/10.1016/j.procs.2013.05.394
    [2] ILP32 for Aarch64 Whitepaper https://static.docs.arm.com/dai0490/...whitepaper.pdf
    Yes, you are correct. Thank you, I learned something today.

  • chithanh
    replied
    Originally posted by AndyChow View Post
    I'm going to gently challenge you on this. Could you show me a single case where 32-bit userspace is faster than 64-bit userspace, not in practice, but theoretically? 64-bit might take more memory space, as in "used RAM", but I'm not aware of any situation where it's slower on a theoretical basis.
    I don't quite understand your request. Of course 32-bit (ILP32) is both theoretically and practically faster than 64-bit (LP64) in pointer-heavy code and in memory-bandwidth-limited situations [1]. And x86-64 is not the only architecture with such an ABI: ARM has made a 32-bit variant of its 64-bit architecture (aarch64-ilp32) [2], and it shows many of the same performance characteristics as x32 vs. x86-64.

    [1] N. Rauschmayr et al., Evaluation of x32-ABI in the Context of LHC Applications, ICCS 2013 https://doi.org/10.1016/j.procs.2013.05.394
    [2] ILP32 for Aarch64 Whitepaper https://static.docs.arm.com/dai0490/...whitepaper.pdf
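
    For anyone who wants to try it, a rough pointer-chasing microbenchmark in the spirit of [1] can be built from one source for both ABIs (assuming a multilib GCC and x32 runtime support; the node count is arbitrary):

    Code:
    /* Build twice and compare:
     *   gcc -O2 -m64  chase.c -o chase64    (LP64, 8-byte pointers)
     *   gcc -O2 -mx32 chase.c -o chasex32   (ILP32, 4-byte pointers)
     * The x32 working set is half the size, so it stays in cache longer. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N (1L << 22)                   /* arbitrary: ~4M nodes */

    struct node { struct node *next; };

    int main(void) {
        struct node *pool = malloc(N * sizeof *pool);
        if (!pool) return 1;
        /* Scatter the links (an odd multiplier mod a power of two is a
         * bijection) to defeat the hardware prefetcher. */
        for (long i = 0; i < N; i++)
            pool[i].next = &pool[(i * 2654435761UL) % N];
        struct node *p = pool;
        for (long i = 0; i < 8 * N; i++)   /* serially chase pointers */
            p = p->next;
        printf("finished at %p\n", (void *)p);
        free(pool);
        return 0;
    }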

  • FuturePilot
    replied
    Good! 32-bit needs to die.

  • AndyChow
    replied
    Originally posted by chithanh View Post
    What? x32 is equal to or faster than amd64 in many cases. If you do not need the 64-bit address space or security benefits, then x32 is the most efficient choice, sometimes by far. It is also why most of the userspace on early 64-bit architectures like mips64 or sparc64 used to be 32-bit.
    I'm going to gently challenge you on this. Could you show me a single case where 32-bit userspace is faster than 64-bit userspace, not in practice, but theoretically? 64-bit might take more memory space, as in "used RAM", but I'm not aware of any situation where it's slower on a theoretical basis.

    I would agree that 32-bit can be more efficient in terms of used memory space, but I can't see how it could theoretically be faster.

    The reason 32-bit userspace was, and still is, so prevalent, IMO, is that the code hasn't been rewritten correctly.

    There have been cases where 32-bit code was faster, but from what I've seen that was because the 64-bit version used 32-bit data types and then converted them to 64-bit data types, which is obviously wasteful.

    In my "good old" VB days, I ran several tests on data types. Every book and reference, even the official Microsoft ones, said that if your integer was never going to exceed a certain value, you should use INT instead of LONG. But in testing, LONG was always faster than INT. Eventually an MS insider admitted to me that, internally, every INT was converted to a LONG each and every time the variable was accessed! It was even worse with BYTE. So every VB coder who ever tried to be more efficient by using INT or BYTE instead of LONG was completely fooled by the internal inefficiencies of an MS hack that made VB run smoothly. And to this day, some manuals will still tell you: "if you are sure it's always going to be between 0 and 255, use BYTE, it's really fast."

    Sure, real code doesn't behave this way, but rushed hacks to turn 32-bit userspace into 64-bit userspace might.
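
    A rough C analogue of that kind of rushed port (purely illustrative):

    Code:
    /* Ported in a hurry: accumulator and index stay 32-bit, so the
     * compiler must preserve 32-bit wrap-around and may emit extra
     * zero-extensions on a 64-bit target. */
    #include <stdint.h>
    #include <stddef.h>

    uint64_t sum_rushed(const uint32_t *a, uint32_t n) {
        uint32_t s = 0;
        for (uint32_t i = 0; i < n; i++)
            s += a[i];
        return (uint64_t)s;   /* widened only at the end, after wrapping */
    }

    /* Written for the target: native-width index, wide accumulator. */
    uint64_t sum_native(const uint32_t *a, size_t n) {
        uint64_t s = 0;
        for (size_t i = 0; i < n; i++)
            s += a[i];        /* each element widened once, up front */
        return s;
    }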

  • Mavman
    replied
    Originally posted by AndyChow View Post

    I've heard talk that the first 128-bit silicon might be produced by 2020... for RISC-V. You can already emulate it with riscvemu. From the little I understand, the idea is to separate memory management from memory protection and to have truly persistent pointers, potentially across networks or between exascale clusters. It won't be 128-bit flat indexing, but rather 64-bit indexing plus a 64-bit object ID. The 128 bits will be domain-wide, not just local, so a pointer can (potentially) be defined across a global-scale network of exascale clusters.

    And those exascale clusters are going to get built very soon. At least three are planned in Europe, one in the USA (officially; I'm sure DARPA wants a few), plus Japan and China.

    To give an idea of the scale, Japan's goal is for its exascale computer to consume less than 30 megawatts. That's roughly the power draw of 20,000 American residential homes (about 1.5 kW each).
    Great Scott! I'd never heard of it! Great tip, thanks!

  • chithanh
    replied
    Originally posted by sa666666 View Post
    Oh God, no. Let's just make a move to 64-bit entirely, and just drop the 32-bit stuff.
    What? x32 is equal to or faster than amd64 in many cases. If you do not need the 64-bit address space or security benefits, then x32 is the most efficient choice, sometimes by far. It is also why most of the userspace on early 64-bit architectures like mips64 or sparc64 used to be 32-bit.

  • RussianNeuroMancer
    replied
    Originally posted by sa666666 View Post
    Sort of like upgrading from IPv4 to v6, and people keep tacking stuff onto v4, or watering down v6. Can we just make a move already? This extreme backwards compatibility is making software development much more complicated than it has to be.
    What is the best way to do load balancing between two IPv6-enabled uplinks on OpenWrt/LEDE? For IPv4 I use mwan3, but what should I use for IPv6?

  • jpg44
    replied
    The big use for 32-bit images was running Linux as a guest in VMs. Until fairly recently, many CPUs did not have the hardware-assisted virtualization features; I've seen computers made just a few years ago that lack them. What this means is that a CPU without AMD-V or Intel VT-x cannot run a 64-bit guest OS, even if the host OS is 64-bit. This is because in 64-bit (long) mode, the CPU disables many of the x86 features (notably segment limits) that virtualization software relied on as workarounds on x86-32. The features needed for virtualization were added with AMD-V/VT-x, but any CPU without them cannot run a 64-bit guest.
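
    For what it's worth, you can check for those extensions from userspace; a minimal sketch using GCC's <cpuid.h> (bit positions per the Intel and AMD manuals; note the firmware can still disable the feature even when the bit is set):

    Code:
    #include <stdio.h>
    #include <cpuid.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;

        /* Intel VT-x: CPUID leaf 1, ECX bit 5 (VMX). */
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            printf("VT-x (VMX): %s\n", (ecx & (1u << 5)) ? "yes" : "no");

        /* AMD-V: CPUID leaf 0x80000001, ECX bit 2 (SVM). */
        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
            printf("AMD-V (SVM): %s\n", (ecx & (1u << 2)) ? "yes" : "no");

        return 0;
    }

    On Linux, grepping for vmx or svm in /proc/cpuinfo tells you the same thing.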
