Arch Linux Preparing To Deprecate i686 Support


  • nitrofurano
    replied
    Originally posted by sireangelus:

    Old code should die. It eats up testing and bugfixing resources that could be better focused. We are talking about literally doing every single job twice per release on every platform. Also, between KSM and terabyte-RAM servers, your argument is practically void. With KSM, if you deploy hundreds of 512 MB VMs on a server using the same OS base, you would notice that the non-deduplicable RAM is virtually zero.
    Yes, I see the point... but if "old code should die", why are we wasting time developing anything at all? O.o



  • sireangelus
    replied
    Originally posted by Weasel:
    Fuck off with this "Hopefully we'll see other Linux distribution vendors do a similar maneuver this year!"

    What do you gain out of it? Should I say all non-x86 ISAs should die and cheer if it hypothetically happens, just because I never touched an ARM device?

    32-bit is important, especially for small VM farms, because it uses less RAM. Not just RAM, but less disk space too. It's not only about "old hardware", it's about a CHOICE. A choice that is being taken away. I couldn't care less whether they test it on real ancient hardware or not! That's beside the point! I want the option to download Ubuntu (and other distros) as a 32-bit ISO, even with zero technical support, to use in my VMs.

    64-bit code tends to be larger. It isn't just the pointer size; the instruction ENCODING itself is larger. 99% of all Windows programs that ship in both 32-bit and 64-bit versions show this difference. On Linux the difference is smaller, but artificially so: 32-bit builds are compiled with worse settings than 64-bit ones.

    You guys speak of more registers; you realize those registers require a REX prefix to encode, which wastes one byte per instruction, right? You think one byte is not much? Think about this: x86 is a CISC architecture, and the most commonly used instructions are between 1 and 3 bytes. Adding a REX prefix byte makes them anywhere from 33% larger to double the size. Yeah, more registers help, because accessing the stack wastes a few bytes as well (in terms of code size), but it's not enough. 32-bit mode can also encode inc/dec instructions in one byte instead of the two or three bytes 64-bit mode needs.

    So why must I be forced to waste my RAM if I run a farm of 512 MB VMs??? Because some people "find it cool" to have stuff deprecated? Those people probably don't even use VM farms, so WHY THE FUCK do you care if x86 lives on or not? IT'S NOT FOR YOU ANYWAY. I'm so sick of seeing people cheering for this kind of bullshit decision when it doesn't affect them.

    The other point is software preservation. People watch old movies all the time, yet you think the ability to use old software or play old games deserves to be taken away? Yes, sometimes you need a VM for really old games. Where the fuck would you get the OS from if it's not available for download anymore?

    I'm speaking in general, not just about Linux.

    Rant off.
    Old code should die. It eats up testing and bugfixing resources that could be better focused. We are talking about literally doing every single job twice per release on every platform. Also, between KSM and terabyte-RAM servers, your argument is practically void. With KSM, if you deploy hundreds of 512 MB VMs on a server using the same OS base, you would notice that the non-deduplicable RAM is virtually zero.
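
    KSM is opt-in per mapping: memory must be flagged MADV_MERGEABLE before ksmd will scan it, which is what QEMU/KVM does for guest RAM. A minimal C sketch of the mechanism, assuming a kernel built with CONFIG_KSM and ksmd started via "echo 1 > /sys/kernel/mm/ksm/run":

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void) {
            size_t len = 64 * 1024 * 1024;  /* 64 MiB of anonymous memory */
            void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (buf == MAP_FAILED) { perror("mmap"); return 1; }

            memset(buf, 0xAA, len);  /* identical pages deduplicate well */

            /* Opt these pages in to KSM scanning. */
            if (madvise(buf, len, MADV_MERGEABLE) != 0) {
                perror("madvise(MADV_MERGEABLE)");
                return 1;
            }

            /* While this runs, /sys/kernel/mm/ksm/pages_sharing shows
               how many pages ksmd has merged. */
            pause();
            return 0;
        }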
    Last edited by sireangelus; 09 November 2017, 05:47 AM.



  • Weasel
    replied
    Fuck off with this "Hopefully we'll see other Linux distribution vendors do a similar maneuver this year!"

    What do you gain out of it? Should I say all non-x86 ISAs should die and cheer if it hypothetically happens, just because I never touched an ARM device?

    32-bit is important, especially for small VM farms, because it uses less RAM. Not just RAM, but less disk space too. It's not only about "old hardware", it's about a CHOICE. A choice that is being taken away. I couldn't care less whether they test it on real ancient hardware or not! That's beside the point! I want the option to download Ubuntu (and other distros) as a 32-bit ISO, even with zero technical support, to use in my VMs.

    64-bit code tends to be larger. It isn't just the pointer size; the instruction ENCODING itself is larger. 99% of all Windows programs that ship in both 32-bit and 64-bit versions show this difference. On Linux the difference is smaller, but artificially so: 32-bit builds are compiled with worse settings than 64-bit ones.
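
    To make the pointer-size half of that concrete, a tiny illustrative C sketch: the struct below is 8 bytes on an ILP32 build and 16 bytes on an LP64 build, because the pointer doubles in size and alignment padding follows.

        #include <stdio.h>

        /* Pointer-heavy data costs more under LP64. */
        struct node {
            struct node *next;  /* 4 bytes on ILP32, 8 on LP64 */
            int value;          /* 4 bytes either way */
        };

        int main(void) {
            printf("sizeof(void *)      = %zu\n", sizeof(void *));
            printf("sizeof(struct node) = %zu\n", sizeof(struct node));
            return 0;  /* prints 4 and 8 on ILP32, 8 and 16 on LP64 */
        }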

    You guys speak of more registers; you realize those registers require a REX prefix to encode, which wastes one byte per instruction, right? You think one byte is not much? Think about this: x86 is a CISC architecture, and the most commonly used instructions are between 1 and 3 bytes. Adding a REX prefix byte makes them anywhere from 33% larger to double the size. Yeah, more registers help, because accessing the stack wastes a few bytes as well (in terms of code size), but it's not enough. 32-bit mode can also encode inc/dec instructions in one byte instead of the two or three bytes 64-bit mode needs.
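
    A worked example of that encoding cost, using standard x86 opcode bytes:

        inc eax     -> 40           (1 byte,  32-bit mode only)
        inc eax     -> FF C0        (2 bytes, 64-bit mode; 0x40 became a REX prefix)
        inc rax     -> 48 FF C0     (3 bytes, 64-bit mode, REX.W)
        add eax, 1  -> 83 C0 01     (3 bytes, either mode)
        add r8d, 1  -> 41 83 C0 01  (4 bytes; reaching a new register costs a REX byte)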


    So why must I be forced to waste my RAM if I run a farm of 512 MB VMs??? Because some people "find it cool" to have stuff deprecated? Those people probably don't even use VM farms, so WHY THE FUCK do you care if x86 lives on or not? IT'S NOT FOR YOU ANYWAY. I'm so sick of seeing people cheering for this kind of bullshit decision when it doesn't affect them.

    The other point is software preservation. People watch old movies all the time, yet you think the ability to use old software or play old games deserves to be taken away? Yes, sometimes you need a VM for really old games. Where the fuck would you get the OS from if it's not available for download anymore?

    I'm speaking in general, not just about Linux.

    Rant off.



  • smitty3268
    replied
    Originally posted by nitrofurano:
    Why is this huge hurry needed? Why not stop supporting 256-bit architectures next year as well? Why should GNU/Linux distributions feed such consumerism and planned obsolescence? This situation is really sickening...
    One of the entire points of having multiple distros is to allow each to do their own thing and focus on areas that are important to them. If you're going to have a single distro do everything and focus on everyone, then why do all the others even exist?

    Also, we have very different definitions of "huge hurry." Even Microsoft, with its heavier reliance on proprietary 32-bit third-party apps, has stopped supporting 32-bit architectures on some lines of its OS.
    Last edited by smitty3268; 04 February 2017, 01:22 PM.



  • nitrofurano
    replied
    Why is this huge hurry needed? Why not stop supporting 256-bit architectures next year as well? Why should GNU/Linux distributions feed such consumerism and planned obsolescence? This situation is really sickening...



  • Hi-Angel
    replied
    Originally posted by caligula:
    There are a few generations of Atoms. The first ones have pretty weak in-order cores, comparable to weaker ARMs. The later ones (since Bay Trail?) have out-of-order execution.
    Well, from the Wiki table, after Bay Trail came Cherry Trail, which supports 64-bit as far as I can see. And Bay Trail itself is an unsupported (or poorly supported?) CPU as far as Intel is concerned, so there's no reason to use it anyway. I mean, you could consider Bay Trail dead even before Arch Linux declared i686 deprecated.



  • geearf
    replied
    Originally posted by JGC_:
    i686 is not "optimized" and "fast" anymore.
    Would you say that x86_64 is now?



  • vsteel
    replied
    Originally posted by Adarion:
    For those who just blare "yeah, kill non-64-bit x86 with fire": you have no clue. Or you are too young. Or both.
    There are still enough machines out there doing a fine job "even" with a "lowly" 32-bit x86 CPU. Automation systems, embedded systems, machines in private households, boxes driving expensive measurement devices in laboratories... It's good that there are still some who will support it.
    If you are older and have been around those expensive lab machines and industrial equipment, then you know that the software on those boxes doesn't get updated. It chugs along quietly; I am still around equipment running Windows NT 3.51 and Solaris 2.6. There is no reason to upgrade: it does what it is supposed to do, and no one wants to take the chance that it breaks.

    If you have nostalgia or need an older box, it isn't like the current versions of the software are suddenly going to stop running.



  • caligula
    replied
    Originally posted by leipero:
    It's about time. Google dropped support for 32-bit systems, and let's be honest here: the last 32-bit-only desktop CPUs from AMD were the Athlon XPs; the whole Socket 754 line is 64-bit, and I think even the Semprons are (Sempron 64?). If we look at it objectively, those CPUs are pretty much unusable for even simple web-based tasks. On the Intel side, the Atoms? I don't know how capable the fastest CPUs from that line are, but I doubt they are much above Athlon 64 level.

    Core 2 Duos are still very capable CPUs, perfect for basic web-based tasks, and they support the 64-bit architecture, so there's no reason to blacklist those CPUs; it would be illogical and the equivalent of shooting yourself in the foot.
    There are a few generations of Atoms. The first ones have pretty weak in-order cores, comparable to weaker ARMs. The later ones (since Bay Trail?) have out-of-order execution. Still, the power comes from efficient co-processors (iGPU, video decoders, accelerated instructions, e.g. SSE, AVX, AES), not from the basic execution engine. So the platform pretty much needs good drivers, since it can't decode video in software.

    I'm pretty sure that old ~100 W TDP high-end CPUs (even 32-bit single cores) can be faster than Atoms, especially when overclocked and paired with modern discrete GPUs. A 5 W SoC can't really compete with a 100 W CPU and a 200 W discrete GPU, both having dedicated wide multi-channel memory buses. Especially now that GUI toolkits are moving towards hardware acceleration and all video is accelerated, there's little need for efficient CPUs outside of games and stupid JS-heavy web sites.



  • leipero
    replied
    It's about time. Google dropped support for 32-bit systems, and let's be honest here: the last 32-bit-only desktop CPUs from AMD were the Athlon XPs; the whole Socket 754 line is 64-bit, and I think even the Semprons are (Sempron 64?). If we look at it objectively, those CPUs are pretty much unusable for even simple web-based tasks. On the Intel side, the Atoms? I don't know how capable the fastest CPUs from that line are, but I doubt they are much above Athlon 64 level.

    Core 2 Duos are still very capable CPUs, perfect for basic web-based tasks, and they support the 64-bit architecture, so there's no reason to blacklist those CPUs; it would be illogical and the equivalent of shooting yourself in the foot.

