Ubuntu MATE / Studio / Budgie All End Their 32-bit ISOs For New Releases

  • ferry
    replied
    Originally posted by Faalagorn View Post
    Interesting – do you have any further reads on that? I own a dirt cheap (bought some time ago for the equivalent of <$30) Baytrail Atom tablet that I installed 64-bit Linux on (Atom Z3735G; unfortunately with only 1GB of soldered RAM).
    I did some benchmarking with the base64 and crc32c codecs here https://github.com/htot/base64 and here https://github.com/htot/crc32c. You can find Intel's documentation here: https://software.intel.com/sites/def...ion-manual.pdf in section 16.2.1.2. The issue I ran into is related to exceeding the 3-byte prefix/escape limit on certain x86_64 instructions like crc32q, causing a 3-6 cycle penalty. Interestingly, there is an LSD (loop stream detector, section 16.2.1.4) which eliminates the penalty, so having the compiler optimizer unroll loops will actually be much slower than keeping the loop so that the LSD kicks in. I found that the LSD only kicks in after a certain number of loop iterations (I could not find documentation on the threshold, but Intel engineers suggested 100 or so). So for crc32q this gives an optimization opportunity for very long buffers.

    On 32-bit there is one prefix fewer, so none of these complications happen. You lose a factor of 2 in throughput, which costs less than the 3-6 cycle penalty.
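    For context, here is a plain bitwise sketch of the CRC32C checksum that crc32q accelerates. The function name and code are my own illustration, not taken from the linked benchmark repos:

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78).
     * This is the same checksum that the x86_64 crc32q instruction
     * computes 8 input bytes at a time in hardware. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* Standard CRC32C check value for the ASCII digits "123456789". */
        uint32_t crc = crc32c((const uint8_t *)"123456789", 9);
        printf("crc32c(\"123456789\") = 0x%08X\n", crc);
        assert(crc == 0xE3069283u);
        return 0;
    }
    ```

    The hardware instruction folds 8 input bytes per crc32q instead of the bit-at-a-time loop above, which is exactly where the prefix-length and LSD effects described in the post come into play.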

  • Faalagorn
    replied
    Originally posted by ferry View Post
    OTOH it is true on Atom Silvermont (certain Baytrail) that certain 64b instructions run 3x slower than their 32b equivalents.
    Interesting – do you have any further reads on that? I own a dirt cheap (bought some time ago for the equivalent of <$30) Baytrail Atom tablet that I installed 64-bit Linux on (Atom Z3735G; unfortunately with only 1GB of soldered RAM).

  • ssokolow
    replied
    Originally posted by leipero View Post
    (there were no 16-bit versions of Windows afaik, even though the 16-bit DOS kernel and layers were used).
    I'd have to double-check the system requirements, but I'm pretty sure that "Standard Mode" in versions of Windows prior to 3.11 for Workgroups (which made 386 Enhanced Mode mandatory) was developed for running Windows on 286 CPUs, where protected-mode operation was more or less never used because you could only drop back to real mode by resetting the CPU.

  • kneekoo
    replied
    Originally posted by slacka View Post
    Why don't people understand that newer isn't necessarily better? My gorgeous 16:10 Precision laptop has amazing build quality and a screen you can't buy anymore. With PAE I can use all 4GB of RAM even though my Core CPU is 32-bit only. Why push people to generate e-waste when many 32-bit machines are still more than powerful enough for their users' needs? For basic web browsing and SSH'ing that CPU is still overkill.

    It's not just low-RAM consumer hardware. My company saved terabytes of RAM by running its low-RAM servers 32-bit. Delaying the transition to 64-bit allowed them to skip at least one hardware upgrade cycle.
    I know exactly what you mean. I am actually pissed off at how short-sighted the distro makers are. It makes them look like newbies to technology. I actually touched on the economic advantage of 32-bit in data centers in this article.

  • Artemis3
    replied
    All you need to do is switch to a distro that still supports 32-bit, or stick to the LTS until it's over. I switched my 32-bit netbook to Voidlinux for that very reason.

  • Vistaus
    replied
    Originally posted by calc View Post

    You might be surprised at how much memory a few open modern webpages take. Facebook, Google Docs, etc. take huge amounts of memory. I wouldn't recommend anyone get anything less than 8GB of RAM in a system. Even the oldest system I still have, which is 10 years old, has 8GB in it, which was the most it could take. My newer systems all have 32GB and I'm frequently using over half of that. Pretty much any system other than the cheapest at somewhere like Wal-Mart has at least 8GB in it; a friend even gave me a system they didn't want anymore that had 12GB.
    You missed my point. I was talking about *casual* users, who do nothing more than open up their e-mail application, Facebook, etc., and we were talking about Linux. My laptop has a weak 2015 Intel Celeron CPU and 4GB of RAM, and with the Solus Plasma Edition it's flying and I never run out of memory. And I even do a bit more than casual users do. So 4GB is plenty for casual users.

  • DrYak
    replied
    Regarding the whole bittage discussion:

    - the 16-bit to 32-bit mainstream PC transition took quite some time, because you needed completely new 32-bit OSes built for that hardware (old stuff like MS-DOS was designed with 16-bit segmented memory in mind), and no company wanted to throw money at building an OS for PCs that weren't popular: it took some time between the first machines featuring a 386 CPU and the point when 32-bit PCs were popular enough to make developing OSes that leverage the protected 32-bit modes worthwhile.

    - the 32-bit to 64-bit transition on PCs running Linux happened virtually *overnight*. By then there was already quite some experience running Linux on various 64-bit architectures on big servers (e.g. 64-bit SPARC and MIPS), so most of the software found in a typical Linux distro had already been somewhat tested on 64-bit systems. And AMD had spent effort providing AMD64 testing hardware and simulators. So by the time AMD64 processors hit the shelves, a couple of AMD64-ready distros were already available for download (e.g. Suse Linux 8.2 had an x86_64 variant released on the download servers. It was a bit buggy, some software not having been 100% tested in 64-bit compiles, but it generally worked).
    And in essence, 64-bit Unix systems are "just" 32-bit systems with wider pointers - they more or less use the same "flat" memory model; there isn't that much new conceptually. (Nothing like the segmented 16-bit to flat 32-bit redesign that Microsoft needed to do between old DOS and more modern OSes.)
    Linux distros "basically" needed to "just recompile" the source code with 64-bit memory pointers instead of 32-bit, and that's roughly it (the devil being in the small details, with bugs here and there in code that assumed pointers would always be at most 32 bits).
    As an example: I was literally able to install Suse 8.2 AMD64 right after I brought the CPU back from the shop.

    - the 32-bit to 64-bit transition on PCs running *Windows* took ages because the most critical reason to upgrade only becomes apparent when you need more than 4GiB of RAM (PAE not being available on non-server/business variants of Windows).
    For the rest, *legacy* was the thing that dragged Windows back:
    running 16-bit code on x86_64 is complicated, and there were still quite some oldies in the Windows world, especially in the corporate world;
    32-bit software doesn't benefit much from running on a 64-bit OS (it can't use the new registers, etc.);
    and as most software in the Windows world is closed source, there's no access to source code that you could recompile or port to 64-bit.
    (In fact the opposite: even after XP64 got released and after Vista was available for x86_64 CPUs, you often had to *stick* to 32-bit browsers due to obscure plug-in compatibility.)

    - 64-bit to 128-bit doesn't make much sense on consumer hardware.
    64 bits gives more distinct addresses than there are grains of sand on Earth. There's no way you're going to have that much memory packed into consumer hardware in the near future. To the point that most 64-bit consumer systems internally use only a 48-bit address space, which is more than enough to map everything consumer software uses (that represents 256 TiB worth of addresses to map to anything needed, be it RAM, swap, peripherals, memory-mapped storage, etc.).
    128-bit systems only make sense in some very specific settings, like large-scale clusters that need multiple exabytes of storage mapped into the address space and still want a few extra bits of headroom.
    There is no technical barrier to 128-bit pointers, just a practical one: there's currently not enough to map into that many distinct addresses.
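    Just to put numbers on the sizes above, a quick illustrative computation (nothing authoritative, only unit conversions):

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Current x86_64 parts expose a 48-bit virtual address space:
         * 2^48 bytes = 256 TiB (1 TiB = 2^40 bytes). */
        uint64_t va_bytes = 1ULL << 48;
        uint64_t va_tib   = va_bytes >> 40;
        printf("48-bit address space: %llu TiB\n", (unsigned long long)va_tib);
        assert(va_tib == 256);

        /* A full 64-bit space would be 2^64 bytes = 16 EiB. 1ULL << 64
         * overflows, so express it as 2^(64-60) EiB (1 EiB = 2^60 bytes). */
        uint64_t full_eib = 1ULL << (64 - 60);
        assert(full_eib == 16);
        return 0;
    }
    ```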

    - The bittage itself is a bit of a misnomer on PCs (unlike the 8-bit/16-bit/32-bit systems of old) because it actually only designates the size of pointers. Lots of different things use different bit widths: memory buses tend to be wider than that, especially on graphics cards; floating-point registers can be wider than 64 bits internally (even the old 387 used an 80-bit internal representation); etc.

    - The thing that actually makes sense is doing computations on more than 64 bits at a time, and that is extremely useful even now: either when manipulating extremely large numbers (see cryptography: 4096-bit RSA keys are a thing) or when manipulating lots of data in one go.
    Which is actually what SIMD extensions have been doing for ages: SSE is 128-bit, AVX is 256-bit, and AVX-512 is starting to become an actual thing on Intel hardware (meaning it can manipulate 64 8-bit bytes in a single go).
    Graphics cards have been manipulating extremely large vectors (due to parallel processing) for quite some time.

    So in a way, we passed the 128-bit transition on consumer PCs quite some time ago... if you look exclusively at the data manipulated in computations (SIMD, etc.). Pointers and address space are sticking to 64 bits max because there's no need to go beyond that in the foreseeable future.
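    The multi-word arithmetic that such big-number (e.g. RSA) code leans on can be sketched as a 128-bit add built from two 64-bit halves. The type and function names here are my own, purely for illustration:

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative 128-bit unsigned integer built from 64-bit words,
     * the kind of multi-word arithmetic big-number libraries do. */
    typedef struct { uint64_t lo, hi; } u128;

    static u128 u128_add(u128 a, u128 b)
    {
        u128 r;
        r.lo = a.lo + b.lo;
        r.hi = a.hi + b.hi + (r.lo < a.lo);  /* carry out of the low word */
        return r;
    }

    int main(void)
    {
        /* (2^64 - 1) + 1 must carry into the high word. */
        u128 a = { UINT64_MAX, 0 };
        u128 b = { 1, 0 };
        u128 s = u128_add(a, b);
        assert(s.lo == 0 && s.hi == 1);
        printf("carry propagated: hi=%llu lo=%llu\n",
               (unsigned long long)s.hi, (unsigned long long)s.lo);
        return 0;
    }
    ```

    A 4096-bit RSA modulus is just 64 such 64-bit words chained with the same carry logic.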

  • leipero
    replied
    Vistaus Yes, it is reasonable to assume that when the first market/user-oriented 128-bit PC comes, 32-bit will be like 16-bit was in the days when 64-bit was introduced. However, look how fast the industry moved to 32-bit when it was introduced in the consumer sector, compared to 64-bit. You got it already in Windows 95, and the adoption rate was "forced" (there were no 16-bit versions of Windows afaik, even though the 16-bit DOS kernel and layers were used). But the transition to 64-bit took so much more time that it's reasonable to assume no one will rush towards 128-bit in the consumer market, especially when there is no realistic need. Office use or not, if a limitation is "physical" and even just one use case is affected by it, you can expect a push towards the transition, and I simply do not see that happening anytime soon (meaning 3 to 10 decades) for the consumer market. I could be totally wrong, but it is a huge stretch to assume that the consumer market would use 18 million terabytes in the next 100 years, and I seriously doubt the consumer market will ever approach that number; the reasons are mainly physical limitations, unlike in the past, where the reasons were mainly technological limitations. Unless we are talking about quantum computing, which might bring a revolution, but then talking about the number of bits becomes useless, and I do not really understand how qubits address and use memory in those experimental computers, and even less do I understand what the actual memory for computing is. Is it a physical electronic object/transistor, or is it on an atomic/subatomic basis?
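    The "18 million terabytes" figure checks out: 2^64 bytes is roughly 18.4 million decimal terabytes. A quick illustrative computation:

    ```c
    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        /* 2^64 bytes expressed in decimal terabytes (1 TB = 10^12 bytes). */
        double bytes = 18446744073709551616.0;   /* 2^64 */
        double tb = bytes / 1e12;
        printf("2^64 bytes = %.1f million TB\n", tb / 1e6);
        assert(tb > 18.4e6 && tb < 18.5e6);
        return 0;
    }
    ```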

    dungeon I can't, because I see a completely different problem compared to what the problem was historically. That doesn't make me right on the matter; it's just what I see.

    As for resolution, yeah, I see your point. I expressed myself badly; what I meant to say is that the game has its own resolution that is not native to LCD displays. I know that the NES used an 8:7 AR; every decent emulator actually has that option, and some (like Nestopia, I think) even force 8:7 by default. However, like many others, I used to play those games at 4:3 AR on a CRT television, so I'm used to it "from the beginning". The thing is, LCDs have a native resolution, and that's their main disadvantage compared to CRTs, which can scale and look great at any of the resolutions used by games (as shown in the video you linked, it's more in the developers' hands to choose).
    As a side note, there were other things that can make retro gaming more confusing to people not familiar with hardware and regions. For example, I used to play the Sega Genesis (known as the Mega Drive in Europe) in a PAL region, which was 50Hz; however, Japan and the US (and some others) used the NTSC (60Hz) standard, which has a lot to do with the frequency of the electrical grid, and since Sega (and the other console makers) were Japanese in origin, the consoles were created with NTSC in mind. Long story short, I played Sonic for the first time on a PAL system and thought nothing of it. Later, when I attempted emulation back in the 2000s for retro, I had a "feeling" the game was really different, but because a lot of time had passed since I last played, I couldn't figure out what the problem was (and the fact that I got NTSC ROMs from the CD didn't help either...). Only later did I find out that the game was faster because it was emulated on the NTSC standard and the ROMs were NTSC; by that time I had gotten used to it, and PAL felt strange to me.

  • slacka
    replied
    Originally posted by kneekoo View Post
    The people using 15+ year old hardware surely have to understand that it's time to move on to something newer. Second-hand PCs capable of running 64-bit software are quite cheap, and that's great news.
    Why don't people understand that newer isn't necessarily better? My gorgeous 16:10 Precision laptop has amazing build quality and a screen you can't buy anymore. With PAE I can use all 4GB of RAM even though my Core CPU is 32-bit only. Why push people to generate e-waste when many 32-bit machines are still more than powerful enough for their users' needs? For basic web browsing and SSH'ing that CPU is still overkill.

    Originally posted by kneekoo View Post
    As long as there's still new hardware being sold with low RAM, there's a need for 32-bit software. The software people mostly seem to ignore this, the users are obviously not technically apt enough to understand the problem, and here we are, looking at more distros taking the options away, letting a lot of people trash their storage devices with swap. Great!
    It's not just low-RAM consumer hardware. My company saved terabytes of RAM by running its low-RAM servers 32-bit. Delaying the transition to 64-bit allowed them to skip at least one hardware upgrade cycle.
    Last edited by slacka; 07 May 2018, 04:39 PM.

  • dungeon
    replied
    Originally posted by leipero View Post
    calc Also, if you want to use the native resolution and play the game as it was "meant to be played", LCDs are out of the question. Since my last CRT died last year, I find LCDs useless for NES/SNES games, and all the tricks of using OpenGL instead of software drawing, and blurs and all sorts of "improvements", just make the game look far worse compared to how it should look. Plus, on 3 different low-end displays I have a problem with blur that turns "network-like" 8-bit textures into c**p when the camera is moving (for example, the fence on the first level of the first Ninja Gaiden/Ryukenden title); it just hurts my eyes and there's no way I can solve it (unless I buy an extremely expensive display that MIGHT solve the problem, but I seriously doubt it).
    There is no such thing as "native resolution" or "meant to be played"; that reminds me only of some company's marketing slogan. The NES/SNES actually internally do 8:7; that is exactly the correct pixel aspect ratio for what the hardware does... so it was stretched to 4:3 anyway on CRTs (that is what most people remember of it, but what most remember is slightly incorrect, really):



    It is really just 8:7, which is what the hardware does. Now, some games improperly fixed it for 4:3 because, well, most users used 4:3 CRTs... so you have some of these games which look best at one AR or the other.
    Last edited by dungeon; 07 May 2018, 03:02 PM.
