The Linux Kernel May Finally Phase Out Intel i486 CPU Support


  • Dumb question; with some companies still actively making 1GHz+ 486 CPUs, what does this mean for them?


    • Originally posted by Eirikr1848:
      Dumb question; with some companies still actively making 1GHz+ 486 CPUs, what does this mean for them?

      It is not quite that straightforward.

      A 486-compatible CPU running at 1 GHz would be a Vortex86. Those parts are not only 1 GHz but dual-core, and they implement the i586 instructions plus some of the i686 instructions; the dual-core support requires that.

      CPUs limited to the i486 instruction set or earlier are not that fast. You are talking 300 MHz systems at most, not 1 GHz. All the parts you see at 400 MHz or faster are dual-cores, so they have i586/i686 instructions.

      The catch is that the last batch of pure 486 chips is not that far in the past:

      https://www.datasheetarchive.com/wha...28ef63d18.html
      2013 was the last batch of i486-or-earlier-only CPUs made for the embedded market, so those parts are just coming up on ten years old.
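As a rough illustration of the instruction-set tiers in play here, a small script can classify a CPU from its /proc/cpuinfo flags line. The flag names (tsc, cx8, cmov) are the ones the kernel reports, but the sample flag strings below are made up for illustration, not real dumps:

```python
# Sketch: classify an x86 CPU's baseline from its /proc/cpuinfo "flags"
# line. CPUID-era i586 parts gained TSC (rdtsc) and CX8 (cmpxchg8b);
# CMOV is the usual marker for an i686-class part.
def x86_baseline(flags):
    flags = set(flags.split())
    if "cmov" in flags:
        return "i686"
    if {"tsc", "cx8"} <= flags:
        return "i586"
    return "i486 or earlier"

# Illustrative flag lines (hypothetical, not copied from real hardware):
print(x86_baseline("fpu tsc cx8 mmx"))   # i586 (Vortex86-style part)
print(x86_baseline("fpu"))               # i486 or earlier
```

On a real system you would feed it the flags line from /proc/cpuinfo; the point is only that "486-class" and "1 GHz dual-core" imply different feature tiers.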



      • Originally posted by Eirikr1848:
        Dumb question; with some companies still actively making 1GHz+ 486 CPUs, what does this mean for them?
        https://en.wikipedia.org/wiki/Linux_...ersion_history
        https://wiki.linuxfoundation.org/civ...platform/start
        For SLTS v5.10, the projected EOL is 2031-01.

        Supposedly Linux 6.1 will be the next SLTS, and it supports i486.

        So they have time until 2031 or even more. After that, detach the system from the LAN and carry on.



        • Not sure if it's been mentioned yet (I wanted to add this months ago!), but NASA used 486 CPUs because the older, larger components are less susceptible to space radiation. So... what about that?



          • Originally posted by timpster:
            Not sure if it's been mentioned yet (I wanted to add this months ago!), but NASA used 486 CPUs because the older, larger components are less susceptible to space radiation. So... what about that?

            NASA started moving off the 486 a while back. Radiation resistance does require more design consideration the smaller the process node, but the limit on what node works in space has moved forward a lot, and space operation requirements now exceed what a 486 can offer in processing power.
            Legacy space systems will be around for a while, but the newer core systems NASA uses will most likely be ARM or RISC based.



            • Normal thing: legacy hardware gets deprecated.



              • Originally posted by Svyatko:

                Firefox & Mesa 3D require SSE2; the Chrome browser requires SSE3.
                My system with 3 GiB RAM runs faster in 64-bit than in 32-bit.
                With a small amount of RAM, Linux with swap on an NVMe drive works OK.
                32-bit OSes support only 2 GiB for the system + 2 GiB for applications.
                Forget about 32-bit OSes for modern web browsing and desktop use.
                A new AM4 A320 mobo + A6 APU + 4-8 GiB of RAM can cost about $100 and supports x86-64-v3.
                Just ran across that and could not help myself...

                In true x86 manner, once the 32-bit x86 architecture hit its natural 32-bit address limit, it found a way to expand past it via paging extensions (PAE, Physical Address Extension).

                So 32-bit CPUs and OSes soon supported 2 GB per system and 2 GB per application, and at least on the application side you could have many of them; Xen actually remained 32-bit code running 64-bit operating systems for a long time.

                Yet an extended base architecture (x86_64 in this case) replaced the older extension scheme before most systems ever reached its full potential. In theory PAE could have gone to full 64-bit physical addresses, but in practice it only ever did 36 bits, or 64 GB of RAM. By the time that amount of RAM became a reality, x86_64 had long won.

                And it was the same with the 80286, which offered a truly whopping 16 megabytes of RAM: far too much memory to physically fit into a box at the time, and far too expensive to own. Even most 80386 systems, which came only two years later, didn't reach 16 MB for a while, and I doubt a full 16 MB 80286 system was ever run outside a lab.

                A 64 GB PAE system before AMD64 came along? I'd doubt that, too, but of course you could run a PAE OS on a 64 GB laptop today, fully exploiting all that RAM without a single bit of 64-bit code (well, you could actually run 64-bit instructions from a 32-bit OS and application, but that's another matter).

                The 80286 could have run quite huge applications, and there is no reason you couldn't run even today's biggest applications as collections of server-less elements written in chunks of at most 2 GB of RAM. AFAIK nobody is running a system with 16 exabytes of RAM (the theoretical PAE maximum).
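The address-space ceilings being compared in this thread all reduce to powers of two; a quick sketch of the arithmetic:

```python
# Addressable memory for the address widths discussed above.
def addressable_bytes(bits):
    """Bytes reachable with an address of the given bit width."""
    return 2 ** bits

MiB, GiB, EiB = 2 ** 20, 2 ** 30, 2 ** 60

print(addressable_bytes(24) // MiB)  # 16  -> 80286: 16 MB
print(addressable_bytes(32) // GiB)  # 4   -> plain 32-bit x86: 4 GB
print(addressable_bytes(36) // GiB)  # 64  -> PAE as actually shipped: 64 GB
print(addressable_bytes(64) // EiB)  # 16  -> full 64-bit: 16 exabytes
```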

                So strictly speaking you're wrong, but then again you're right, because it's so much easier to use the bigger address space... even if it wastes quite a lot of address bits and RAM in doing so.

                Which then inspires people to 'abuse' those bits for memory tagging.
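A minimal sketch of that kind of bit "abuse", assuming a 48-bit virtual address space (as on current x86_64) so the top 16 bits of a 64-bit pointer are free to carry a tag; the addresses and tag values are made up for illustration:

```python
# Sketch: stashing metadata in the unused high bits of a 64-bit pointer,
# in the spirit of memory-tagging schemes. Assumes 48 usable VA bits.
VA_BITS = 48
ADDR_MASK = (1 << VA_BITS) - 1

def tag_pointer(addr, tag):
    """Pack a tag into the high bits above the 48-bit address."""
    return (tag << VA_BITS) | (addr & ADDR_MASK)

def untag_pointer(tagged):
    """Strip the tag, recovering the plain address."""
    return tagged & ADDR_MASK

p = tag_pointer(0x7f00_dead_beef, 0xA5)
print(hex(untag_pointer(p)))  # 0x7f00deadbeef
print(hex(p >> VA_BITS))      # 0xa5
```

Real hardware schemes (e.g. ARM's top-byte ignore) work on the same principle: the MMU masks the tag bits off before translation.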

