Linux Set To Shed Nearly 500k Lines Of Code By Dropping Old CPUs


    Phoronix: Linux Set To Shed Nearly 500k Lines Of Code By Dropping Old CPUs

    As expected, the Linux 4.17 kernel will move ahead with dropping support for older/unmaintained CPU architectures...


  • #2
    TBH I haven't even heard of some of those, but it's still sad to see support for any kind of hardware go. This broad architecture support is a real strength of Linux (and FOSS in general): you can always use some older or more exotic hardware for your purposes.
    I remember having read about Blackfin, and it feels like it wasn't so long ago that Linux was ported to it. Now it's already called obsolete? And code that is not active at runtime doesn't drag down others. Of course, you need maintainers to keep the code compilable and working.

    On the other hand, I don't know what "ported" actually means in these cases. Maybe some of those architectures had virtually no userland, and just the kernel would boot without even a real shell. In that case the use is so limited that it would probably be okay to drop the code.


    • #3
      Maybe users of all these legacy architectures can transition to ARM or RISC-V?


      • #4
        It is not as if Linux will no longer work on these architectures. It simply means that the latest kernels will not support them; one can always keep using older kernel revisions. As there is apparently little to no development going on for these platforms, that might be enough for their use cases.


        • #5
          Originally posted by uid313 View Post
          Maybe users of all these legacy architectures can transition to ARM or RISC-V?
          It's more like there aren't any users of these architectures anymore... well, I think Tilera would be an exception. I'm not sure why they are dropping it, as those chips are, as far as I know, still produced by Mellanox, which bought EZChip and therefore Tilera.

          They are definitely still actively sold in high-end routers from MikroTik too...
          Last edited by cb88; 02 April 2018, 09:17 AM.


          • #6
            Originally posted by Adarion View Post
            And code that is not active at runtime doesn't drag down others.
            It drags down developers. Say they are working on virtual memory or the CPU scheduler: this architecture code might need to be updated just so that it still compiles. Sometimes a change can break this code, so you end up with broken code in the stable release, which is arguably worse than not having it at all.
            Last edited by paulpach; 02 April 2018, 11:21 AM.


            • #7
              Originally posted by cb88 View Post

              It's more like there aren't any users of these architectures anymore... well, I think Tilera would be an exception. I'm not sure why they are dropping it, as those chips are, as far as I know, still produced by Mellanox, which bought EZChip and therefore Tilera.

              They are definitely still actively sold in high-end routers from MikroTik too...
              M32R, Blackfin, FR-V, and Hexagon are most assuredly still used - just mostly with RTOSes, not Linux.

              Tilera is still shipped by MikroTik but is effectively a dead end - it's been replaced by the ARM-based BlueField family.


              • #8
                T-minus 500k lines of code until kernel 5.0


                • #9
                  Originally posted by paulpach View Post

                  It drags down developers. Say they are working on virtual memory or the CPU scheduler: this architecture code might need to be updated just so that it still compiles. Sometimes a change can break this code, so you end up with broken code in the stable release, which is arguably worse than not having it at all.
                  Which is why I argue that, in this particular area of kernel design, Microsoft was right: the kernel shouldn't care about architecture-specific code; it should all be handled externally.


                  • #10
                    Originally posted by gamerk2 View Post
                    Which is why I argue that, in this particular area of kernel design, Microsoft was right: the kernel shouldn't care about architecture-specific code; it should all be handled externally.
                    Since when has Microsoft had anything interesting to say about community-developed multi-arch kernels?

                    Also, keeping everything unified (either at the code or the repo level) means that everyone shares in performance improvements, which is one of the main selling points of open source. If everything were neatly compartmentalized so that whatever module you write had zero effect on anything else, there would be very little cooperation and mutual improvement.

                    Microsoft has a proprietary product they DO NOT want third parties to touch, so it makes sense to make everything as compartmentalized as possible so that third parties can extend it without having to know anything about the kernel.
