AVR32 Architecture Called For Removal From Mainline Linux Kernel


  • AVR32 Architecture Called For Removal From Mainline Linux Kernel

    Phoronix: AVR32 Architecture Called For Removal From Mainline Linux Kernel

    It looks like the mainline Linux kernel will support one less CPU architecture come Linux 4.12...

    http://www.phoronix.com/scan.php?pag...-Mainline-4.12

  • #2
    Nowadays all processors which don't support 64-bit could be discontinued, in order to abandon the 32-bit platforms in favour of 64-bit operating systems. This choice would save development time.



    • #3
      Originally posted by Azrael5 View Post
      Nowadays all processors which don't support 64-bit could be discontinued, in order to abandon the 32-bit platforms in favour of 64-bit operating systems. This choice would save development time.
      I don't like the idea of discontinuing a perfectly working and still used architecture.
      I still use a 32-bit processor in one of my PCs, so I hope they will wait some years before doing that. Just because 32-bit PCs are no longer used in rich countries and rich households doesn't mean they aren't used at all. The kernel supports much more exotic things than 32-bit; I think the development time for this is well spent.



      • #4
        Typo:

        Originally posted by phoronix View Post
        This initial dropping of AVR32 architecure code



        • #5
          Originally posted by Azrael5 View Post
          Nowadays all processors which don't support 64-bit could be discontinued, in order to abandon the 32-bit platforms in favour of 64-bit operating systems. This choice would save development time.
          Not so fast. A lot of embedded devices are 32-bit only, and the Linux kernel is king in that environment.



          • #6
            Seems reasonable to remove AVR32. This was Atmel's attempt at a 32-bit architecture for microcontrollers, and it failed pretty miserably in the market: it never took off and was quickly displaced by ARM's Cortex-M line of cores.



            • #7
              Originally posted by Azrael5 View Post
              Nowadays all processors which don't support 64-bit could be discontinued, in order to abandon the 32-bit platforms in favour of 64-bit operating systems. This choice would save development time.
              While an argument can be made in the closed-source world for cost/time savings with narrowly focused non-portable programming, it doesn't work that way with open source. With F(L)OSS, code re-use with deployment on heterogeneous, highly scalable systems is the norm, and the higher code quality resulting from portable development keeps progress moving forwards instead of stagnating or becoming trapped in isolated silos. I'm very concerned about how the attitude seems to be moving away from the practices which have allowed us to develop the systems we now seem to take for granted. Linux distributions just work on many hardware platforms because the disparate parts are able to operate together and can adapt to changes in the underlying technology through source code portability. I'm beginning to feel like a voice in the wilderness...



              • #8
                Originally posted by Azrael5 View Post
                Nowadays all processors which don't support 64-bit could be discontinued, in order to abandon the 32-bit platforms in favour of 64-bit operating systems. This choice would save development time.
                Please learn to distinguish between x86 32-bit and <any other arch> 32-bit. For x86 the 32-bit-only processors are ancient shit; for any other arch they aren't. Most networking equipment and embedded devices use 32-bit processors, and there is no good reason to move them to 64-bit.

                AVR32 is a failed architecture, and it makes sense to just let it go.



                • #9
                  Originally posted by s_j_newbury View Post
                  While an argument can be made in the closed-source world for cost/time savings with narrowly focused non-portable programming, it doesn't work that way with open source.
                  Yes, it works that way in open source too, because the man-hours provided by paid devs and volunteers are still finite. If something stops making sense, it gets dropped.

                  Sure, in the Linux kernel and FOSS, stuff usually gets dropped when it's really OK to drop it (like the ancient processor Linux was born on, or 15+ year-old failed GPUs, or this failed architecture) and not because of commercial decisions, so that's far better; but don't think that stuff is kept alive for free just because of ideals.

                  Linux distributions just work on many hardware platforms because the disparate parts are able to operate together and can adapt to changes in the underlying technology through source code portability. I'm beginning to feel like a voice in the wilderness...
                  FYI: kernel code isn't portable but multi-platform. There is a core that works more or less the same everywhere, but a large part of it is platform-specific. If the platform-specific part is wasting everyone's time or is just sitting there bitrotting (i.e. unmaintained, which is far worse), then it makes sense to drop it.

                  Please don't be too attached to stuff, it comes and goes.



                  • #10
                    Originally posted by starshipeleven View Post
                    Yes, it works that way in open source too, because the man-hours provided by paid devs and volunteers are still finite. If something stops making sense, it gets dropped.

                    Sure, in the Linux kernel and FOSS, stuff usually gets dropped when it's really OK to drop it (like the ancient processor Linux was born on, or 15+ year-old failed GPUs, or this failed architecture) and not because of commercial decisions, so that's far better; but don't think that stuff is kept alive for free just because of ideals.
                    That was in response to Azrael5. I've no problem with dead code being dropped. Anyway, with an SCM project like the Linux kernel it's always there in the history.
                    FYI: kernel code isn't portable but multi-platform. There is a core that works more or less the same everywhere, but a large part of it is platform-specific. If the platform-specific part is wasting everyone's time or is just sitting there bitrotting (i.e. unmaintained, which is far worse), then it makes sense to drop it.
                    Actually, the Linux kernel started out entirely non-portable, i386 specific, much of it written in ASM. It took a lot of work to hack it to run on DEC Alpha back in the day. Since then, the policy has been to produce portable properly abstracted new code, and where possible isolate and replace platform specific code. The kernel is much more portable now than it was at v1.0.
                    Please don't be too attached to stuff, it comes and goes.
                    That's not my point at all. Primarily I'm trying to point out that new stuff is based on, or built on top of, old stuff. Writing portable code makes it adaptable, more robust, and more useful going forward. Throwing everything out and starting again, reinventing the wheel each time, is something that happens all the time in closed-source development, but it's not something to aspire to, and it doesn't obviously improve development time or reduce costs.
                    Last edited by s_j_newbury; 05-01-2017, 10:38 AM.

