Open-Source CPU Architecture Pulled Into Linux 3.1 Kernel

  • Open-Source CPU Architecture Pulled Into Linux 3.1 Kernel

    Phoronix: Open-Source CPU Architecture Pulled Into Linux 3.1 Kernel

    The latest feature to be pulled into the Linux 3.1 kernel is support for OpenRISC, an open-source CPU architecture...


  • #2
    Is it just me, or is anyone else starting to feel uncomfortable with the number of unnecessary drivers and features being added to the kernel? The kernel gets fatter and fatter with every release, and nearly everything "fattening" it is support for more devices.

    I'm definitely excited about and welcome out-of-the-box compatibility, but certain things, like support for this CPU or the Wiimote or Kinect, should be an optional download. It wouldn't surprise me if the kernel dropped over 100 MB in size if it only included what people actually used.



    • #3
      Originally posted by schmidtbag View Post
      Is it just me, or is anyone else starting to feel uncomfortable with the number of unnecessary drivers and features being added to the kernel? The kernel gets fatter and fatter with every release, and nearly everything "fattening" it is support for more devices.

      I'm definitely excited about and welcome out-of-the-box compatibility, but certain things, like support for this CPU or the Wiimote or Kinect, should be an optional download. It wouldn't surprise me if the kernel dropped over 100 MB in size if it only included what people actually used.
      Yes, it's getting fatter. But the parts your system doesn't use never even get loaded. Especially something like a new CPU architecture: if you compile an x86 kernel, absolutely no code for a different architecture's CPU ends up in it! The only "fat" you'll notice is the size of the git repo and of the kernel source downloads.
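
      A quick sanity check, assuming you have a 3.1-era kernel source tree handy (the commands are just a sketch): configure for x86-64 and note that no OpenRISC option so much as appears, because Kconfig only reads the Kconfig files for the architecture you selected.

        make ARCH=x86_64 defconfig    # generate a default x86-64 config
        grep -c OPENRISC .config      # prints 0 -- no OpenRISC symbols even exist for this arch
        # the build then only descends into arch/x86/, so no OpenRISC object code is produced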

      Yes, adding new hardware driver support does gradually increase the size of the kernel modules in general-purpose Linux distros, but distros are free to omit any drivers that they think are so marginally used that they are unnecessary. IIRC, Ubuntu recently nixed some network drivers that supported networking protocols that are used in less than 0.0000001% of currently-operating computers, and 0.0% of modern computers.

      I don't mind if the source code is bloated. Developers doing develop-y things have different constraints than people trying to use a compiled Linux distribution. Source code is allowed to be huge, and it's expected to be configurable so that you can pare it down to only what you need. Developers are supposed to have beefy rigs with lots of free disk space and a good Internet connection. I don't think it is wrong to continue to push the bloat of the sources, as long as we maintain configurability so that, at least, it is possible to pare down a kernel to be lean and mean.

      As long as it's possible, then it's up to the distros to maintain their .config files in a sane way, providing the best tradeoff of size, device compatibility and performance. And if you have a problem with the particulars of one distro's kernel build, take it up with the distro, not with upstream Linux.

      Back on the topic of the article: I'm really glad that people are still pushing forward with open hardware. Hopefully this open knowledge about hardware will spread to graphics processing, and in a few years we'll be able to purchase a video card with completely open hardware that's at least competitive with Intel IGPs of the time. Then it would be a no-brainer to write open source drivers for it, because you don't have to beg the manufacturer to release little tidbits of "sanitized" information about their "intellectual property" hardware (yes, AMD, I'm making fun of you and your ridiculous anti-competitive chess moves.)
      Last edited by allquixotic; 25 July 2011, 06:34 PM.



      • #4
        Originally posted by schmidtbag View Post
        Is it just me, or is anyone else starting to feel uncomfortable with the number of unnecessary drivers and features being added to the kernel? The kernel gets fatter and fatter with every release, and nearly everything "fattening" it is support for more devices.

        I don't mind newer stuff being added, but I would like to see a trimmed-down version of the code that strips out a ton of the legacy support for older hardware, if for nothing else than to pare down the config options so a person isn't barraged with ISA/MicroChannel/etc. items that are usually set to be built as modules for stuff that next to nobody uses anymore.



        • #5
          Originally posted by deanjo View Post
          I don't mind newer stuff being added, but I would like to see a trimmed-down version of the code that strips out a ton of the legacy support for older hardware, if for nothing else than to pare down the config options so a person isn't barraged with ISA/MicroChannel/etc. items that are usually set to be built as modules for stuff that next to nobody uses anymore.
          What would a "trimmed down version of the code" do that:

          1. editing the .config once (in your entire lifetime) to exclude all the subsystems and hardware you don't care about. Time investment: hours, but a one-time cost. Save your .config in your email, in your cloud storage locker, on your tape backup, or print it out on paper -- whatever. Just keep it.
          2. make oldconfig

          couldn't do?

          (size of source code download notwithstanding)
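
          Concretely, the whole workflow is something like this (paths are illustrative; make localmodconfig, available since 2.6.32, can do much of the initial trimming for you):

            # one-time: build a config limited to the modules this machine actually loads,
            # then trim further by hand and stash the result somewhere safe
            make localmodconfig              # or edit everything manually via 'make menuconfig'
            cp .config ~/my-kernel.config

            # for every later kernel release: reuse the saved config,
            # answering prompts only for options that are new since last time
            cp ~/my-kernel.config .config
            make oldconfig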



          • #6
            Originally posted by allquixotic View Post
            What would a "trimmed down version of the code" do that:

            1. editing the .config once (in your entire lifetime) to exclude all the subsystems and hardware you don't care about. Time investment: hours, but a one-time cost. Save your .config in your email, in your cloud storage locker, on your tape backup, or print it out on paper -- whatever. Just keep it.
            2. make oldconfig

            couldn't do?

            (size of source code download notwithstanding)

            It is hardly a one-time cost. Not everybody is limited to one config. Every good project should go through trim-the-fat stages and refactoring over its lifetime, and a smaller, more manageable code base makes that easier to do. make oldconfig also carries the risk of still using deprecated options instead of their modern replacements, which is a bad habit to get into in an environment as volatile and fast-changing as the Linux kernel. Keeping ancient support doesn't improve the maintainability of the code either. The gain from keeping the old hardware support is next to nonexistent, but it does increase the maintenance burden.
            Last edited by deanjo; 25 July 2011, 10:00 PM.



            • #7
              Originally posted by deanjo View Post
              It is hardly a one-time cost. Not everybody is limited to one config. Every good project should go through trim-the-fat stages and refactoring over its lifetime, and a smaller, more manageable code base makes that easier to do. make oldconfig also carries the risk of still using deprecated options instead of their modern replacements, which is a bad habit to get into in an environment as volatile and fast-changing as the Linux kernel. Keeping ancient support doesn't improve the maintainability of the code either. The gain from keeping the old hardware support is next to nonexistent, but it does increase the maintenance burden.
              What you consider "ancient" for desktop usage is just a spring chicken for enterprise (and especially government) users. I worked a job last year where acquiring a mainframe with specs comparable to an average gaming desktop from 1996 would be considered "state of the art" (relative to what they had before). Why do they use such old hardware?

              Well, for many many reasons, but #1 is that the ancient hardware is extremely well-understood, and so you can deploy extremely secure operating systems that run on top of it, using semi-formal methods to prove the device drivers and hardware correct to a very high degree of certainty. And it's entirely possible that they will want to choose Linux as their operating system base; certify it to their satisfaction; and deploy it to production. But when they begin the certification process, they may as well use the latest code: first of all, the code will be old by the time certification is complete; second of all, they do enough verification and validation internally that even if the original authors consider the code to be "bleeding edge", it won't be by the time they're done with it.

              If we start pulling code out of Linux after it almost universally stops being used in desktop computers for, say, 5 years, anyone who is trying to take this kind of extremely conservative approach will not be able to use Linux (or they'll have to use a very old version, which is to no one's advantage, because they may ask kernel developers to support an old version, and nobody likes doing that).

              The "maintenance burden" as you call it is basically this: as the internal kernel APIs change, you need to make sure that you don't break all the drivers that call those APIs. I think it is perfectly valid to expect anyone who intends to change an API to understand it well enough to fix any drivers that they break: at a bare minimum, they should maintain the same level of functionality as was present before their change. If they manage to enhance functionality, that's great, but no one can expect that an API revisionist is going to go out of his/her way to improve every driver out there.

              Eventually, that maintenance burden will either increase the number of developers required to introduce API breaks; or, increase the amount of time it takes a single developer to do the same. That isn't such a bad thing; many other successful operating systems hardly ever change their kernel APIs, and certainly do so much less often than Linux.

              The only way to reasonably manage this maintenance burden is to reduce the rate of change of central kernel APIs that affect a lot of drivers. I don't consider the removal of working hardware support for old devices to be a valid solution.

              Anyway, the maintenance burden on the kernel maintainers is probably the strongest argument in favor of removing old drivers, and I'm sure those voices will get louder as time goes on, despite the fact that those old drivers will still have some users.

              What is not a strong argument is this whole thing about the burden on users or hobbyists who compile their own kernel. I mean, come on. If you're too lazy to maintain a selective .config, you really have no business compiling a kernel in the first place.

              But, happily, there is something that I think can be done to mitigate the whining from end-users and hobbyists about source bloat. It won't ease the maintenance burden for the kernel maintainers (in fact, it will only increase their burden), but it could definitely cut down on the source download size.

              What would that be, you ask? My proposal is to do to the kernel what the X.Org guys did to the X server: modularize it.

              So there'd be a central Linux repository that contains no hardware drivers, just core OS code, headers, build scripts, and infrastructure shared by all hardware.

              Then there'd be architecture-specific repositories, so you can just check out the x86 repo or the x86-64 repo to get your desired architecture files. So never again will you have to look at source files for S/390 or Motorola 68k or OpenRISC if you just use an x86-64 CPU.

              Then you'd have two repositories for each major subsystem (gpu, net, char, etc etc): one that contains drivers for hardware that is still "reasonably relevant" (as determined arbitrarily by the subsystem maintainer), and one that contains drivers for hardware that is "obsolete" (again, arbitrarily judged by the maintainer).

              So if you wanted to only check out sources for a modern x86-64 desktop, the likes of which Michael reviews on Phoronix, you would have to grab several repos:

              1. the base Linux repo
              2. the x86-64 arch repo
              3. the non-obsolete ("current") subsystem repos for the subsystems you care about, like net, gpu, or whatever else there is. If you don't use SD cards, you could completely skip that subsystem. If you don't use telephony devices, you could skip that subsystem. Defining the correct granularity of the repositories would be a practical balance: we don't want a ridiculous explosion of repositories, but we want it to be fine-grained enough that people don't get tens of megabytes of source that they'll never in a hundred years want to use.

              For those who like tarballs, Linus would just tarball up each subsystem repo and release them in a directory for each RC and stable release of the kernel. So instead of a 100+ MB linux-3.0.0.tar.bz2, you'd have 20 or 30 tarballs ranging from a couple hundred kilos to a couple megs.
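
              Purely to illustrate the idea (none of these repositories or tarballs exist; every name and URL below is made up), grabbing the sources for such a desktop might look like:

                # hypothetical layout -- invented for illustration only
                base=https://kernel.example.org/v3.1
                wget $base/linux-core-3.1.tar.bz2
                wget $base/linux-arch-x86_64-3.1.tar.bz2
                wget $base/linux-current-net-3.1.tar.bz2
                wget $base/linux-current-gpu-3.1.tar.bz2
                for t in linux-*-3.1.tar.bz2; do tar xjf "$t"; done   # all unpack into one linux-3.1/ tree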

              The additional work of the repos wouldn't be too hectic for developers or users (nothing some good scripts can't solve), but the main problem would be that you'd lose commit atomicity across repos: if you're making a kernel API change, you can't make an atomic commit across 4 or 5 git repos to touch all the subsystems that are affected by your change. That would kind of suck, and is a severe limitation of this approach from a developer's perspective.

              Basically I don't have any really good answers (I don't even think the modularity is a particularly good idea upon reflection, but it was worth exploring). The only "safe" answer I can think of is to keep supporting the ancient hardware forever, and just suck it up and download your huge monolithic kernel tarball -- i.e., business as usual.



              • #8
                Originally posted by allquixotic View Post
                What you consider "ancient" for desktop usage is just a spring chicken for enterprise (and especially government) users. I worked a job last year where acquiring a mainframe with specs comparable to an average gaming desktop from 1996 would be considered "state of the art" (relative to what they had before). Why do they use such old hardware?
                And how many of those systems are running a kernel remotely close to the current one? Chances are none. They are probably (if running Linux at all) on a 2.4 or even older kernel. Most of those systems are pretty much autonomous and rarely see any change. Hell, I used to maintain an ILS system for a small city airport that ran on a Tandy Model III up until 1994. Eventually, however, the cost exceeds the benefit. Many of my clients have recently moved on from ancient VAXes and AS/400 systems; it got to the point where keeping them was no longer justified ("green footprint"-wise, financially, and administratively).

                Well, for many many reasons, but #1 is that the ancient hardware is extremely well-understood, and so you can deploy extremely secure operating systems that run on top of it, using semi-formal methods to prove the device drivers and hardware correct to a very high degree of certainty. And it's entirely possible that they will want to choose Linux as their operating system base; certify it to their satisfaction; and deploy it to production. But when they begin the certification process, they may as well use the latest code: first of all, the code will be old by the time certification is complete; second of all, they do enough verification and validation internally that even if the original authors consider the code to be "bleeding edge", it won't be by the time they're done with it.
                Many such systems can be replaced at a fraction of the cost with more modern equivalents. As mentioned above, many of my clients migrated off VAXes and AS/400s (as well as OS/2 and others). Most of them replaced these huge furnaces with a single off-the-shelf machine that absolutely obliterates their old systems in performance. Also, many of the people who are qualified on and familiar with these old systems are gradually dropping out of the workforce, making it even harder to find qualified personnel to help should issues or expansion needs arise.

                Distros themselves see the ever-decreasing demand for these old systems and adjust their baseline minimum specs accordingly. At one time many distros offered builds for PPC, Alpha, <insert defunct arch>, but as time went on that demand tapered off to near nonexistence. They no longer want to maintain them, and app developers have no interest in maintaining their applications for those old systems either. In the case of Linux, many old bugs never really get addressed and fixed anyway. There is a busload of old hardware bugs that are likely never to be fixed (just visit ALSA's bug tracker for many prime examples).

                If we start pulling code out of Linux after it almost universally stops being used in desktop computers for, say, 5 years, anyone who is trying to take this kind of extremely conservative approach will not be able to use Linux (or they'll have to use a very old version, which is to no one's advantage, because they may ask kernel developers to support an old version, and nobody likes doing that).
                Five years would be too short; ten years, however, would be well within a reasonable time period. Even LTS distros span that long (again, the cost outweighs the benefit).

                The "maintenance burden" as you call it is basically this: as the internal kernel APIs change, you need to make sure that you don't break all the drivers that call those APIs. I think it is perfectly valid to expect anyone who intends to change an API to understand it well enough to fix any drivers that they break: at a bare minimum, they should maintain the same level of functionality as was present before their change. If they manage to enhance functionality, that's great, but no one can expect that an API revisionist is going to go out of his/her way to improve every driver out there.
                I have to disagree with you there. If a change to an API breaks hardware that maybe 100 people on Earth still use (if you are lucky), then the developers should be free to remove that ancient support, as the impact is minimal. As we see with the 2.4 kernel, which is still widely used, those people can still run that older iteration of the kernel, which has the support (and probably still in a functional state).

                Eventually, that maintenance burden will either increase the number of developers required to introduce API breaks; or, increase the amount of time it takes a single developer to do the same. That isn't such a bad thing; many other successful operating systems hardly ever change their kernel APIs, and certainly do so much less often than Linux.
                Not any prominent OS that is still in wide deployment other than Linux. Keeping the old code rarely brings in more developers, either; it simply sits as stagnant code that only a rare few can even verify still works against hardware on hand.

                The only way to reasonably manage this maintenance burden is to reduce the rate of change of central kernel APIs that affect a lot of drivers. I don't consider the removal of working hardware support for old devices to be a valid solution.
                That would be a start, but it still doesn't justify maintaining old hardware support whose functionality the developers can no longer verify.

                Anyway, the maintenance burden on the kernel maintainers is probably the strongest argument in favor of removing old drivers, and I'm sure those voices will get louder as time goes on, despite the fact that those old drivers will still have some users.

                What is not a strong argument is this whole thing about the burden on users or hobbyists who compile their own kernel. I mean, come on. If you're too lazy to maintain a selective .config, you really have no business compiling a kernel in the first place.
                That is a bit of an elitist view.

                But, happily, there is something that I think can be done to mitigate the whining from end-users and hobbyists about source bloat. It won't ease the maintenance burden for the kernel maintainers (in fact, it will only increase their burden), but it could definitely cut down on the source download size.
                Download size is the least of the concerns here, especially when patch sets deliver the same code at a fraction of the size of a full tarball (and the patches are what you are more likely to download anyway if you have a slow net connection or, heaven forbid, dial-up).
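
                For example, moving a 3.0 tree up to 3.0.1 only needs the incremental patch, which is tiny next to the full tarball (the mirror path and version numbers below are only illustrative):

                  cd linux-3.0
                  wget https://www.kernel.org/pub/linux/kernel/v3.0/patch-3.0.1.bz2
                  bzcat patch-3.0.1.bz2 | patch -p1    # stable patches apply against the 3.0 base tree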

                What would that be, you ask? My proposal is to do to the kernel what the X.Org guys did to the X server: modularize it.

                So there'd be a central Linux repository that contains no hardware drivers, just core OS code, headers, build scripts, and infrastructure shared by all hardware.

                Then there'd be architecture-specific repositories, so you can just check out the x86 repo or the x86-64 repo to get your desired architecture files. So never again will you have to look at source files for S/390 or Motorola 68k or OpenRISC if you just use an x86-64 CPU.

                Then you'd have two repositories for each major subsystem (gpu, net, char, etc etc): one that contains drivers for hardware that is still "reasonably relevant" (as determined arbitrarily by the subsystem maintainer), and one that contains drivers for hardware that is "obsolete" (again, arbitrarily judged by the maintainer).

                So if you wanted to only check out sources for a modern x86-64 desktop, the likes of which Michael reviews on Phoronix, you would have to grab several repos:

                1. the base Linux repo
                2. the x86-64 arch repo
                3. the non-obsolete ("current") subsystem repos for the subsystems you care about, like net, gpu, or whatever else there is. If you don't use SD cards, you could completely skip that subsystem. If you don't use telephony devices, you could skip that subsystem. Defining the correct granularity of the repositories would be a practical balance: we don't want a ridiculous explosion of repositories, but we want it to be fine-grained enough that people don't get tens of megabytes of source that they'll never in a hundred years want to use.

                For those who like tarballs, Linus would just tarball up each subsystem repo and release them in a directory for each RC and stable release of the kernel. So instead of a 100+ MB linux-3.0.0.tar.bz2, you'd have 20 or 30 tarballs ranging from a couple hundred kilos to a couple megs.
                This would be a good solution. It could also serve as a good barometer of who is using Linux on what hardware.

                The additional work of the repos wouldn't be too hectic for developers or users (nothing some good scripts can't solve), but the main problem would be that you'd lose commit atomicity across repos: if you're making a kernel API change, you can't make an atomic commit across 4 or 5 git repos to touch all the subsystems that are affected by your change. That would kind of suck, and is a severe limitation of this approach from a developer's perspective.

                Basically I don't have any really good answers (I don't even think the modularity is a particularly good idea upon reflection, but it was worth exploring). The only "safe" answer I can think of is to keep supporting the ancient hardware forever, and just suck it up and download your huge monolithic kernel tarball -- i.e., business as usual.
                I'm not a fan of keeping the status quo. To me it is just as bad as package developers who never remove old dependencies ("requires") that serve no real purpose anymore.



                • #9
                  Originally posted by deanjo View Post
                  There is a busload of old hardware bugs that are likely never to be fixed (just visit ALSA's bug tracker for many prime examples).
                  Wrong anecdote: I've never seen an ALSA driver bug get fixed, no matter how new the HW it was filed for.


                  If download size matters to you, just use git like Linus says. Perhaps you could also use an old tarball as the base and fill in the rest with rsync.
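
                  The git half of that is simple enough; a rough sketch (adjust the path to whatever your mirror calls the mainline tree; --depth keeps the initial download small by skipping old history):

                    git clone --depth 1 git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
                    cd linux
                    git pull    # later updates only transfer the new objects, not the whole tree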



                  • #10
                    Originally posted by curaga View Post
                    Wrong anecdote: I've never seen an ALSA driver bug get fixed, no matter how new the HW it was filed for.
                    Hehe, yeah. They are just too overworked.

