A Massive ARM v6/v7 Rework Is Landing With Linux 4.5 Plus Raspberry Pi 2 Support

  • #11
    Originally posted by schmidtbag View Post
    Understood - I'm aware of the things you mentioned, but there are very simple solutions for that. For example, the kernel could come with all [open source] drivers for all ARM platforms, and there could be a very early-stage init script (maybe even implemented into uboot or whatever) of some sort that loads a config file (such as uEnv.txt) custom to whichever platform you're using. This would define how each piece of hardware is handled. I don't see why something like that would be hard to implement. Everyone could use the same kernel build, each distro could supply one generic disk image, and all users have to do is copy some config file that specifies what their platform has and how it should behave. But like I said, if it were that simple it should've been done a long time ago. I'm guessing there's something much more complicated that is getting in the way, because to me this solution is a little too obvious.

    I do understand, however, that some of the closed-source drivers may severely complicate this.
    I don't think the closed-source drivers are the issue (as far as the time taken to get all the ARMv7/v6 platforms ported over to the multi-platform kernel goes). From the upstream kernel's perspective, those drivers simply do not exist.

    It was more the case that there were a lot of different ARM platforms in-tree which needed to be ported over to the new way of doing things. But most of the more active/supported ARM SoCs have been multi-platform for quite a long time already.

    Comment


    • #12
      I was very disappointed when I bought my first ARM board and discovered that not all ARM hardware is compatible; I had just assumed it would be the same as with x86.

      In theory it could make sense to use a platform-specific, close-to-the-metal codebase, but in practice that has proven not to work: it takes a lot of good low-level developers, which only big companies can afford. And because ARM software is in general free, it doesn't make sense; you can only charge for support, and charging for support means big-company territory - big servers - and ARM servers are still a niche.

      You could say that if any market is suited to being driven by the community, it is the ARM one: small SoC makers charge only for the hardware and don't have the resources for big software development, but each could contribute a small piece of code for everyone, and everyone would be happy.

      The future could be interesting, but it is more and more obvious that it lies in Android: a universal kernel and universal apps for ARM and x86 that could be used on mini-PCs (I think half of the media-box market already belongs to Android), desktops (we just need a few things like the Remix OS GUI, multi-monitor support, virtualisation and Wine), phones (already happened), tablets (already happened) and so on - everything except servers, where Linux or Unix will prevail.

      Comment


      • #13
        ruthan Well, this is why Android is a VM. Since most servers are set up as VMs, that also coincidentally works in ARM's favor. Running a bare-metal Linux desktop has always been troublesome on ARM because (at least from the developers' perspective, it seems) nobody wants to do that. Which is pretty stupid when you think about it: the Raspberry Pi became one of the (if not the) best-selling computer platforms, and the reason was that people wanted a cheap Linux desktop or media center. It wasn't even meant for that, and it wasn't good at it either, but people still wanted it anyway. There's obviously a great market for it, but apparently Android and servers are more cost-effective due to less maintenance.

        Comment


        • #14
          Does that mean the Raspberry Pi 2 will be able to boot and run without any proprietary blobs/firmware/etc.? I always have trouble figuring that out from articles in the ARM space, especially when trying to track over time how much closer it gets to that goal.

          Comment


          • #15
            Originally posted by schmidtbag View Post
            but unfortunately it seems ARMv8 wasn't involved.
            ARMv8 is also known as AArch64. ARMv6/ARMv7 are 32-bit. It's going to be quite difficult to have a kernel which is both 32- and 64-bit (though the 64-bit kernel can run 32-bit user-space programs - there is a config option for that). This is similar to the 32/64-bit split in the x86 world.

            The AArch64 platform is already multiplatform. As you can see here, there is no mach-*, no plat-* and everything is already neatly organised to use the DTS provided in boot/dts.

            Originally posted by schmidtbag View Post
            Understood - I'm aware of the things you mentioned, but there are very simple solutions for that. For example, the kernel could come with all [open source] drivers for all ARM platforms, and there could be a very early-stage init script (maybe even implemented into uboot or whatever) of some sort that loads a config file (such as uEnv.txt) custom to whichever platform you're using. This would define how each piece of hardware is handled.
            No, it doesn't work that way. The kernel has to know very early some specific information about the SoC itself, such as:
            • the kind of bus it uses to connect all the SoC's internal components (every vendor has its own bus, and even a single vendor can have different buses or different variations of the same bus)
            • how the critical clocks should be handled (they might be needed to configure access to the RAM, for example)
            • what the internal address space is and how it should be handled
            • which pins control what (setting some pins to a specific value may change the way the processor works).
            And so on.

            It's not just a "what driver should I load" problem. In the old days, every platform had its clock driver, its bus driver, its DMA driver... hardcoded. Changing this and adding new platforms is difficult - solving that is the goal of the device tree, but adding such a large subsystem and converting all the existing platforms to the new scheme is a very, very long task (have you ever counted the number of lines in the arch/arm subtree?).
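
            For illustration only, here is a rough sketch of the kind of device tree description that replaces those hardcoded board files. The "acme" SoC, its compatible strings and addresses are invented for this example; "fixed-clock" and "simple-bus" are real generic bindings, and a fragment like this is what gets compiled with dtc and handed to the kernel at boot:

                /dts-v1/;

                / {
                    compatible = "acme,example-board", "acme,example-soc"; /* made-up names; identifies the board and the SoC */
                    #address-cells = <1>;
                    #size-cells = <1>;

                    osc24m: osc24m {
                        compatible = "fixed-clock";      /* one of the critical clocks */
                        #clock-cells = <0>;
                        clock-frequency = <24000000>;
                    };

                    soc {
                        compatible = "simple-bus";       /* how the on-chip peripherals are reached */
                        #address-cells = <1>;
                        #size-cells = <1>;
                        ranges;                          /* 1:1 mapping into the CPU address space */

                        uart0: serial@1c28000 {
                            compatible = "acme,example-uart"; /* made-up binding; drivers are matched against this string */
                            reg = <0x01c28000 0x400>;         /* where it sits in the address space */
                            clocks = <&osc24m>;
                            status = "okay";
                        };
                    };
                };

            The kernel parses this description very early in boot and only then knows which bus, clock and pin-control drivers to bring up - exactly the information that used to be hardcoded in each platform's board files.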

            BR,

            -- Emmanuel Deloget

            Comment


            • #16
              Originally posted by AJenbo View Post

              Probably not feasible with the Pi 1 not having hard float.
              Pi1 is not an ARMv8 (it's an ARMv6) and it HAS hard float.

              Comment


              • #17
                Originally posted by schmidtbag View Post
                I was thinking the same. It's ridiculous how long it's taking. Though there may be some huge technical hurdle, I find it really hard to believe that it would be this complicated.

                I think the kernel change mentioned in the article is a good step forward, but unfortunately it seems ARMv8 wasn't involved.

                The reason it is so much easier on x86 than on ARM to have one OS that works everywhere is that back in the 1980s, when Intel and IBM designed the PC standard that all modern x86 computers are based on (and which made x86 popular in the first place), they designed it very well and standardised many different aspects of the hardware, including busses, protocols, firmware, and, most importantly, a way to discover/probe hardware. This means that every PC-compatible computer out there has standard firmware (BIOS, now also EFI), plus common busses, protocols and standards for discovering what else there is in the system. The OS kernel can probe those and detect everything it needs, figure out what drivers to load, how to initialise and use the hardware available, etc.

                On platforms like ARM, none of that is available. Sure, there are some common standards for certain things (like OpenFirmware Device Trees, and the most primitive and basic hardware functions like RAM/ROM access, etc.), but almost everything else is undefined and platform-specific. Everyone does their own thing in terms of how the boards are designed and how everything is wired up. This is the main reason it is so complex, and why, when configuring ARM kernels, you have to select support for specific boards, and why support for individual boards has to be implemented in the first place. In many cases, ARM platforms do not even have probe-able busses and protocols, so there might not be any way at all for the kernel to dynamically discover what hardware is available to it.

                It used to be (I don't know if it still is; I haven't kept up to date on these technical details) that, on ARM, the OS kernel cannot even discover how much RAM there is in the system (and hence how much usable memory it has available); that information needs to be stored somewhere and provided to it via something like the device tree.

                In comparison, on x86, the first thing that happens when the kernel boots is probing all the hardware and figuring out all this information.
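
                For what it's worth, the RAM layout is indeed one of the things the device tree carries on ARM: the base address and size are written into a memory node rather than probed. A minimal sketch, with made-up numbers (512 MiB starting at 0x40000000), would look roughly like this:

                    / {
                        #address-cells = <1>;
                        #size-cells = <1>;

                        memory@40000000 {
                            device_type = "memory";
                            /* RAM base address and size - the kernel reads this instead of probing */
                            reg = <0x40000000 0x20000000>;   /* example: 512 MiB at 0x40000000 */
                        };

                        chosen {
                            /* the bootloader can also pass the kernel command line this way */
                            bootargs = "console=ttyS0,115200";
                        };
                    };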

                Comment


                • #18
                  Originally posted by tajjada View Post
                  The reason it is so much easier on x86 than on ARM to have one OS that works everywhere is that back in the 1980s, when Intel and IBM designed the PC standard that all modern x86 computers are based on (and which made x86 popular in the first place), they designed it very well and standardised many different aspects of the hardware, including busses, protocols, firmware, and, most importantly, a way to discover/probe hardware. This means that every PC-compatible computer out there has standard firmware (BIOS, now also EFI), plus common busses, protocols and standards for discovering what else there is in the system.
                  This also means x86 chips still boot in real mode and have to support the 16-bit ISA. Talk about legacy.

                  Comment


                  • #19
                    Originally posted by ruthan View Post
                    The future could be interesting, but it is more and more obvious that it lies in Android: a universal kernel and universal apps for ARM and x86 that could be used on mini-PCs (I think half of the media-box market already belongs to Android), desktops (we just need a few things like the Remix OS GUI, multi-monitor support, virtualisation and Wine), phones (already happened), tablets (already happened) and so on - everything except servers, where Linux or Unix will prevail.
                    Remix is never going to go anywhere. The problem with it is that it is a CLOSED SOURCE fork of AOSP. As far as I can tell, it also does NOT have anything added to support multi-monitor systems. What is really needed is to have similar functionality added to AOSP. Virtualization isn't particularly necessary on Android from a desktop/end-user point of view, though it could be useful from a power-user point of view (i.e. the Android emulator is a VM). Being able to run a virtualized Windows is NOT useful for the LONG term, since Android really is going to replace Windows, considering that it OUTSELLS Windows and even MS has given in and started writing software for Android. For similar reasons, WINE is not going to be useful long term. In fact, WINE is pretty much useless already: nothing *useful* actually works well enough in it to... use. It seems to be mostly a toy for playing video games, which, more and more, will be written for ANDROID.

                    Your last sentence is way off base, saying that Linux/Unix will "prevail" on servers (i.e., vs Android). I understand what you *mean* by that, but keep in mind that Android is as much Linux as CentOS is. It's obviously a different distribution, with a completely different userspace, but they are both Linux.

                    And for that matter, long term, it isn't that much of a stretch to imagine Android servers. It comes down to packaging the services that you need for Android. I *already* run an OpenSSH **SERVER** on my Android devices. It's just a heck of a lot easier to deal with than ADB. Want to run Apache? MariaDB? Nothing is stopping you; they'll all run perfectly on Android if you spend a few days packaging them up properly. Note that I am *NOT* referring to running a "debian chroot" on Android. I am actually referring to installing and running those particular binaries on Android *proper*.

                    Comment


                    • #20
                      Originally posted by willmore View Post
                      So, solving this issue is either (inclusive or):
                      1) Someone holding a strong whip hand to force the companies into line (ARM could do this if they cared, but it would cost them)
                      A very strong claim in those parentheses. It's also unsubstantiated.

                      Why would a common low-level specification cost ARM? Is it because the big corps wouldn't obey it anyway and would go their own route? That wouldn't make much sense, considering UEFI somehow manages to exist. This giant ARM rework seems indicative of a large desire, seemingly mutual among the big corps that contributed, to boot an upstream kernel, as opposed to doing the bare minimum at the low level and tossing an Android VM on top. It's much simpler to have a spec already lined up than to do the legwork yourself.

                      If ARM created a unifying low-level spec for firmware initialization and it just so happened to bite them in the end, then what would it cost them?

                      Regardless of which path is taken, there will be a point where it's easier for the users of these chips to 'fall in line' with the kernel's new way of doing things, and integrating those changes should get easier. It'll become easier and easier to 'just make it work' and to 'do it the right way' at the same time.
                      Which is clearly why ARM would have nothing to lose by developing a unified boot spec. The big corps do literally zero work outside of implementing it. They have everything to gain and nothing to lose (outside of not being able to do what Intel already does with its creepy low-level WiFi guff should ARM not go that route, but I digress).

                      Comment
