Google's Fuchsia OS Magenta Becomes Zircon


  • #21
    Originally posted by Creak View Post
    I'm pretty disappointed by this news, actually. It feels like Linux failed somehow.
    I'm pretty happy about this news, because finally there is hope. Linux is full of "gorgeous C", which means hard-to-read, hard-to-maintain code (raw pointers).
    There's simply too much C in Linux. Same story with the "modern" Wayland: it's a horrible C library, hard to use, and it forces its "dangerous" paradigm on you. One will say "but Wayland is just a protocol". Sadly not: you are forced to use their libraries (libwayland-client) if you want to write a Wayland client that runs on Weston.

    The big problem with Linux is that it has a ton of C programmers who don't understand the power and use of C++, and on top of that they are so ignorant that they are even proud of "not liking C++". All the famous bugs and leaks (Heartbleed) are caused by this gorgeous C. With all my experience over the years I came to discover a quasi-joke: it sounds like a joke but it's not.

    Q: What is worse than a C beginner?
    A: An advanced C programmer!

    Comment


    • #22
      If this replaces Android, that would mean no more Linux on mobile devices. Driver support is far from perfect already, and with this it would be even less attainable; in my opinion, not a good thing. I still hope to run a mainline kernel on my smartphone one day...

      Comment


      • #23
        I have the same opinion as jrch2k8:

        The main advantage for Google is going to be the drivers.
        This is a micro-kernel system, i.e. a kernel kept as tiny as possible that mostly handles IPC, with nearly everything else running as servers.
        That means that all the proprietary, device-specific crap can go into a couple of independent closed-source binary servers.
        If Google wants to upgrade the system, they can upgrade any of the servers providing the base parts independently. They can even upgrade the micro-kernel itself.
        And then everything will continue to work the same, even the closed-source drivers, as long as the IPC API is still compatible.
        There is no ABI dependence.
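        The driver-server idea above can be sketched in a few lines of C. This is a toy illustration, not actual Zircon code: all the names and the message format here are invented, and real Zircon IPC goes through kernel channel objects. The point it shows is that the only contract between the vendor's closed-source server and the rest of the system is the message format, so the kernel underneath can change freely.

        ```c
        /* Toy sketch of an IPC-only driver contract. The names (msg_op,
         * driver_server_handle) are made up for illustration; the real
         * system's interface is defined by its IPC protocol, not this. */
        #include <stdio.h>

        /* The "wire protocol": the only thing clients and driver share. */
        enum msg_op { OP_READ = 1, OP_WRITE = 2 };

        struct msg {
            enum msg_op op;
            int arg;
        };

        /* A closed-source vendor server only has to keep honoring struct
         * msg; the kernel and other servers can be upgraded underneath it. */
        static int driver_server_handle(const struct msg *m)
        {
            switch (m->op) {
            case OP_READ:  return m->arg * 2;  /* pretend device read */
            case OP_WRITE: return 0;           /* pretend device write */
            default:       return -1;          /* protocol mismatch */
            }
        }

        int main(void)
        {
            struct msg m = { OP_READ, 21 };
            printf("%d\n", driver_server_handle(&m));  /* prints 42 */
            return 0;
        }
        ```

        As long as both sides agree on `struct msg`, there is nothing like a `.ko` compiled against one specific kernel version in the picture.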

        Compare that with the situation now:
        An ARM platform manufacturer (usually a Chinese assembler of smartphone motherboards) will clone whatever kernel version happens to be in the most popular Android version du jour and slap proprietary binary drivers on it (GPU, etc.).
        Four years later, you're still stuck with kernel 3.4.x because that was the one shipped by the manufacturer, even if kernel 4.14 is available by then.
        This old version is by now riddled with exploitable bugs, but Google can't do much, as they don't control all parts of the monolith.

        Manufacturers are currently freezing platforms to specific kernel versions and preventing Google from even being able to fix these problems.

        By going Fuchsia / Zircon, Google is moving from a situation where upgrades mean "find a magical way to upgrade the kernel while keeping this .ko that was written against kernel 3.4.108" to the equivalent of "upgrade the kernel while keeping this Apache server functional".

        Google can finally, easily be in charge of which kernel version (or in this case, which version of the system servers) their Android upgrades are going to run on.


        BUT

        this whole thing has also an ultra-massive problem.

        Fuchsia is not Linux. (Well, obviously...)

        The industry at every scale (from IoT through smartphones all the way up to high-performance clusters) has a massive investment of experience in the Linux kernel.
        It's going to be hard to persuade smartphone manufacturers to switch to an entirely new and unknown kernel.
        It's going to be hard to find devs with experience in a kernel unlike anything else used in the industry.

        (That would be like trying to get smartphone hardware manufacturers to write drivers for Windows 8 Mobile instead of Linux, and we saw how well that went.)

        Comment


        • #24
          I didn't check the code myself, but it's said to use an object-capability security model. Linux is not in the same ballpark. It's time Linux was replaced by a truly secure system, and this might be it. There are plenty of reasons to want to replace Linux everywhere as soon as it becomes feasible, so let's hope Google succeeds. No more complicated security context hell that leads many sysadmins to simply disable security altogether. No more bloated monolithic kernel. Finally being able to run untrusted code (aka all code, including your own, really) with confidence it just can't escalate privileges.
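          For what an object-capability model means in practice, here is a minimal C sketch under invented names (nothing here is real Zircon API): every operation requires an unforgeable handle that carries its own rights, so there is no ambient "am I root?" authority a process could escalate to.

          ```c
          /* Toy object-capability sketch. struct handle, RIGHT_*, and
           * cap_write are all invented for illustration. */
          #include <stdio.h>

          #define RIGHT_READ  0x1u
          #define RIGHT_WRITE 0x2u

          struct handle {
              int object_id;    /* which object this capability designates */
              unsigned rights;  /* what its holder may do with it */
          };

          /* Authority travels with the handle, not with the caller's
           * identity: no handle with write rights, no write. */
          static int cap_write(struct handle h, int value)
          {
              if (!(h.rights & RIGHT_WRITE))
                  return -1;
              printf("wrote %d to object %d\n", value, h.object_id);
              return 0;
          }

          int main(void)
          {
              struct handle rw = { 7, RIGHT_READ | RIGHT_WRITE };
              struct handle ro = { 7, RIGHT_READ };

              printf("%d\n", cap_write(rw, 42));  /* 0: allowed */
              printf("%d\n", cap_write(ro, 42));  /* -1: read-only handle */
              return 0;
          }
          ```

          Sandboxing then falls out of not handing a process the capabilities in the first place, rather than out of a security-context layer bolted on afterwards.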

          I say about damn time.

          Now let's hope programming platforms evolve in the same direction. Yes, Rust, I'm looking at you.

          Comment


          • #25
            Originally posted by unixfan2001 View Post

            Good luck with that!

            Desktop Linux supports all sorts of hardware; this kernel doesn't. I can't see the likes of NVIDIA, AMD and Intel investing time and money to prop up driver development for another kernel. The fact that those drivers sit in user space wouldn't decrease the work, I'm afraid.
            This. It won't replace Linux in general, but it most likely could on Android. Phones and tablets have a limited range of hardware and combinations anyway, so it doesn't matter that Google would never implement hardware support for remotely as many devices as Linux supports now.

            Comment


            • #26
              Originally posted by LEW21 View Post
              which makes sandboxing and containers difficult to implement securely. And it contains a lot of concepts - like user accounts - that are not relevant or not flexible enough in some cases - like containers.
              Somebody has definitely not been paying attention to what LXC can achieve.
              Easily, thanks to all the facilities built into the current kernel.

              Microkernel will make it possible to turn them off, limit access, or switch for other implementation on a per-process basis.
              Nope, a micro-kernel has nothing to do with "switching implementations".
              Micro-kernels do give you ONE POSSIBLE way to help with that (running several concurrent system servers, each providing a different implementation).

              But nothing has ever prevented Linux from having multiple implementations:
              - each ext[n] driver is backward compatible with the older on-disk formats, meaning you can access an ext2 filesystem with any kernel module of your choosing: ext2.ko, ext3.ko or ext4.ko (well, for as long as they exist; the older ones are progressively being phased out).
              - packet filtering is currently handled by two different facilities in the kernel: you can use either the old-school iptables or the newer nftables.
              - at some point in history, one could find both OSS and ALSA audio drivers.
              etc.

              Of course, due to some technical limitations (e.g. the ext filesystems aren't clustering filesystems, so a partition cannot be opened for read and write by several drivers at the same time), it might not be possible to have different programs each using a different implementation at the same time.
              But that limitation applies equally, no matter whether the implementations are .ko modules in kernel space or system servers in user space.

              (I.e. even today you can't point several FUSE filesystem drivers at the same non-clustering filesystem, even though FUSE isn't a monolithic kernel driver but relies on user-space servers just like a micro-kernel does.)

              Fortunately, Google accepted this responsibility.
              Google is a dwarf compared to the huge crowd of actors all pooling their efforts into the Linux kernel.


              Comment


              • #27
                Kinda fun to read all the optimism about microkernels in here.
                I used to be like that myself.

                Let's be honest, though: there is a reason all major operating systems are either monolithic (Linux and most BSDs) or hybrid (Windows, macOS, DragonFly BSD, Haiku).
                A pure microkernel design is quite a bit more difficult to comprehend and reason about.

                GNU Hurd got almost nowhere. MINIX isn't exactly prospering outside Intel Management Engine development. I'd argue that the only ever really successful microkernel was the Amiga kernel, and that one is a rather atypical microkernel.

                Comment


                • #28
                  Originally posted by jntesteves View Post
                  Finally being able to run untrusted code (aka all code, including your own, really) with confidence it just can't escalate privileges.
                  That may sound cool in theory, but no such implementation exists and probably won't for a long, long while (and trust me, they have been tried through the years; microkernels are nothing new or revolutionary).

                  I know microkernel theorists make it all bunnies and rainbows where everything is secure and possible, but through the years they have all failed (including several military-specific implementations), because neither the hardware (again, even military-specific hardware was tried through the ages) nor the software is 100% perfect in practice.

                  The only microkernel-hybrid uses that have proven successful over time are better ABI stability (to accommodate horrible closed user-space drivers) and certain speed advantages when implementing new subsystems, because the drivers are managed externally <-- the main reason Google is looking at this, I guarantee it.

                  Even Rust can't and won't guarantee such a thing as "confidence it just can't escalate privileges", because it is simply not possible. It can help mitigate some cases, but never fix them, and in some cases you can even escalate privileges from the hardware without OS intervention or knowledge, just so you know (again, neither hardware nor software can be infallible).

                  Comment


                  • #29
                    Originally posted by cipri View Post
                    All the famous bugs and leaks (Heartbleed) are caused by this gorgeous C.
                    I guess Microsoft can thank C++ for the high-reliability, bug-free system that is Windows.

                    Safety and reliability in computing is a real-world hardware issue. As long as you address memory with a bare pointer instead of a pointer + length, you'll see the same category of bugs affecting security and stability. C++ doesn't change this; it only abstracts around it. And adding more LOC just means more bugs, so it does more harm than good. Some languages (Java/Go/Rust...) pay the price of emulating a safe system and do end up with more reliable software. But again, there's a price for faking hardware features in your runtime, and when you go down to the kernel it's rarely a price people are willing to pay.
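                    The pointer + length point can be made concrete with a tiny C sketch (the `slice` type and `slice_get` are my invention, not any real API): once the length travels with the pointer, every access can be checked, which a bare pointer makes impossible.

                    ```c
                    /* Toy fat-pointer ("slice") sketch: carrying the length
                     * alongside the pointer lets every read be bounds-checked. */
                    #include <stdio.h>
                    #include <stddef.h>

                    struct slice {
                        const char *ptr;
                        size_t len;
                    };

                    /* Checked read: rejects out-of-bounds indexes instead of
                     * silently reading adjacent memory. */
                    static int slice_get(struct slice s, size_t i, char *out)
                    {
                        if (i >= s.len)
                            return -1;      /* the Heartbleed-style overread case */
                        *out = s.ptr[i];
                        return 0;
                    }

                    int main(void)
                    {
                        const char buf[4] = "abc";
                        struct slice s = { buf, sizeof buf };
                        char c;

                        printf("%d\n", slice_get(s, 2, &c));    /* 0: in bounds */
                        printf("%d\n", slice_get(s, 100, &c));  /* -1: rejected */
                        return 0;
                    }
                    ```

                    This is exactly the pattern safe languages bake into their array types; in plain C or C++, nothing forces every call site to go through the checked path.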

                    Comment


                    • #30
                      Maybe related ... Fuchsia has just been proposed as an official Go port.

                      https://groups.google.com/d/topic/go...Fdc/discussion

                      An interesting point is that the OS network stack is powered by Go.
                      Last edited by EduRam; 17 September 2017, 05:23 AM.

                      Comment
