Lua Scripting Support Being Added To NetBSD Kernel

  • #11
    Originally posted by frantaylor View Post
    language interpreters are "infinite bug sponges". You squeeze them and bugs pour out. You can squeeze and squeeze for years and you will get an unending stream of bugs. Don't believe me? Look at any "mature" interpreted language, go visit its bug database. This is bad because kernels are supposed to be sanctuaries of quality code, not infestations of bugs.
    I think you are overgeneralizing. First off, Lua has a VERY small codebase. Second, what matters is the specific project, not the type of project.



    • #12
      Originally posted by frantaylor View Post
      Is this 1985 or something? What is the point of interpreting? Why not just recompile? We all have gigahertz processors and we use SSDs. It's not like it takes hours to recompile a kernel module like it did back in the '80s. We have awesome debugging tools now, so debugging kernel code is not the nightmare it was in 1985. VMware kernel modules compile themselves and install in a minute, and they are very extensive. I think these BSD people live in some distant past.
      This makes a lot more sense now. Interpreted languages take far more processing power than compiled languages, so you have the whole idea completely backwards. Furthermore, one of the main goals they stated was to make it easier for the average user to interface with the kernel.



      • #13
        Originally posted by LinuxID10T View Post
        I think you are overgeneralizing. First off, Lua has a VERY small codebase. Second, what matters is the specific project, not the type of project.
        The language is exposing inner parts of BSD to the programmer, so you have to count all of that code in your analysis. You are going to be taking kernel function calls that previously had only been called by other parts of the kernel, and you are opening them up for general use. You've basically folded the kernel into the language, and every user-accessible API is now going to be passed arbitrary input. To make this fly, you are going to need a whole new set of regression tests for every single API function you've exposed.
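
        To make that concrete, here is a minimal sketch (mine, not anything from the actual NetBSD patches) of what one such exposed call looks like through the stock Lua C API; kern_set_fan_speed() is a hypothetical kernel function made up for illustration. Every binding like this has to re-validate input that the kernel's own C callers were simply trusted to get right:

        Code:
        /* hypothetical binding: exposing one kernel call to Lua scripts */
        #include <lua.h>
        #include <lauxlib.h>

        extern int kern_set_fan_speed(int percent);  /* made-up kernel function */

        static int l_set_fan_speed(lua_State *L)
        {
            lua_Integer pct = luaL_checkinteger(L, 1);    /* rejects non-numbers */
            luaL_argcheck(L, pct >= 0 && pct <= 100, 1,
                          "percent must be in [0,100]");  /* a range check the
                                                             trusted C callers
                                                             never needed */
            lua_pushinteger(L, kern_set_fan_speed((int)pct));
            return 1;  /* one result: the kernel's return code */
        }
        Now multiply that validation, and the regression tests for it, by every function you expose.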

        Are you aware that full regression-test code is larger, and takes more manpower, than the code it tests? They've bought themselves a testing nightmare. It's going to be regression after regression, because there is no way they will have the time, the manpower, or the inclination to do full regression testing before every release.

        Maybe you have seen other instances where a programming language has been mashed up with a big existing system? Have you ever seen one that wasn't a nightmare? When I think of programming languages that offer extended capabilities, I think of Flash and Java and .NET, and all of those products are testing nightmares.
        Last edited by frantaylor; 14 February 2013, 05:55 PM.



        • #14
          Originally posted by mark45 View Post
          The NetBSD devs noted that they plan to write their nouveau graphics driver in Lua since you don't have to deal with memory leaks any longer, which makes it superior to the Linux implementation. Also parts of ZFS and the audio stack will be ported to Lua for its advanced flexibility and productivity compared to C.
          You're just kidding, right?



          • #15
            Originally posted by Pawlerson View Post
            You're just kidding, right?
            They are trying to recreate that "Sun-3" experience because that was the last time anybody used it.



            • #16
              The world of software development is littered with projects that died not because of technical inferiority, but simply because they could not be maintained. This is why Microsoft and Oracle and Red Hat deprecate old versions of their systems: the maintenance headache exceeds the revenue. They don't want to fix your bugs because it's not worth it.

              When you have no revenue and a massive maintenance headache, you end up putting the poor beast out of its misery, because it's just too painful to go on once the pile of bugs gets too deep.

              BSD relies on gifts and on people feeling sorry for it; there is no actual revenue stream and no vendor commitment. They will get no vendor commitment either, because a vendor doesn't need to commit: it can just grab a snapshot and leave the project flapping.



              • #17
                I wonder how much of the internals gets exposed compared to sysfs on Linux.
                On one hand this sounds interesting; on the other, I wonder about security. Drivers in Lua sound interesting, especially from the standpoint of having one driver that works everywhere.



                • #18
                  Originally posted by Ibidem View Post
                  I wonder how much of the internals gets exposed compared to sysfs on Linux.
                  On one hand this sounds interesting; on the other, I wonder about security. Drivers in Lua sound interesting, especially from the standpoint of having one driver that works everywhere.
                  The things that are configurable get exposed. Perfect example:

                  Code:
                  ls /sys/module/*/parameters/
                  will tell you every parameter that every loaded module accepts input for, which you can then control via /etc/sysctl, /etc/modprobe.d/, or the kernel command line. The nice thing about doing it over sysfs is that there's a certain level of sanity checking involved. You can still break stuff, and in theory trash your hardware, by pumping the wrong value into a module that then actually tries to honor it (e.g., telling your fans to run at max speed 24/7 will kill your fans pretty quickly, but you CAN do it via sysfs). But you probably won't crash the driver unless you hit a bug, because the kernel knows which parameters are ints, arrays, or bools. Pump a "7" into a bool and you'll either get an error or the invalid input will be ignored in favor of a sane default; you won't get a driver crash.
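
                  Here is a small sketch of that type checking in action. It assumes a Linux box where the stock printk "time" parameter (a bool) is present, and it needs root; writing a "7" should bounce with EINVAL rather than crash anything:

                  Code:
                  /* sketch: sysfs rejects ill-typed module parameter writes */
                  #include <errno.h>
                  #include <fcntl.h>
                  #include <stdio.h>
                  #include <string.h>
                  #include <unistd.h>

                  int main(void)
                  {
                      /* printk.time is a bool parameter on stock Linux kernels */
                      int fd = open("/sys/module/printk/parameters/time", O_WRONLY);
                      if (fd < 0) { perror("open (root needed)"); return 1; }

                      if (write(fd, "7", 1) < 0)  /* "7" is not a valid bool */
                          printf("rejected as expected: %s\n", strerror(errno));
                      else
                          printf("accepted (unexpected)\n");

                      close(fd);
                      return 0;
                  }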
                  All opinions are my own, not those of my employer, if you know who they are.



                  • #19
                    Originally posted by Ibidem View Post
                    the standpoint of having one driver that works everywhere
                    This is the basic concept behind the Hurd kernel. Every program is a driver. The semantics for how drivers talk to each other are well defined. The idea is that you can take the network driver and the SSL driver and the HTTP driver and so on, and string them together from the command line or from an interpreted script to make programs. Each driver lives in its own little driver space and is not allowed to do anything beyond what it's supposed to do. The drivers can be written in whatever language strikes your fancy, interpreted or compiled, your choice.

                    You've heard of FUSE? The idea of FUSE came from Hurd.

                    The lesson here is that if you want to run interpreted drivers, you need to lift the entire driver mechanism out of the kernel and put it into user space. This idea is kind of shocking at first. Yes, all of the drivers. It's all or nothing, or else you end up with terrible performance problems, because passing data back and forth between kernel space and user space is ugly, and drivers need to talk to each other. Really, you just map the piece of hardware into the driver's address space and you are all set.

                    You dramatically reduce the attack surface for security vulnerabilities, because the drivers are not in the kernel; they run in user space as a very low-privileged user. Successful attacks can result in denial of service, but that's about it. Think about it some more and you realize there's really no reason for a device driver to have access to everything in the whole computer; its scope is really quite limited, and that's exactly what you get in user space.

                    What you see happening here with BSD is that they see the beauty of the Hurd approach, but they don't want to do the hard work of hoisting the device drivers out of the kernel. It's a very ugly hack.
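
                    FUSE is the easiest place to see this shape today. Below is a rough sketch of the canonical libfuse 2.x "hello world" filesystem: an ordinary unprivileged process serving one read-only file. The build and mount steps (gcc hello.c $(pkg-config fuse --cflags --libs); ./a.out /some/empty/dir) are my assumptions about a typical Linux setup, not anything from this thread:

                    Code:
                    /* sketch: a "driver" (here, a filesystem) as a plain user-space process */
                    #define FUSE_USE_VERSION 26
                    #include <fuse.h>
                    #include <errno.h>
                    #include <fcntl.h>
                    #include <string.h>
                    #include <sys/stat.h>

                    static const char *hello_path = "/hello";
                    static const char *hello_str  = "served entirely from user space\n";

                    static int hello_getattr(const char *path, struct stat *st)
                    {
                        memset(st, 0, sizeof(*st));
                        if (strcmp(path, "/") == 0) {
                            st->st_mode  = S_IFDIR | 0755;
                            st->st_nlink = 2;
                        } else if (strcmp(path, hello_path) == 0) {
                            st->st_mode  = S_IFREG | 0444;
                            st->st_nlink = 1;
                            st->st_size  = strlen(hello_str);
                        } else {
                            return -ENOENT;
                        }
                        return 0;
                    }

                    static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                                             off_t offset, struct fuse_file_info *fi)
                    {
                        (void)offset; (void)fi;
                        if (strcmp(path, "/") != 0)
                            return -ENOENT;
                        filler(buf, ".", NULL, 0);
                        filler(buf, "..", NULL, 0);
                        filler(buf, hello_path + 1, NULL, 0);  /* "hello" */
                        return 0;
                    }

                    static int hello_open(const char *path, struct fuse_file_info *fi)
                    {
                        if (strcmp(path, hello_path) != 0)
                            return -ENOENT;
                        if ((fi->flags & O_ACCMODE) != O_RDONLY)
                            return -EACCES;  /* narrow scope: read-only, no privileges */
                        return 0;
                    }

                    static int hello_read(const char *path, char *buf, size_t size,
                                          off_t offset, struct fuse_file_info *fi)
                    {
                        size_t len = strlen(hello_str);
                        (void)fi;
                        if (strcmp(path, hello_path) != 0)
                            return -ENOENT;
                        if ((size_t)offset >= len)
                            return 0;
                        if (size > len - (size_t)offset)
                            size = len - (size_t)offset;
                        memcpy(buf, hello_str + offset, size);
                        return (int)size;
                    }

                    static struct fuse_operations hello_oper = {
                        .getattr = hello_getattr,
                        .readdir = hello_readdir,
                        .open    = hello_open,
                        .read    = hello_read,
                    };

                    int main(int argc, char *argv[])
                    {
                        /* a crash here kills one unprivileged process, not the kernel */
                        return fuse_main(argc, argv, &hello_oper, NULL);
                    }
                    Mount it, cat the file, then kill -9 the process: the kernel shrugs. That containment is exactly what an in-kernel interpreter gives up.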

                    The problem with the Hurd is, #1, that they didn't think everything all the way through before they started coding, so there are some fundamental performance issues. Problem #2 is that it's a very, very long way from "shippable product" quality, and it's basically a pet project, so it isn't getting serious developer manpower thrown at it. However, you will learn a lot about computing if you read the design documents, and you can tell that if they ever get around to Hurd 2, it's going to be very, very interesting.

                    The other thing to realize about the Hurd is that it's one of the few operating system designs not based directly on the UNIX and VMS operating systems of the 1970s. If you think about it, the operating systems on all of our devices and computers today have not changed much at all since the late 1970s. Go back to that time and you will find VAX and PDP-11 systems running Unix and VMS that bear a quite striking resemblance to our modern Linux and Windows systems. So the Hurd (and maybe some other severely niche OSes) is really the only progress that's happened in operating systems in the last 50 years.
                    Last edited by frantaylor; 15 February 2013, 01:49 AM.



                    • #20
                      I think it is interesting. I can also see it being useful if you make custom hardware and need to write a driver for it, like a toy with a microcontroller.

                      That said, it would be cleaner to do the custom driver in user space. In my opinion the right approach would be to expose an API from the kernel to allow for this (like FUSE). It is not as if performance is the issue here; Lua, or any other interpreted language, is not what you would use for performance-sensitive applications.

                      I know it gets into the whole monolithic vs. microkernel debate. It seems like a bit of a code smell to bring an interpreter into the kernel. How do you justify bringing Lua in and not other interpreters?

                      The entire argument against microkernels is performance. I get it: all the context switches and data copies are expensive. But if you are going to do the processing in a slow interpreter, does that matter at all?
                      Last edited by paulpach; 15 February 2018, 03:56 PM.

