Linux Kernel Getting io_uring To Deliver Fast & Efficient I/O


  • oiaohm
    replied
    Originally posted by rene View Post
    Turns out the Spectre mitigation is mostly useless anyway :-/ https://arxiv.org/pdf/1902.05178.pdf
It's not only that the mitigation can be useless; the best mitigation is to replace the CPU with one without the defect. If you read white papers like that one, you will notice these CPU bugs fairly much tear down the barrier between ring 0 and ring 3, making microkernel protections useless as well; in this white paper that is partly covered by the JavaScript VM example. So with a microkernel you are paying speed for absolutely no gain in a lot of cases, because it has to implement all the same hardware mitigations the Linux kernel and other monolithic kernels do, while taking the extra overhead on top. Please note Linux is not purely monolithic: Linux can use microkernel-like parts where it makes security and performance sense, and this started with the user mode helpers.

If you are going to be insecure either way, you might as well be fast. This is the major problem with microkernels: they really do not have a fast option.



  • rene
    replied
    Originally posted by oiaohm View Post

This makes spectre performance losses look minor.
    Turns out the Spectre mitigation is mostly useless anyway :-/ https://arxiv.org/pdf/1902.05178.pdf



  • Happy Heyoka
    replied
    Originally posted by Farmer View Post
    Except for COBOL. Tell me that's dead. Please.
Ah no. Somewhere out there is a person who makes more in a day maintaining 50-year-old COBOL than we make in a month combined.
They're having a really shit time doing it, but they own three houses.



  • pal666
    replied
    Originally posted by jpg44 View Post
    AIO also is non blocking and queues data in a buffer, and gives you notification when data hits the disk, useful for disk writes when you need to make sure it got to the disk
    i'm pretty sure you are confusing aio with f(data)sync



  • Farmer
    replied
    Originally posted by oiaohm View Post

    I'd forgotten Sun's OS.

Thank you for that. It was informative.



  • oiaohm
    replied
    Originally posted by Farmer View Post
    Microkernel: start with a clean sheet. Design a "perfect" model. Render it. Figure out where you're taking a kick in the teeth. Alter that bit, and only that bit, and call it an "optimization."

    Monolithic kernel: start with a clean sheet. Design a "perfect" model. Render it. Figure out where you're taking a kick in the teeth. Alter that bit, and only that bit, and call it an "optimization."

    So the "monolithic" turns into a hybrid - it starts with a clean monolithic design and then gets "optimizations" from the other side of the fence.
    So the "microkernel" turns into a hybrid - it starts with a clean microkernel design and then gets "optimizations" from the other side of the fence.
    That gets you to the start of the Linux kernel.

    Originally posted by Farmer View Post
    Then a third type, not available when those two were around, enters the arena to replace those two.
Yes, a third type of OS did appear with the Sun JavaOS: JIT compilation of verifiable byte-code in kernel space. The Linux kernel is getting this as bpf.

Really, the Linux kernel is coming to the crossroads between the micro-kernel, monolithic and verified-byte-code-in-kernel-space designs these days.


    Originally posted by Farmer View Post
    All having started from clean models. Received "optimizations" to compete.
Linus did not, at the start of Linux, have the idea that the Linux kernel would be a clean model. Performance was an important factor in the path Linus has taken Linux down. The early Linux kernel network stack had drivers in userspace and in kernel space; this was 100 percent neither monolithic nor micro-kernel, and it was the start of the Linux hybrid bitzer nature.

The Linux kernel is a true bitzer, so when a new model of operating system appears, the developers on Linux look it over and, over time, bring features of it back into Linux.

This also makes Linux an interesting place for comparing the advantages of different OS models, because Linux is not purely anything but a mixture of the different design models.

Yes, calling the Linux kernel monolithic-only ignores the kernel's history. Early Linux was mostly monolithic, key word mostly: there were fragments of stuff like a microkernel in the network stack. Of course, later the Linux kernel gets uio and fuse.

Yes, proper support for user mode drivers as user space programs. This is like the next generation.

Then we come forward to bpfilter, where you now have kernel loadable .ko modules that are part userspace, part kernel space and part bpf. Yes: part monolithic, part micro-kernel, part verifiable byte-code, as one driver to be handled by the kernel. This is a true bitzer.

So Linux is just progressively more hybridisation of the models. Network stacks have very rarely been purely monolithic, micro-kernel or bytecode; most have been a bitzer in order to perform well. The Linux kernel these days seems to be following the path of asking how network stacks managed to be mostly secure and perform well, and applying that to a complete OS. Yes, there is a fourth model that is not microkernel, monolithic or byte-code-verified, and that is where the Linux kernel seems to be heading; that model as yet does not have a clear name.

    Originally posted by Farmer View Post
    Except for COBOL. Tell me that's dead. Please.
No, sorry, it is not dead; companies are still looking for COBOL programmers.



  • Farmer
    replied
    Originally posted by oiaohm View Post
To make something like a microkernel that can perform against the modern day Linux kernel will require some very hard choices to optimise it, breaking what it means to be a micro-kernel in places.
    That's pretty normal though isn't it? Design and implementation always pretty much work that way.

    Microkernel: start with a clean sheet. Design a "perfect" model. Render it. Figure out where you're taking a kick in the teeth. Alter that bit, and only that bit, and call it an "optimization."

    Monolithic kernel: start with a clean sheet. Design a "perfect" model. Render it. Figure out where you're taking a kick in the teeth. Alter that bit, and only that bit, and call it an "optimization."

    As long as you don't muddy the entire thing up you're fine. Work on cleaning up the optimizations.

    So the "monolithic" turns into a hybrid - it starts with a clean monolithic design and then gets "optimizations" from the other side of the fence.
    So the "microkernel" turns into a hybrid - it starts with a clean microkernel design and then gets "optimizations" from the other side of the fence.

    Then a third type, not available when those two were around, enters the arena to replace those two.

    Then there are three. All having started from clean models. Received "optimizations" to compete.

    Until yet another arrives. Then there will be four.

Rinse and repeat. Endlessly.

    How many new programming languages do we have? All designed to be the "one true language" and replace the previous ones.

    It always works that way.

    Except for COBOL. Tell me that's dead. Please.



  • oiaohm
    replied
    Originally posted by rene View Post
    a short note to a long story: i386 and later have an i/o permission bitmap for supposedly fine grained i/o permission control, also modern hardware is anyway not using classic i/o ports, with (nearly) everything memory mapped ring 3 drivers can drive the hardware just fine without any extra i/o context switching..

Theory vs reality. With the IOMMU you are talking about turned on, you are at best down to 90% of the throughput compared to the IOMMU off. That is without needing to syscall into the kernel to adjust the IOMMU, or having the kernel call your userspace driver requesting that it give back memory.

    Originally posted by rene View Post
    Also QNX is quite fast, even was a decade ago, so it is not like it is impossible to do. Also more elegant architecture and algorithms can vastly improve performance, e.g. look at the current graphic subsystem performance.
    QNX was not running current day desktop and server workloads.

The scheduler features that gave QNX better responsiveness are fairly much mainline Linux now. Please note I said better responsiveness, not better straight-line processing performance.

Remember how I said the Linux kernel is a hybrid.


You see this on Linux, where microkernel-style user space file systems are put head to head with monolithic-style file system drivers. You will also see Linux do this with network drivers as well.

These ring buffers the Linux kernel is getting were one of the tricks QNX and other microkernels used historically to keep up with monolithic kernels. So the elegant-architecture part of microkernels that reduced context switches, the Linux kernel is also getting, with ring buffers and other zero-copy optimisations. The Linux kernel is getting more and more of the microkernel's elegant methods of reducing context switches, already has the monolithic kernel's ugly ways of reducing them, and, with bpf, has a more modern way of achieving a lot of the monolithic kernel's performance boosts with less security risk.


How were microkernels historically able to keep up with and beat historic monolithic kernels?

1) The process-to-process optimisations microkernels had over monolithic kernels reduced the syscall overhead from application to kernel a lot. At the time, that saving outweighed the syscalls to the services processing the application's request. The problem here is that Linux is taking in all of these microkernel gains. So now, as a microkernel, you don't have the syscall saving from application to kernel, and each context switch between your individual microkernel drivers costs you and puts you behind Linux's form of monolithic in performance.
2) Multi-threading. The problem is that the current day Linux kernel is multi-threaded and NUMA-aware, so it is already running multiple processes.

This is the problem. The tricks microkernels had to claw back performance are no longer unique to microkernels; the Linux kernel uses them as well.

Of course, the historic monolithic kernels and microkernels did not have bpf with JIT either. This creates a very interesting problem. Yes, the bpf item breaks the microkernel rule of all drivers in user-space, but if a microkernel-like design wants to keep up, it will need some form of syscall reduction in kernel space way more complex than just bundling syscalls. With this reduction, the application puts up a request and the kernel answers from cache, with logic, if it can, so there is no need to context switch through the driver tree of the microkernel-like system. Of course, this is not going to be a microkernel, because you will have some of your driver processing in kernel space for performance; this is unavoidable. Also, when you do need to context switch through the userspace driver tree to get stuff done, you are going to be slower than the Linux kernel.

Also remember a monolithic kernel can disable things like the IOMMU and other security features just to gain speed. Being a microkernel, you will have to turn these features on just to attempt to get close to the monolithic kernel when it has its security features on. As soon as the monolithic kernel starts giving up security for speed, there is no way for current-design microkernels to keep up.

To make something like a microkernel that can perform against the modern day Linux kernel will require some very hard choices to optimise it, breaking what it means to be a micro-kernel in places.



  • Farmer
    replied
    Originally posted by oiaohm View Post

The problem with your idea is that it has already been tried in the Linux kernel and did not provide anywhere near the expected performance boost. The Linux kernel is nowhere near as simple as you think it is.

    Snipped for size.

    Thank you for taking the time to post that. That was very informative.



  • starshipeleven
    replied
    Originally posted by polarathene View Post
    I thought there was quite a bit of discussion about how FreeBSD handles network I/O better than Linux? (no links, or specifics that I can recall though)
Lately, every time a certain someone has stated that, someone else has posted a benchmark where Linux either trades blows with or destroys BSD on networking with 10Gbit cards.

