Canonical Promotes Ubuntu's Real-Time "RT" Kernel To General Availability

  • Canonical Promotes Ubuntu's Real-Time "RT" Kernel To General Availability

    Phoronix: Canonical Promotes Ubuntu's Real-Time "RT" Kernel To General Availability

    Nearly one year ago with the Ubuntu 22.04 LTS premiere came a beta real-time kernel offered by Ubuntu maker Canonical and intended to help with Ubuntu Linux deployments in industrial environments, automotive, and other sectors with real-time computing needs. This Valentine's Day the Ubuntu real-time kernel has been promoted to general availability (GA) status...


  • #2
    as default for end-users?



    • #3
      Originally posted by MorrisS. View Post
      as default for end-users?
      No. The last paragraph covers part of it. Even where it is available, RT kernels aren't going to be the default unless some distro has a focus / spin for RT.



      • #4
        I guess I've been using the RT patches for a long time in Ubuntu Studio on my regular desktop.



        • #5
          Originally posted by MorrisS. View Post
          as default for end-users?
          This would not be a good choice. The real-time kernel favors predictability over throughput, and potentially even common-case latency. You generally don't want to run regular desktop or server workloads on a real-time kernel.



          • #6
            Depends on your perspective...

            I guess many end-users would like to avoid audio stutter, display swap stutter, input stutter, maybe even need precise time-stamping etc.



            • #7
              Originally posted by Veto View Post
              Depends on your perspective...

              I guess many end-users would like to avoid audio stutter, display swap stutter, input stutter, maybe even need precise time-stamping etc.
              If you play with audio tools, RT can help you reach lower latency without buffer underruns.
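To see why audio work is so latency-sensitive, note that the period size and sample rate set a hard deadline: if the audio thread isn't woken and finished within one period, you get an underrun. A back-of-the-envelope sketch (illustrative numbers, not from the thread):

```python
def period_deadline_ms(frames: int, sample_rate_hz: int) -> float:
    """Milliseconds available to refill one audio period before an underrun (xrun)."""
    return 1000.0 * frames / sample_rate_hz

# Buffer sizes a musician might pick at a 48 kHz sample rate (hypothetical values):
for frames in (64, 128, 256):
    print(f"{frames:4d} frames @ 48 kHz -> {period_deadline_ms(frames, 48000):.2f} ms deadline")
```

At 64 frames the kernel has well under 2 ms to schedule the audio thread, which is where bounded wakeup latency starts to matter more than raw throughput.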



              • #8
                Originally posted by archkde View Post

                This would not be a good choice. The real-time kernel favors predictability over throughput, and potentially even common-case latency. You generally don't want to run regular desktop or server workloads on a real-time kernel.
                Why not? Is it "just" a throughput trade-off, or are we talking about a risk of incompatibility?

                If it's throughput, how much? As an Elixir coder I expect it could be a lot, but it would be interesting to see some benchmarks, because in my experience predictable latency is a huge win in certain scenarios, even at the cost of severely reduced maximum throughput.



                • #9
                  Originally posted by slalomsk8er View Post
                  I guess I've been using the RT patches for a long time in Ubuntu Studio on my regular desktop.
                  No, you use Ubuntu's "lowlatency" Linux flavor, which is configured as 1000 Hz kernel tick + full preemption.

                  That still doesn't make it suitable for hard real-time requirements; calling it soft real-time is fair, though, because for typical end-user scenarios the latencies it achieves are low enough.
                  Last edited by Linuxxx; 14 February 2023, 11:47 AM. Reason: Clarify ambiguity...
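For anyone unsure which flavor they are actually running, here is a quick sketch, assuming a standard Ubuntu install where the kernel's build config ships in /boot (paths may differ on other distros):

```shell
# The kernel release string usually names the flavor
# (e.g. "-generic", "-lowlatency", "-realtime"):
uname -r

# Tick rate and preemption model from the shipped build config:
grep -E '^CONFIG_HZ=|^CONFIG_PREEMPT' "/boot/config-$(uname -r)" 2>/dev/null \
  || echo "no build config found at /boot/config-$(uname -r)"
```

On the lowlatency flavor you would expect to see CONFIG_HZ=1000 and CONFIG_PREEMPT=y, versus CONFIG_PREEMPT_RT=y on the real-time kernel.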



                  • #10
                    Originally posted by vegabook View Post

                    Why not? Is it "just" a throughput trade-off, or are we talking about a risk of incompatibility?

                    If it's throughput, how much? As an Elixir coder I expect it could be a lot, but it would be interesting to see some benchmarks, because in my experience predictable latency is a huge win in certain scenarios, even at the cost of severely reduced maximum throughput.
                    Ok, let us take an example to illustrate: two sorting algorithms, selection sort and insertion sort.

                    Selection sort always performs about n^2/2 comparisons and at most n swaps, no matter what the input looks like.

                    Insertion sort in the worst case (reverse-sorted input) also performs about n^2/2 comparisons and about n^2/2 element moves. So the case is cut and dried then, yes?

                    Well, no: in the average case insertion sort does only about n^2/4 comparisons and moves, and in the best case (already-sorted input) just n-1 comparisons and no moves at all, far better than selection sort!

                    The fact is, insertion sort usually finishes faster than selection sort. Yet selection sort always finishes in roughly the same time, while insertion sort's runtime is all over the place, and some fraction of the time it loses.

                    Say you need to sort an array of 64*64*64 elements, and selection sort takes ~60 ms while insertion sort takes anywhere from ~45 ms to ~85 ms. If the sort must complete within 75 ms, insertion sort is a poor fit; if there is no deadline, it is an excellent one.

                    And yes, losing 20% performance on a desktop is a big deal.
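The predictability gap is easy to see by counting operations directly. This sketch (my own illustration, not from the thread) counts comparisons for both algorithms on best-, worst-, and random-order inputs; selection sort's count never changes, while insertion sort's swings wildly:

```python
import random

def selection_sort(a):
    """Always n*(n-1)/2 comparisons and at most n swaps, regardless of input order."""
    a = list(a)
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        m = i
        for j in range(i + 1, n):
            comparisons += 1
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]
    return a, comparisons

def insertion_sort(a):
    """Between n-1 (sorted input) and n*(n-1)/2 (reverse input) comparisons."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comparisons

n = 512
best = list(range(n))            # already sorted
worst = list(range(n, 0, -1))    # reverse sorted
rand = random.sample(range(n), n)

for name, data in (("best", best), ("worst", worst), ("random", rand)):
    _, sc = selection_sort(data)
    _, ic = insertion_sort(data)
    print(f"{name:>6}: selection={sc:6d}  insertion={ic:6d}")
# selection is always n*(n-1)//2 = 130816 here; insertion ranges from 511 to 130816
```

The analogy to the RT kernel: a scheduler tuned for worst-case bounds behaves like selection sort (constant, predictable cost), while a throughput-tuned one behaves like insertion sort (usually faster, occasionally much slower).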
