Linux Serial Console Driver Lands Patch For Possible ~25% Performance Improvement

  • #11
    Coming soon from Phoronix:
    - AMD vs Intel serial console shootout
    - ZMODEM transfer benchmarks in the Phoronix Test Suite
    - Effects of compiler optimization flags on VT220 performance
    - GCC 11 vs. GCC 12 for serial port performance - should you upgrade?!



    • #12
      Originally posted by set135
      Just to note, the guy was trying to run at 115200, which was getting less throughput on a 'slower' machine but working as expected on a 'faster' machine. To put that in perspective, most actual serial terminals were lucky to run at 9600, and the most you could pump over the telephone wires (ignoring compression) was 57600. Personally, the hardest I generally drove serial ports was for dial-up modems, and of course for actual consoles and printers much slower speeds were used, so it's not too surprising that not many people would have noticed this... I have an actual DEC VT320 hooked up to my machine's serial port as a console, and you can configure that to run at 19200, but the terminal itself cannot actually keep up with text at that speed, so I run it at 9600. (It might be able to process SIXEL data at the higher speed... I should try that.)
      The 'slower' machine he tested was a Haswell-based Xeon system. Not the latest and greatest, but not ancient history either. I have to admit that, based on the patch description, I don't understand what the pertinent difference between the two systems he tested on is, but I'd suspect it's some quirk related to the particular model of UART chip used.

      I've got plenty of experience watching server systems boot over the virtual serial console, and lemme tell you, if you forget to change from the default 9600 it's SLOW (though, as I mentioned in another comment in this thread, I don't know whether this change affects virtual serial consoles or only 'real' ones).



      • #13
        I think serial will probably be one of the few things that never die in the tech world.



        • #14
          Originally posted by tuxd3v View Post
          These are the oldest devices around, from the time of the 486s and Pentiums or newer; in other words... obsolete.

          Nowadays the buffers are larger than 1 KB, I believe, but it will depend on the hardware you have.
          Per Wikipedia, the 16550 was originally introduced in 1987, which was actually before the 486. That being said, they are still around. Not in the original form, but as some tiny speck of dust in a corner of the system chipset that emulates the original device. So still limited by the 16-byte buffer. On my laptop (which doesn't even have an RS-232 port):

          Code:
          $ dmesg|grep ttyS
          [ 2.332159] 0000:00:16.3: ttyS4 at I/O 0x3060 (irq = 19, base_baud = 115200) is a 16550A
          [ 69.425124] dw-apb-uart.1: ttyS5 at MMIO 0x604110f000 (irq = 20, base_baud = 115200) is a 16550A
          Last edited by jabl; 13 January 2022, 08:27 AM. Reason: Better dmesg quote



          • #15
            Originally posted by JEBjames View Post
            The UART FIFO queue can reduce CPU load (fewer interrupts). But even pre-486 PCs can easily max out a 16550 sending a byte at a time?

            Not sure how it could make that big a speed difference as quoted in the article?

            Task switching that broken? Insert Spectre/Meltdown/RandomCPUBugOfTheWeek?

            I'm confused.
            Typically you write to registers that sit on a really, really slow bus, or even worse, you need to poll to see if you can write (effectively giving your CPU i486-era speed). One access can easily take a few microseconds, which is forever on current systems; meanwhile other interrupts/work get queued, and you swap between processes and IRQ handlers frequently.
            This is not limited to UARTs; some modern hardware, like Intel's i210 network controllers, has this behavior as well. You need to use DMA or things come to a crawl.
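            To put some (assumed) numbers on that, a quick back-of-envelope sketch: at 115200 baud with 8N1 framing, each byte occupies roughly 87 µs on the wire, so a console path that waits out every byte before writing the next one burns that time per byte:

```python
# Back-of-envelope sketch (assumed figures, not measurements): time spent
# waiting when the console path writes one byte per wait at 115200 baud, 8N1.
BAUD = 115200
BITS_PER_FRAME = 10                    # start bit + 8 data bits + stop bit
byte_time_s = BITS_PER_FRAME / BAUD    # wire time per byte

log_bytes = 4096                       # hypothetical 4 KB of boot messages
busy_wait_s = log_bytes * byte_time_s  # byte-at-a-time: wait out every byte

print(f"{byte_time_s * 1e6:.1f} us/byte, {busy_wait_s:.2f} s per 4 KB")
# prints: 86.8 us/byte, 0.36 s per 4 KB
```

            That waiting happens either as busy-polling or as per-byte interrupt and scheduling churn, which is where the slow register accesses pile up.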



            • #16
              Originally posted by Alex/AT View Post
              16550 UART FIFO support was missing all that time?
              Holy moly. I thought it was 2022 already, but it's more like 1982 in there.
              No. What was missing was FIFO handling in the serial console parts of the 8250 driver. Up until this change, the 8250 serial console just put one byte into the TX FIFO and waited until it was sent before putting in the next one.
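              A toy Python model (an illustration, not the kernel code) shows what batching buys: count how many wait-for-empty rounds it takes to push a message one byte at a time versus one FIFO-full at a time.

```python
# Toy model of a 16550-style 16-byte TX FIFO: count how often the driver
# must wait for the transmit-empty condition when it writes one byte per
# wait (the old console path) versus filling the FIFO each time.
FIFO_SIZE = 16

def waits_needed(nbytes, burst):
    """Number of wait-for-empty rounds to push nbytes, writing up to
    `burst` bytes after each wait (burst=1 models the old console path)."""
    waits = 0
    while nbytes > 0:
        waits += 1                  # spin until the FIFO reports empty
        nbytes -= min(burst, nbytes)
    return waits

msg = 1024                          # a hypothetical 1 KB console message
print(waits_needed(msg, 1))         # old path: 1024 waits
print(waits_needed(msg, FIFO_SIZE)) # new path: 64 waits
```

              A 16x reduction in wait rounds is exactly what you'd expect from a 16-byte FIFO that was previously used one byte at a time.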



              • #17
                I remember upgrading my two PCs in ~1993 with 16550 UART ISA cards, as I had them connected with a null-modem cable for file transfers. Switched to 10 Mbit Ethernet over 10BASE2 coax a year or so later.



                • #18
                  Serial debugging is used quite a bit for embedded development. I guess speed is not _that_ critical, but not having to wait too much for feedback is always welcome.



                  • #19
                    Originally posted by mlau View Post
                    No. What was missing was FIFO handling in the serial console parts of the 8250 driver. Up until this change, the 8250 serial console just put one byte into the TX FIFO and waited until it was sent before putting in the next one.
                    The thing is, with the FIFO enabled, writing based on the xmit-buffer check basically just fills the FIFO to its high-water mark, and then it's no different from single-byte sending (except you get 15 bytes of latency). The code does not need to know whether a FIFO is present, as it's still using the pre-FIFO xmit-buffer-emptiness bit. So it still smells like 1982 in there.

                    Reading the code carefully, it was indeed using just the xmit-buffer bit instead of UART_LSR_THRE, and the check for UART_FCR_ENABLE_FIFO is what looks even weirder (so the FCR is written with the enable-FIFO bit somewhere else). It's even weirder that FIFO initialization is possibly present somewhere, yet the code to actually take advantage of the FIFO was totally missing.

                    The new 'flow control check' part disabling FIFO usage also seems unnecessary and excessively weird to me. I have written 16550A DOS applications, and no, the FIFO will NOT send anything to the wire when h/w flow control is in use and the corresponding signal line is raised. Maybe this differs per UART implementation/emulation nowadays, but I heavily doubt it, because not honoring the flow-control signal could lead to data loss if the FIFO attempted to transmit anyway (the receiver's FIFO or receive buffer may be full; the whole point of h/w flow control is to ensure the transmitter waits until the receiver has read the data, without any need for software to read fast enough or intervene).
                    Last edited by Alex/AT; 13 January 2022, 05:20 PM.



                    • #20
                      Originally posted by Alex/AT View Post
                      The thing is, with the FIFO enabled, writing based on the xmit-buffer check basically just fills the FIFO to its high-water mark, and then it's no different from single-byte sending (except you get 15 bytes of latency). The code does not need to know whether a FIFO is present, as it's still using the pre-FIFO xmit-buffer-emptiness bit. So it still smells like 1982 in there.
                      Well, it needs to know the size of the FIFO buffer so it knows how many bytes it can write before switching to waiting for the buffer to empty? How else would it be done?

                      Reading the code carefully, it was indeed using just the xmit-buffer bit instead of UART_LSR_THRE,
                      My reading is that the old code is basically

                      Code:
                      1. Wait for LSR_THRE
                      2. If flow control is enabled, wait for MSR_CTS
                      3. Write 1 byte.
                      4. GOTO 1.
                      New code (only used when flow control isn't enabled, otherwise the above old code is used) is

                      Code:
                      1. Wait for LSR_THRE
                      2. Write up to fifosize bytes
                      3. GOTO 1.
                      and the check for UART_FCR_ENABLE_FIFO is what looks even weirder (so the FCR is written with the enable-FIFO bit somewhere else). It's even weirder that FIFO initialization is possibly present somewhere, yet the code to actually take advantage of the FIFO was totally missing.
                      Looking around the code, it indeed enables the FIFO, if present, as part of initializing the device. It seems as if the FIFO was used in other parts of the serial code, just not when writing to the serial console, which this patch fixes.
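                      The two loops above can be mirrored against a trivial fake UART (a sketch, under the simplifying assumption that the FIFO has always drained by the next poll) to see the difference in how often the LSR gets polled:

```python
# Toy rendering of the two quoted loops (an illustration, not kernel code),
# run against a fake UART whose LSR_THRE condition is checked before writes.
class FakeUART:
    def __init__(self, fifosize=16):
        self.fifosize = fifosize
        self.fifo = 0               # bytes currently queued for transmit
        self.lsr_reads = 0          # how often the driver polled the LSR

    def thre(self):                 # "transmit holding register empty"
        self.lsr_reads += 1
        self.fifo = 0               # assumption: FIFO drained by next poll
        return True

    def write_thr(self, n):
        self.fifo += n

def send(uart, nbytes, burst):
    while nbytes:
        while not uart.thre():      # 1. Wait for LSR_THRE
            pass
        n = min(burst, nbytes)
        uart.write_thr(n)           # 2. Write 1 byte (old) / up to fifosize (new)
        nbytes -= n                 # 3. GOTO 1

old, new = FakeUART(), FakeUART()
send(old, 256, 1)                   # old loop: one poll per byte
send(new, 256, new.fifosize)        # new loop: one poll per 16-byte burst
print(old.lsr_reads, new.lsr_reads) # prints: 256 16
```

                      That 16x drop in polling rounds lines up with the size of the 16550's FIFO and with the kind of improvement reported in the article.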

                      The new 'flow control check' part disabling FIFO usage also seems unnecessary and excessively weird to me. I have written 16550A DOS applications, and no, the FIFO will NOT send anything to the wire when h/w flow control is in use and the corresponding signal line is raised. Maybe this differs per UART implementation/emulation nowadays, but I heavily doubt it, because not honoring the flow-control signal could lead to data loss if the FIFO attempted to transmit anyway (the receiver's FIFO or receive buffer may be full; the whole point of h/w flow control is to ensure the transmitter waits until the receiver has read the data, without any need for software to read fast enough or intervene).
                      You may well be right; I guess this is an attempt to be conservative and not modify anything the author hasn't tested. The author was using a server platform, and I'd guess flow control is rarely used there (I've never seen it).

