Benchmark That Serial Port On Linux!

  • #21
    Originally posted by Brane215
    If that's some significant branch of COM port usage, fine.
    Well, I do not have statistics. But after all, someone invented the TCSETS2 ioctl and the machinery around it (struct termios2, the BOTHER flag, etc.), which means they needed baud rates beyond the "usual" ones. Tbh, the original POSIX termios API is dumbass, limited and poorly thought out in this area.
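    To illustrate, a minimal sketch of that non-POSIX path: an arbitrary baud rate set via TCSETS2/BOTHER. Linux-only; note asm/termbits.h must be included instead of termios.h, since the two conflict. Call it on an fd from open() on the port with something like 250000 baud.

    ```c
    /* Minimal sketch: arbitrary baud rate via termios2/BOTHER (Linux-only). */
    #include <sys/ioctl.h>
    #include <asm/termbits.h>   /* struct termios2, TCGETS2/TCSETS2, BOTHER */

    int set_custom_baud(int fd, int baud)
    {
        struct termios2 tio;

        if (ioctl(fd, TCGETS2, &tio) < 0)
            return -1;

        tio.c_cflag &= ~CBAUD;      /* drop the standard Bxxxx rate bits */
        tio.c_cflag |= BOTHER;      /* request an arbitrary rate instead */
        tio.c_ispeed = baud;
        tio.c_ospeed = baud;

        return ioctl(fd, TCSETS2, &tio);
    }
    ```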

    But even then it would be nice to have somewhat parametrised test with declared wire length and capabilities etc.
    What one can get as a maximum rate depends on voltage swing, wire capacitance, receiver thresholds, drive strength of the line buffers, amount of ringing, etc., and is way too dependent on the particular setup. A low-capacitance cable can probably reach a longer distance or a higher speed. To make it funnier, the line buffers and receivers on the two ends of the link do not have to be the same; can you imagine, the link can be asymmetric in its properties? Not to mention it is a good idea to leave margins, especially if the protocol does not implement error detection.

    It's part of existing, quite expensive equipment that I have to work around. Telling a customer to cough up €20k to buy new gear just so I can feel modern is out of the question.
    The customer has equipment which was designed with the DOS age in mind. I think it's wrong to deny this fact. How to deal with it when you can't avoid it is a different question. Speaking for myself, I can imagine it like this: equipment -> uC -> native USB of the uC (or UART of the uC + a HS-USB FTDI to reduce coding) -> PC. The uC can do very precise timestamping without relying on anything on the PC side, and the uC -> PC protocol can carry the precise timing info in a packet header. So while receive timings on the PC side would be skewed, software can extract very accurate timings from the packet headers and process the data properly. On the good side, it allows using virtually any PC and does not put tough requirements on the PC, software or OS at all. So all parts of the system would be easy and fast to implement and would not impose any weird system requirements on the PC side. IMHO that is what makes a uC-based add-on worth it.
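    A hedged sketch of what such a header could look like; the field names and sizes here are my own assumptions for illustration, not an existing protocol:

    ```c
    /* Sketch of the uC -> PC framing idea: each chunk of captured serial
     * data gets a header stamped by the uC's own high-resolution timer,
     * so the PC can recover exact timings however USB skews delivery. */
    #include <stdint.h>

    struct frame_hdr {
        uint32_t magic;        /* frame delimiter, for resynchronisation */
        uint32_t seq;          /* lets the PC detect dropped frames */
        uint64_t timestamp_ns; /* taken from the uC timer, not the PC clock */
        uint16_t len;          /* payload bytes following this header */
    } __attribute__((packed));
    ```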

    The obvious disadvantage is the need for a uC. But the firmware side could be quite trivial logic: the uC is guaranteed to have "proper" timers clocked by a high-speed source, I/O is fast and predictable, etc. IMHO it's a wrong idea to try to turn a PC into a uC; a PC is a very crappy and troublesome uC. And I doubt a customer who spent 20K on a device would care much about $10 for a uC add-on. OTOH, decreased development time and relaxed requirements on the PC are likely good things to have. And I guess the customer would like the idea of being able to use just any arbitrary PC without hardcore special requirements, even a notebook which lacks COM ports, etc.

    Or writing a specialised driver within kernel, which would reserve particular COM port, make new "device" and enable access to it through IOCTLs.
    I can see some catches on this way:
    1) Generally, a PC may or may not have a high-resolution timer. PCs were never meant for dealing with precise timings. Recent PCs usually have the TSC, HPET and so on, but that is an assumption about the PC which may or may not hold, and it can sometimes be disabled in the BIOS.
    2) PCs have noteworthy jitter: a cache hit and a cache miss take very different times, etc.
    3) PCs aren't great at fast GPIO; your PIC could probably do it both faster and with far more predictable and accurate timings.
    4) What if an SMI interrupt arrives? You neither know how long the BIOS SMI service routine is going to take, nor can you prevent it on x86. SMM is the most privileged mode of x86, and SMIs can't be masked. At the very most you can detect that an SMI has occurred (see the sketch after this list), and by then it is way too late to do anything about it; at best you can notice that the constraints were violated.
    5) Other kernel parts are running and can have their own ideas about timings.
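    To illustrate point 4: on reasonably recent Intel CPUs you can at least count SMIs after the fact, via MSR_SMI_COUNT (0x34). A minimal diagnostic sketch, assuming the msr kernel module is loaded and sufficient privileges:

    ```c
    /* Counts SMIs since reset by reading MSR_SMI_COUNT (0x34); Intel-only.
     * Demonstrates that SMIs can be detected after the fact, never prevented. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        uint64_t smi_count;
        int fd = open("/dev/cpu/0/msr", O_RDONLY);

        if (fd < 0)
            return 1;
        /* the msr driver maps the file offset to the MSR number */
        if (pread(fd, &smi_count, sizeof(smi_count), 0x34) == sizeof(smi_count))
            printf("SMIs since reset: %llu\n", (unsigned long long)smi_count);
        close(fd);
        return 0;
    }
    ```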

    I'm not sure whether (relatively rare) failures to fulfill what you've promised matter in this particular task (maybe the task is OK with some percentage of measurements being screwed up), but generally it sounds like a poor plan to me.

    So how do you use that to synchronise the internal clock to something like 1 µs or better?
    No way! It will be much less accurate for sure. Furthermore, some PCs do not even have 1 µs timer resolution, so they cannot reach such accuracy in any easy way at all. If I remember correctly, the kernel selects the best available clocksource, but what counts as "best" varies wildly across hardware.
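    A quick diagnostic sketch, using only standard Linux interfaces, of what a given box actually provides:

    ```c
    /* Prints the clocksource the kernel selected and the advertised
     * resolution of CLOCK_MONOTONIC on this machine. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec res;
        char src[32] = "unknown";
        FILE *f = fopen("/sys/devices/system/clocksource/"
                        "clocksource0/current_clocksource", "r");

        if (f) {
            if (fscanf(f, "%31s", src) != 1)
                src[0] = '\0';
            fclose(f);
        }
        clock_getres(CLOCK_MONOTONIC, &res);
        printf("clocksource: %s, resolution: %ld ns\n", src, res.tv_nsec);
        return 0;
    }
    ```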

    With an ordinary COM port, it's not _that_ hard. You choose one of the COM port handshake pins that can generate an interrupt (like RTS/CTS) and connect a PPS signal to it. Then you program the HW to generate an interrupt on pin change. In the interrupt handler you simply record the TSC counter (a 64-bit counter with 1 ns resolution).
    At the end of the day, it's a wrong idea to expect an arbitrary PC to be accurate within 1 µs, IMHO. It may work in some particular cases, and even then I guess it wouldn't be 100% of the time but rather "average" performance, which can occasionally spike beyond 1 µs, e.g. when an SMI happens. Do you know when an SMI happens and how long it can take in the worst case?
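    For reference, reading the TSC itself is the trivial part; a minimal x86 sketch for GCC/Clang. The hard part is everything around it: the TSC ticks at a CPU-dependent rate (so "1 ns resolution" is not a given and it must be calibrated first), and nothing stops an SMI from landing between the pin change and the read.

    ```c
    /* Raw TSC read on x86; the value is in CPU-dependent ticks, not ns. */
    #include <stdint.h>
    #include <x86intrin.h>   /* __rdtsc() */

    static inline uint64_t tsc_now(void)
    {
        return __rdtsc();
    }
    ```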

    and make it available to userland through /proc. Same userland can determine and order RTC clock change on next PPS, for example.
    And how long does it take to reach user mode? What are the scheduling timings? What if the user-mode app doing this hits paging? And what is guaranteed in such a combo? What jitter will you have? What if an SMI handler is invoked at that moment? Can you estimate the worst-case timing errors of such a configuration? Then, are you sure the RTC keeps time with 1 µs precision at all? Say, if the system RTC internally uses a "classic" 32768 Hz oscillator as reference, it has no notion of "1 µs": one oscillator cycle is 1/32768 s ≈ 30.5 µs, so 1 µs is well below a single cycle.

    All in-kernel tasks are here simple, low-CPU load and done without interrupts, in one short timeslice.
    However, there are many bad assumptions about hardware in there which are generally not true, and they put too many constraints on which PC one can use (making it hard to replace in the future, etc.). Not to mention that writing trivial timestamping code for a uC appears to be easier than doing something like that on x86, in the kernel, possibly for several different drivers and COM ports, and then figuring out that it does not work via, say, USB used as transport. On a uC I can be sure which timer I get, I can swear it has the right resolution and clocksource for its task, and I/O is predictable. Hitting 1 µs precision in measuring I/O timings 100% of the time wouldn't be a major challenge. I can even estimate interrupt handler latency with accuracy down to several ticks of the main oscillator. Something x86 can't afford.
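    A hedged sketch of the uC side; the timer register address is hypothetical, but the point stands: the capture path is a few instructions with bounded, predictable latency.

    ```c
    /* Pin-change ISR on a hypothetical uC: latch a free-running hardware
     * timer into a ring buffer. Entry latency is a handful of cycles and,
     * crucially, bounded, unlike anything on a PC. */
    #include <stdint.h>

    #define TIMER_COUNT (*(volatile uint32_t *)0x40000024u) /* hypothetical reg */

    volatile uint32_t event_stamp[64];
    volatile uint8_t  event_head;

    void pin_change_isr(void)
    {
        event_stamp[event_head++ & 63] = TIMER_COUNT;
    }
    ```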

    COM port usage is usually intermittent (debug, update etc.) and system load then is not a problem.
    I guess it depends on the system and what kind of traffic flows via the port. I do not have that data at hand, so I have to assume the generic case. I also do not know whether something bad happens if you screw up some of the measurements due to, say, a sudden SMI arrival.

    On my Phenom, I can see LPT and COM on their standard port addresses. Likewise with an add-on PCI card, and I suspect with a PCIe version it would be the same.
    And my laptop lacks any COM or LPT ports; most laptops have been like this for years. Your solution is just not going to work on such configs, and IMHO that is a major shortcoming of this approach. I can understand stuff like FTDI or bare GPIO on Linux for SPI, I2C or MMC, where the average speed is what really matters and it is no big deal if a particular transition takes longer than the others, since everything is synced to CLK and the CLK just stalls along with it. That's why Linux can be a reasonable SPI/I2C/MMC master in software mode, but can't be a decent "software UART" via the same GPIO, and why making something like an SPI slave in software would be an issue on Linux: you can't delay events, and the external world is not going to wait on a pause.
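    A sketch of why the software SPI-master case is forgiving while a software UART is not: every data bit is qualified by a clock edge the master itself generates, so jitter merely slows the transfer. gpio_set()/gpio_get() are hypothetical helpers, not a real kernel API.

    ```c
    /* Bitbanged SPI master, mode 0: the slave samples MOSI on the rising
     * SCK edge we generate ourselves, so a late edge costs speed, not data. */
    #include <stdint.h>

    extern void gpio_set(int pin, int level);  /* hypothetical GPIO helpers */
    extern int  gpio_get(int pin);

    enum { PIN_SCK, PIN_MOSI, PIN_MISO };

    uint8_t spi_xfer_byte(uint8_t out)
    {
        uint8_t in = 0;

        for (int i = 7; i >= 0; i--) {
            gpio_set(PIN_MOSI, (out >> i) & 1); /* set data first... */
            gpio_set(PIN_SCK, 1);               /* ...then clock it, however late */
            in = (in << 1) | gpio_get(PIN_MISO);
            gpio_set(PIN_SCK, 0);
        }
        return in;
    }
    ```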

    But even if they were done through MEMIO, as long as there is direct access to the metal and open source kernel driver, it would be fine with me.
    And I think the need for a CUSTOM kernel driver is the trouble on that way, especially one for each and every possible kind of UART-like thing in existence. A USB-to-UART bridge is a good example of a port which isn't memory-mapped either and has no notion of direct hardware access in the usual sense. Let's admit it: those who used multi-tasking OSes properly, via API calls, without strange assumptions and dirty DOS-age hacks, do not suffer from Bluetooth virtual ports or USB bridges. And in fact that is virtually all software around.

    It matters to me. With such a solution I can standardise my purchases and use one chip for many roles, and so buy greater quantities at a better price per part. This means lower manufacturing prices and less hassle with material purchases. And substantial savings. And fewer problems/costs with servicing.
    Well, speaking for myself, I do not have major issues with supplies. Of course I prefer to keep the BoM sane, but in no way am I going to stick to just a single IC, especially something like PICs. If I needed some 5 V part, I would use an AVR, mostly because I can use GCC to generate code; though it's hard to say I'm excited about AVRs. And dealing with 5 V is a verrrrry special case these days.



    • #22
      ...and if someone cares about "bang per buck", I can't stay silent about the STM32 uC family. Hell yeah, you get a real 32-bit core, supported by decent toolchains like GCC. Uniform address space, fine for both code and data. Exception handling, catching troublesome conditions fast. Main oscillator failure handling, etc. Peripherals which would beat the dust out of PIC and AVR (DMA included, etc.). And all this can go below $1 in the low-end parts. Welcome to the future! Half of my fellows have already thrown their AVRs into oblivion. And as a fancy remark about portfolio and scalability: someone recently ported Linux to the STM32F4xx (the high-end parts of the family: Cortex-M4 at up to 180 MHz with hardware floating point). That's what I would call portfolio & ability to scale.
      Totally true. But:

      1. As I said, ATM I'm tied to Microchip for various reasons.
      2. I hate having 156 different closed IDE tools and 15 ISP programmers for all the families I do or might care about, now or in the future.

      So the plan is to start with what I've got and work my way toward open, universal HW/SW tools.

      Btw, these days 1.8V logic levels are something to consider. There're already some 1.8V-only systems. Over time it would get worse.
      Not always. 5 V parts still have their uses. They have much higher voltage margin and are built on robust geometries, which comes in handy for many apps.

      And when it comes to uCs, I prefer to program something like an STM32; at least it comes with a sane CPU core & memory model, I can use the GCC C toolchain I'm familiar with and the editor of my choice, uploading firmware via one of the FTDI dongles, using STM's built-in boot loader to inject my code, at least initially. Even my notebook can handle it, which is convenient. There is one drawback: the STM32 is more complicated than a PIC or AVR. Though these days I mostly deal with even larger and more complex "application" SoCs running Linux. I like Linux and it gives me a lot of room for fun.
      I find the use of complex tools like a C compiler on chips with such limited resources usually an overkill, at least for the chips and uses I care about.
      Even on crappy cores, I _want_ control of the system. I like to be able to do several things simultaneously. I like control over time at the instruction-timing grain. C presumes far too much for my taste, at least for my usages.

      Sounds reasonable. But there are some things which are not very easy to solve. Say, once you limit current from outputs to prevent frying them on short-circuit or level clashing, you also limit the slew rate and make things slow. What are you going to do about it?
      1. Design the whole thing as a very modular concept.
      2. Do individual modules such that they are well documented, cheap, and easy to diagnose and repair. So if you happen to blow some buffer, replacing it is a matter of a sub-€1 expense and 10 minutes of work.
      3. Some slew is a good thing, especially when you want to limit overshoot/undershoot and ringing.
      4. The whole thing is meant to minimise wiring and inductances and so allow steeper signals and higher speeds. A big part of that is utilising microcontrollers' cheapness and I/O prowess. So instead of using one microcontroller in a central role, it goes for having many interchangeable backend boards, each one tied to one or a few sockets without much wiring and inductance in between.

      Good wish. Yet, open source makes sense when you can get others on your side, so they do part of the work. I doubt many open-source fans would be happy to deal with the proprietary MPLAB stuff and the proprietary-minded ecosystem around PICs.
      I plan to do those as well. MPLAB X seems to be the best of the bunch, but it's still painfully bloated, buggy etc.

      - capable of generating as well as acquiring waveforms, with chip programming being just special case of canned algorithms
      In fact, even FTDI can do something like this: the FT2232H and suchlike can turn their channels into 8-bit bitbang "ports" if a serial bus isn't what you want. Sure, a uC could do "pure bitbang" faster and better. But it also takes more effort.
      True, but far from the levels I have in mind. Also, they are too costly to be used as a bullet for one particular problem. And they are not as flexible: for example, if I need the chip to work from 1.8 V to 3.6 V or from 2 V to 5 V, or if I need a dual role for a particular pin, like A/D and T/S and OC etc.
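      For reference, the FTDI bitbang mode mentioned in the quote above, via libftdi; a minimal sketch assuming an FT2232H (VID 0x0403, PID 0x6010):

      ```c
      /* Switches an FT2232H channel to async bitbang and toggles all 8 pins.
       * Note the edge timing is at the mercy of USB scheduling. */
      #include <ftdi.h>
      #include <stdio.h>

      int main(void)
      {
          unsigned char buf[] = { 0xFF, 0x00 };   /* all pins high, then low */
          struct ftdi_context *ftdi = ftdi_new();

          if (!ftdi || ftdi_usb_open(ftdi, 0x0403, 0x6010) < 0) {
              fprintf(stderr, "no FT2232H found\n");
              return 1;
          }
          ftdi_set_bitmode(ftdi, 0xFF, BITMODE_BITBANG); /* all 8 bits as outputs */
          ftdi_write_data(ftdi, buf, sizeof(buf));
          ftdi_usb_close(ftdi);
          ftdi_free(ftdi);
          return 0;
      }
      ```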

      Then it should be recognised by tools like OpenOCD and flashrom, I guess. Unless you dream of implementing a load of flash programming algos, JTAG/SWD stuff, etc., for something like 100+ different ICs.
      Chips within a family have very similar programming algorithms. Also, I plan to go as the needs arise, and to structure my FW stuff so that it stays simple, for me and for others.

      IMHO it's not a topmost priority for a programmer. Speaking for myself, I do not care about +/-$5 for programmer parts. And those who do could find an ancient PC in a pile of garbage and use a Just Bunch Of Wires to LPT. It costs
      If you use the programmer as a programmer, sure. If you use it as a powerful I/O control, debug, test, instrumentation and even production tool, then things become different. I am, for example, trying to adapt an Epson inkjet for PCB printing. To do it right, I need to rework the paper-advance part, mechanics and electronics, and the printer driver. I came up with a way to do it with a micro that would be plugged into the printer's USB hub, so I could control the printer as well as the extra microcontroller through the same USB bus/tree.

      The board that I'm drawing as the central part of the programmer would be a nice fit. It has USB and the I/O needed, and if I mount just the components I need, it would be a cheap solution. The same goes for the other parts of the programmer. They are meant for generating and acquiring waveforms; the programming part is just a narrow piece of the spectrum.

      Sounds like an interesting approach. Though wouldn't it make the whole design complicated? And what about reliability? A bunch of connectors in the path of MHz-range signals is a potentially troublesome thing.
      I have some ideas on that part, too.

      Somehow, I recently stumbled on some fancy "digital isolator" ICs.
      Me too. It seems they are the better choice, especially since there are models that work with wide VCC ranges (with classic optocouplers, under 3 V it becomes a problem to turn the LED on at all). But these isolators have their share of problems, since they work by pumping an AC signal through internal capacitances. This means that all signals are essentially AM-modulated, which brings jitter etc.

      So if I wanted it quick, dirty, and working tomorrow, and able to use flashrom or OpenOCD with it, I would consider attaching one of these digital isolators to an FTDI and powering one domain of the isolator from the target system and the other from the FTDI side. I guess you would dislike such a solution though.

      I took a look at OpenOCD and I don't like it one bit. I want to do it my way.

      I would agree that a uC beats a pile of logic ICs. Though I dislike PICs, and I actually got a bunch of STM32s; the low-end parts were below the $1 price tag (!!!!). Though the cheapest parts lack USB. But very cool things overall.
      I plan to do those transitions later. For now, it's important to have an initial solution.

      There is one funny catch with all uC-based programmers. Imagine I'm a noob and do not have any programmer-like circuits yet.
      So I get the idea I need one. I stumble on "that superb programmer" project. And suddenly I understand that to program the uC I would need... wait, have you ever heard about "pkunzip.zip"? That's what makes uC-based programmers funny. Btw, FTDIs do not have this dumb issue: it just works once connected (and can self-program its EEPROM as a bonus), making "bootstrapping" easy.
      1. I plan to sell pre-programmed chips. I would document the protocols etc., just not the code. By "open-sourced" I meant all the components that would enable you to service/tweak/fork the project.

      2. The plan is to have everything modularised, which goes for the programming steps also, and to have chip descriptions in a form that can be manipulated relatively simply, without intimate knowledge about each chip.

      And I got LaserJet on my side, so I'm doing "toner transfer" for ad-hoc stuff, simple prototypes and other "simple" stuff I need here and there.
      I did sort-of test-coupon films for testing resolution. I can get 5/5 mil track/separation through to the developed photoresist without problem, and with a CUPS tweak I think I will be able to go down to 1 mil, which seems more than adequate, at least for quick prototypes that will be close enough to the final version that gets ordered from a PCB manufacturer. I made a directed, uniform planar UVB source and plan to have controlled developing/etching (mostly temperature control), and to get to a good method for resist application (screen print) and some pro chemicals, especially for the resist mask and white silk.

      As for CADs... KiCad recently got some high-profile features like a fancy interactive router, differential pair creation, and track length matching (handy for those who deal with, say, DDR memory). And since I do not like drilling, I usually go for "mostly single-sided" SMD, where the bottom layer is either missing or mostly serves as a ground/power plane with a minimal number of vias. SMD is a victory.
      I don't like tools that I can't tweak. I opened the sources of gEDA and PCB and I can find my way through them. I opened KiCad and got lost in all that object pr0n. Besides, it just doesn't feel right to me. gEDA, for all its flaws, can be adapted.
      Last edited by Brane215; 10 October 2015, 09:48 AM.



      • #23
        At the end of the day, it's a wrong idea to expect an arbitrary PC to be accurate within 1 µs, IMHO.
        OK, but that's a different subject. I'm saying that a full COM/LPT port offers many significant features that USB stuff lacks, and that quite often products do rely on those features.

        And my laptop lacks any COM or LPT ports; most laptops have been like this for years. Your solution is just not going to work on such configs. IMHO it's a major shortcoming of this approach.
        1. There are COM/LPT cards for notebooks, for the CompactWhatever port.

        2. I'm not "selling" this as the universal, only correct way to do it. Just the opposite: I'm just responding to your claim that USB-COM is _the_sh*t_ and the only thing worth using these days.
        Last edited by Brane215; 10 October 2015, 04:17 PM.

