Benchmark That Serial Port On Linux!


  • #11
    In Linux, writing to the system console is a blocking operation. If you have the kernel logging to a console on a serial device, then iptables (etc.) will block waiting for its log messages to be written out over serial.

    i.e.

    vi /boot/grub/grub.cfg
    kernel ... console=ttyS0,19200n8

    iptables -A WHATEVER -j LOG ...etc


    Comment


    • #12
      Originally posted by Brane215 View Post
      RS-232 is not designed for those speeds.
      RS-232 is an ancient protocol from the dumb-terminal age, with very inconvenient bipolar signalling levels and other prehistoric crap.

      Generally, a serial port ("ttyS0", "ttyUSB0", etc.) consists of at least 2 different parts: a "UART" part, which does the framing and clocking, and then the physical layer, which is another story. The physical layer can differ and does not have to use the ancient RS-232 bipolar levels all the time. It can be 3.3V CMOS, resulting in the "debug UART port" on embedded devices, or some other levels. Attach a UART to a differential line buffer and you'll get RS-422 or RS-485 (they have some differences, but the core idea is a UART attached to a differential line).

      Technically, USB to RS-232 dongles normally contain at least 2 ICs: the first IC is a USB-to-UART bridge like the FTDI FT232x, PL2303, etc. This IC outputs unipolar 3.3V CMOS UART signals, or sometimes 5V CMOS levels, because internally it is a digital CMOS IC just like anything else, and so it outputs common digital I/O levels. Then there is a second IC, a "line buffer" or "line driver". This one shifts levels from 3.3V or 5V logic to the bipolar +/-12V typical of RS-232. Examples of such ICs are the MAX232 (5V only) and the MAX3232 (which can deal with 3.3V levels too). As you can guess, there are many manufacturers making bridges and line buffers. Though full RS-232 with all its signals needs a somewhat more complex MAX232-like IC, since the classic MAX232 only comes with 2 Tx and 2 Rx buffers, which is not enough to handle the full RS-232 signal set.

      Technically, such an IC contains a "charge pump" which doubles the supply voltage (5V -> 10V and 3.3V -> 6.6V, both within RS-232 specs) and also provides an inverted voltage for the negative part of the true RS-232 waveform. This usually requires 4 capacitors around the chip. Btw, with this knowledge one can easily assemble a USB-to-RS232 dongle on their own: the schematics are fairly trivial, you just connect the 2 ICs together, and both bridges and line buffers these days are often designed to keep the external component count to a minimum.

      Btw, Chinese manufacturers invented a nasty "optimization" to make dongles cheaper: they skip the line buffer, and the bridge puts out unipolar 5V CMOS logic directly as ... "fake RS-232" (it requires a logical-NOT inversion of the signal, which the bridge IC does internally). Since most equipment these days actually uses unipolar logic and does not care about the negative part of the waveform, it (sometimes) works. Yet it violates the RS-232 spec and will not work with some equipment. What is even worse, the bridge IC isn't fond of getting the negative part of the wave into a 5V logic input, so it can die in a matter of days to months, since digital 5V inputs aren't meant to withstand -12V indefinitely. So if you see some super-cheap USB to RS-232 dongle in some Chinese online shop, beware: these cheaters saved cents on the proper line buffer. It is not a "wow, superb deal!" but a round of "Chinese roulette" instead, which can cause a lot of trouble and be really unreliable.

      Who cares about high speeds if the connection is not reliable?
      RS-232 was never meant to be reliable. It lacks error checking. At most you get a parity bit as a framing option, which is really weak protection against signal corruption on the wire. There are no stronger CRCs, etc., and no corrupt-data retransmissions defined (like USB has). One can implement their own custom protocol on top of UART framing if that's what they want, but it is not part of the standard, so...
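      A custom CRC-checked protocol on top of plain UART bytes can be tiny. A minimal Python sketch (the frame layout here, a length prefix plus CRC-16/CCITT, is purely illustrative and not any standard):

```python
import struct

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16/CCITT-FALSE, a common pick for small serial protocols."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def frame(payload: bytes) -> bytes:
    """Big-endian length prefix + payload + CRC, sent as plain UART bytes."""
    return struct.pack(">H", len(payload)) + payload + struct.pack(">H", crc16_ccitt(payload))

def unframe(data: bytes) -> bytes:
    """Validate and strip the framing; raises if line noise corrupted the frame."""
    (length,) = struct.unpack(">H", data[:2])
    payload = data[2:2 + length]
    (crc,) = struct.unpack(">H", data[2 + length:4 + length])
    if crc16_ccitt(payload) != crc:
        raise ValueError("CRC mismatch")
    return payload
```

      Both ends must of course agree on this ad-hoc framing, which is exactly why it stops being "standard UART" for any other software.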

      Whole point of RS-232 is either compatibility with old machines or robust communication over some length.
      The whole point of RS-232 is that it is outdated crap from the dumb-terminal age with a defective/troublesome physical layer, which is only used as such for compatibility with old gear.

      If you have a couple hundred meters of good wire and a bit of environmental noise, you'd probably work at 9600 bps or similar speed.
      On the other hand, Ethernet (which uses more sophisticated low-voltage differential signalling with DC-balanced line coding) can do 100Mbps and even 1Gbps at 100 meters over widespread UTP: an orders-of-magnitude better distance-to-speed tradeoff, and better reliability, though it is a far more complex thing as a result. There is an obvious tradeoff between speed and distance, and advanced line coding and differential lines improve it. RS-232 was designed in a way that has to re-charge the whole line (no DC-balanced "line coding"), which makes it really bad in terms of speed-to-distance ratio. It is also a non-balanced, non-differential line, which means any external noise (EMI, etc.) can damage the signal on the wire easily. The idea behind differential lines is that the receiver only cares about the difference between the two wires' voltages, not the absolute values. Noise affects 2 closely spaced wires in almost identical ways, so the receiver can subtract it out and recover just the original signal. This allows a lower voltage swing (== more speed) while retaining reliability. Now take a look at USB, SATA, HDMI, DisplayPort, Ethernet... do you notice they all follow a certain pattern?

      If one wants to keep it simple while going some distance, a better option could be a differential line, resulting in something like RS-422 or RS-485. These can go a reasonable distance while retaining Mbps, or at least hundreds of Kbps. Though they lack DC-balanced coding on their own and are too simplistic, so they will not beat more advanced things like Ethernet.

      FTDI type solutions tend to be cr*p because they defeat the whole PnP concept of USB, since you don't have a way of telling two chips apart when they are used in different gadgets.
      It really depends. If you manufacture your own devices and you know each has some well-defined function, you can write a proper and unique USB descriptor to EEPROM: FTDI and most other bridges allow it (some even come with a built-in EEPROM area inside the bridge IC), as long as you can afford your own VID & PID. Then software can tell it's your device and not something else, if desired. Though user-mode programs would need to go beyond the UART abstraction and learn how to scan the USB bus and find the right VID/PID (e.g. via libusb, etc.). UART on its own is not plug-and-play, and that is a property of UART, not of FTDI or anyone else.

      As an example: some weather station comes with an FTDI soldered in as its USB-to-UART bridge. The station exposes USB externally, and internally the bridge is wired to the station's microcontroller, allowing the uC to communicate with the PC in simple ways. And while it looks like a UART (ttyUSB* under Linux), you can also do a bus scan for the unique VID/PID pair, if that's what you want. In Linux you can, say, set up udev to launch a program once a particular VID/PID appears on the bus. This allows plug-and-play: you can run a daemon which collects data from this particular weather station and ignores other UART-like devices. The catch? First you have to go beyond plain UART and learn about VID/PID. And imagine you've got 2 similar weather stations: now VID/PID knowledge alone wouldn't be enough :P. But it's a tradeoff. Either your program knows nothing about USB and so gets no PnP, or you go read the USB specs. "Just" a few hundred pages and you're done!
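      For the curious, such a udev hook is close to a one-liner. A sketch (the VID/PID are FTDI's stock 0403:6001 defaults; the symlink and service names are hypothetical and would be your own):

```
# /etc/udev/rules.d/99-weather.rules -- hypothetical example
# When a tty backed by this exact USB VID/PID appears, create a stable
# symlink and ask systemd to start a (hypothetical) collector service.
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", \
    SYMLINK+="weather0", TAG+="systemd", ENV{SYSTEMD_WANTS}="weather-logger.service"
```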

      With ordinary USB, it is simple, as each implementation has to have a distinct VendorId and Model_Id.
      It really depends on what you want. Generic UART and RS-232 are universal things: you do not know what is attached at the other end of the wire, regardless of whether it is a PCIe card or a USB dongle. It can be a modem, or some terminal console, or some equipment with a custom protocol. If you implement some device which pretends to be a UART and you want to give a more precise hint about what it is, you can program an appropriate USB descriptor into the bridge's EEPROM. FTDI even comes with fancy Linux tools to do it, for example.

      Which frequently doesn't work as intended, even at standard speeds.
      First, UART is not plug-and-play. Second, USB has different timings, and multi-tasking OSes aren't bitbanging's best friend either. So DOS-age techniques are obsolete. Time to admit it; it's 2015 after all.

      baudrate out of it since it has some hack with frequency division. It has Oxford Computer chip.
      Hack? Hmm, does Linux's TCSETS2 ioctl fail to work with this driver, or what? Using TCSETS2 (the termios2 interface) one can request ANY baud rate, as long as the hardware can do it and the driver knows how to set its hardware up; no hacks required (beyond the Linux-specific ioctl). So most FTDI-like dongles can go to really uncommon or very high bauds, etc. Though one may want to RTFM the bridge IC's datasheet to get an idea of what it can do, i.e. what the clock source is, which divisors you can get, and which baud rates you can achieve with those combinations, subject to the ~2% baud rate error tolerance.
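      As a back-of-the-envelope illustration of that divisor math (a classic 16550-style generator, baud = clock / (16 * divisor); the numbers below assume the usual 1.8432 MHz crystal):

```python
def best_divisor(clock_hz: int, target_baud: int, oversample: int = 16):
    """Pick the nearest integer divisor and report the resulting baud error."""
    divisor = max(1, round(clock_hz / (oversample * target_baud)))
    actual = clock_hz / (oversample * divisor)
    error = abs(actual - target_baud) / target_baud
    return divisor, actual, error

# Standard rates divide the classic 1.8432 MHz clock exactly:
print(best_divisor(1_843_200, 115200))  # divisor 1, no error
# An oddball rate like 56000 lands on 57600 instead, roughly 2.9% off,
# which is already outside the ~2% tolerance -- hence: read the datasheet.
print(best_divisor(1_843_200, 56000))
```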

      This is part of the reason why I suggested bug tests. WRT higher speeds, one has to be careful with the conditions. RS-232 is good up to a couple hundred kbps, and even that on perhaps tens of meters.
      There is an obvious tradeoff between line speed and line length, and pure RS-232 is bad in terms of bitrate-to-line-length ratio. It is just ancient, nearly the worst line coding technique one can use at all. Sure, it wouldn't do 1.5Mbps over 20 meters, neither at RS-232 levels nor at CMOS levels. But if you connect two 3.3V UARTs using a 30cm wire, you can expect 1.5Mbps to work, as long as both sides can talk at this baud rate. With some error correction I've run a 3.3V CMOS UART at 921600 (230400 * 4) over about 1.5 meters. Though high-speed things are much better done with differential lines and DC-balanced coding :P.

      How would you connect, say, a GPS receiver module with a PPS signal to such a combo? PPS uses a handshake signal of the COM port as an edge-triggered interrupt source on each second lapse
      In Linux you'll be able to see the RS-232 line transition using a proper USB-to-232 dongle. Both the PCIe driver and the USB bridge driver must set the port status structure appropriately, and programs can access this data without even knowing which driver did it and what kind of hardware it was. But you'll have to live with the fact that your timing only comes with 1ms accuracy at the very best, due to USB framing.

      and latency and jitter are important there. On an ordinary COM port you can get latency & jitter probably on the order of a couple hundred ns. A USB-COM solution is bound to be many times worse.
      Unless you run a single-task OS, there is task scheduling to consider, and many other things. Basically, a full-fledged general-purpose multi-tasking OS running on an application processor is not meant to deal with nanosecond precision without extra hardware support. If one really needs nanoseconds, it is much better to offload the job to an external microcontroller or FPGA. And on your x86... imagine something like the chipset has elected to do an SMI call, and it took a while. There is nothing the OS can do about it; it can't even disable SMI. At the very most, the OS can notice that the SMI handler took some CPU time. You can't really rely on it taking "less than X". You have neither the BIOS source nor the board schematic, so you have no idea what the SMI handler can do or when it can kick in. So, good luck expecting nanosecond precision.

      Wrong. The whole point of RS-232 is its voltage swing, which gives it robustness.
      OTOH, a large swing on a long line == you can't recharge the line fast, hence hitting speed limits. The whole point of differential lines is to use a low voltage swing while retaining noise immunity. And DC-balanced coding means you don't have to care about recharging the whole line either: the line does not accumulate charge in the first place.

      With 3V and short distances one can get better speeds other ways when needed.
      Right. But UART is simple. And just connecting 2 UARTs is simple, and takes only 3 wires as a bare minimum. That makes it convenient.

      you can get equivalent for expansion card, whatever flavor of PCIe it is nowadays.
      My notebook comes without user-accessible expansion cards. It uses PCIe for Wi-Fi internally, and I'm not willing to remove it. And overall, PCs are not meant for nanosecond precision. As simple as that.

      This brings you the I/O closer to the "metal" and you don't have to muck with USB bus tree...
      If one wants something close to the metal, a microcontroller would be a far better option. It can run single-task firmware without NMIs, SMIs and task-switching issues, reaching the nanosecond range.

      Dumb shit? Who cares. I have an old Xeltek Superpro III programmer with 80-pin drivers that works perfectly with many non-ISP chips.
      Well, I do not care: I mostly deal with SPI/I2C EEPROMs/flashes, JTAG, some embedded design debugging, etc. And for real, modern microcontrollers are all ISP-capable and/or come with a boot loader. The same goes for larger devices.

      I certainly wont throw it away just because it uses LPT in "stupid way".
      Hardly my fault. It would only work with a very specific subset of hardware and software, and it would put very annoying limits on OS choice and PC choice. Not to mention it is probably unmaintained at this point and only able to program some ancient parts... hmm, I'm not a hardware necromancer.

      And BTW, for some other apps, why exactly is direct pin drive that stupid if done within a kernel driver?
      To get more or less precise timings you either must disable task switching and bring everything to a halt, or go further and use a single-task OS. This is fundamentally wrong for multi-tasking systems. If you're not so critical about timings... ok, Linux can do, say, an MMC host in pure GPIO, and it works, often in the 100kHz to MHz range even on small systems. But you see, MMC/SD assumes a clock signal, and if the whole system gets busy, the GPIO-based clock stops too, hence no signalling violation occurs; the bus is just suspended for some time. But it does not come with nanosecond-range guarantees, and it's also taxing on the main CPU. Generally it's the wrong way of doing I/O, only ok for slow speeds or as a last-resort solution. There are much better solutions these days. FTDI is one of them, btw.

      Many boards have IEEE-1284 (LPT's last reincarnation) even today.
      And once more, my notebook lacks it, so I can't do it on the go. I see that as a major limitation of such techniques. OTOH, an FTDI or even a custom uC can swing GPIOs in the desired ways using my notebook, or even my smartphone, as the host. The latter earns some funny looks from bystanders.

      1284 even has "fast" bidirectional transfer and buffering.
      Speaking for myself: "let it float by itself, stupid piece of iron".

      It is likely that SW that works with COM expects to talk to an 8250/16450/16550 and doesn't care for USB-COM.
      This also assumes some ancient DOS-age software, which sucks a lot. Realistically speaking, USB-COM things are used to communicate with industrial or networking equipment, deal with old UPSes and so on. And these do not really need sub-millisecond timings. Not to mention such timings are generally an issue in a multi-tasking OS anyway.

      Comment


      • #13
        @SystemCrasher:


        My point was that RS-232 is nowadays never used for its speed, so benchmarking it makes little sense. Who cares if your drivers can do 230 kbps or go to 1M and beyond, and over which piece of wire. There are other solutions for that. But once you get stranded with RS-232, there is usually some reason, which is often compatibility with existing equipment, sometimes price (cheap drivers for trivial apps, etc.).

        In such cases, other factors come into play, like buffer depth, buffer control, bugs and errata, ESD protection, and baud rate span (more in the sense of catching non-standard speeds than drag-racing).

        While USB-COM might be useful in some situations, it is far from a universal solution, as it is basically a platypus-like creature: neither bird nor mammal.

        RS-232 is good for byte fiddling with tight timing control and few complications. USB packetizes and bundles the traffic, so it kills all that.

        In my app I have existing gear that sends me a group of N * 2 bytes (back to back) every X seconds. While receiving, all I can do to synchronise with a group is to set the UART to hand me each byte (shallow buffer) and to record and measure the time between successive bytes. USB totally kills, or at least severely hampers, that.
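        That receive-side logic can at least be prototyped as pure code, independent of the port. A hypothetical Python sketch: split a stream of (timestamp, byte) arrivals into back-to-back groups wherever the inter-byte gap exceeds a threshold (on real hardware the timestamps would come from reading one byte at a time off a shallow FIFO, which is exactly what USB batching ruins):

```python
def group_by_gap(arrivals, max_gap_s=0.005):
    """Split (timestamp, byte) pairs into groups of back-to-back bytes.

    A new group starts whenever the silence since the previous byte
    exceeds max_gap_s (the threshold is illustrative; tune per protocol).
    """
    groups, last_t = [], None
    for t, b in arrivals:
        if last_t is None or (t - last_t) > max_gap_s:
            groups.append(bytearray())
        groups[-1].append(b)
        last_t = t
    return [bytes(g) for g in groups]

# Two bytes back to back, 100 ms of silence, then two more:
print(group_by_gap([(0.000, 0x01), (0.001, 0x02), (0.100, 0x03), (0.101, 0x04)]))
```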

        Not to mention all the extra points of failure this gymnastics through USB brings. It might be fine for some FW update or a kernel console, but a full, classic COM port offers more than that, and that stuff can come in very handy while working with microcontrollers.

        On micros, UART, SPI, I2C etc. are cheap. USB is expensive, both in chip price and in development cost. If you have to muck around with a USB stack anyway, fine. But why go through all that needlessly when you can just shove a byte or two over a MAX232 for debug?

        This shows in the component choice for my programmer. Good, cheap 8-bit micros with plenty of I/O sell for peanuts per metric ton. But once you demand even slow USB, prices skyrocket and the number of choices drops. This is why I went with the PIC16F1459 on the USB side of my programmer; it's the cheapest choice with a USB interface. I'll connect it to micros like the PIC16F1527 in QFP64 through an SPI port (optically separated) for the backend work.

        IOW, I plan to use each protocol where it's good: USB on the PC communication side, since programming something like a modern 64Mbit flash might take much too long over RS-232, and SPI within the programmer.





        Last edited by Brane215; 07 October 2015, 06:46 AM. Reason: Edited away just that annoying bold attr over entire text...

        Comment


        • #14
          In some industries, serial ports are still used, and it is all a matter of taking care of Phoronix Test Suite commercial customer needs
          Actually, the only industry that doesn't use serial ports (that much) is the desktop PC.
          Even PC servers have serial consoles.
          The ordinary TV has a serial port (accessible for maintenance, or just in plain sight). My Harman Kardon had a serial port.
          My Hercules universal DJ has a USB MIDI port which it bridges to Bluetooth RFCOMM (the serial port profile).
          Currently I have at least 7 USB-to-serial bridges in use in my personal cloud to get to the consoles.

          So I welcome anything related to serial port.

          Comment


          • #15
            Originally posted by Brane215 View Post
            BTW, I'm drawing up my own USB programming tool that is to have a microcontroller and work over USB.
            A valid idea too. But it requires putting more effort into learning about USB and coding firmware. FTDI lets you get a "UART", "MPSSE" (Multi-Protocol Synchronous Serial Engine; can do I2C, SPI and suchlike) or just a "bunch of GPIOs" with fancy libs in easy ways, so you can do GPIO, common low-pin-count buses or plain UART easily, without coding firmware. Simpler USB bridges are less fun and can do just UART, sure. Yet since most uCs and small embedded boards & devices have UARTs, it can be an interesting option anyway, e.g. for debugging, recovery, or just messing with the built-in OS of a router, TV or whatever (there is often a root console on the UART and plenty of debug output, so those familiar with Linux can have some fun).

            I see that first instinct of many DIY-ers is to reach for FTDI.
            FTDI has limitations, sure. Yet it's a good solution to my taste, because it allows you to do some things easily. And it works well in Linux; they even have some utilities to program USB descriptors to EEPROM, Linux included. The FT2232(H) and FT4232(H) are also used in many cheap (but reasonably fast, thanks to MPSSE!) JTAG adapters, which can be used with the more or less common OpenOCD rather than some proprietary crap. IMHO, FTDIs are good for low-cost debugging/programming/etc. tools which do not rely on archaic crap like LPT, do not require plugging in a card (which isn't going to work for notebooks), and do not imply a single-task OS, etc. (that's why the transfers need buffering).

            I hate them as they are expensive
            A valid point, but it does not matter for custom devices. If you plan to manufacture >10 000 units, reducing costs could be worth it. If you're a DIYer or a low-volume project, saving $5 on one of a few devices at the expense of extra work-hours... hmmph, working at something like $1/hour? That looks really silly, unless it is something like "I want to learn how USB works", etc.

            and they dumb-down whole PnP concept of USB.
            You can program your own USB device descriptor into an FTDI, as well as into many other bridges, if that's what you want. But sure, it would stay just a serial engine plus GPIO. You can't turn an FTDI into a keyboard or a mass storage device, sure.

            I decided to do my own USB stack on PIC16F1469 ( for USB)+ extra PIC for I/O.
            Speaking for myself, I dislike PICs. The PIC16 arch sucks balls. And two ICs? Hmm, I do not get why you can't just grab a single uC with USB and enough GPIO.

            Yes, I know that there are many other choices, but I'm limited to Microchip at the moment. Those PICs are cheap.
            Speaking for myself, PIC is the last arch on the planet I would consider dealing with. Atmels are equally archaic, but at least they have a more pleasant system architecture, and gcc works for them. STM32s are both cheap and a real powerhouse when it comes to peripherals. And since I do not target 10 000+ devices at the moment, I do not have to bother about saving a few extra cents at the expense of some major headaches.

            ...and on a side note, "musb" OTG support for Allwinner SoCs just landed in the mainline 4.3-rc kernels. I guess it's time for me to go and have some fun with "USB devices" too, though in really different ways. Uhm, apart from anything else, IIRC Linux can also pretend to be a virtual serial port device.

            Comment


            • #16
              Originally posted by djzort View Post
              In linux, writing to the system console is a blocking operation. If you have kernel logging to the console with a serial device - then you will have iptables (etc) blocked waiting for log messages to write to serial.
              If you have ever done serial consoles on Linux, you must know that any kind of flow control on the serial port must be turned off.
              Since people usually also start a getty on the serial console, they must configure that getty correctly.
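              In the systemd era that usually boils down to something like the following (a sketch; the exact agetty flags shipped in serial-getty@.service vary by distro):

```
# Kernel side: console=ttyS0,115200n8 on the cmdline, no flow control.
# Userspace side: enable the stock serial getty unit for that port.
systemctl enable --now serial-getty@ttyS0.service
# The unit runs agetty on ttyS0; hardware flow control stays off unless
# agetty's -h flag is explicitly passed, which you should not do for a
# console wired with only RxD/TxD/GND.
```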

              Comment


              • #17
                Originally posted by SystemCrasher View Post
                Valid point, but it does not matters for custom devices. If you plan to manufacture >10 000 units, reducing costs could be worth of it. If you're DIY or low-scale project, saving $5 for one of few devices at expense of extra workhours... hmmph, working at something like $1/hour? This looks really silly, unless it something like "I want to learn how USB works", etc.
                The problem with canned solutions is, amongst other things, that metallic aftertaste from the can. ;o) If you use everything that way, your added value gravitates toward 0, and so does your $$$. Even in small series, I want to have things that really work and that pack a nice bang per buck. Trying this with such solutions would be like making a competitive car from LEGO. Once you jump over some obstacle, it becomes the discriminating factor between your solution and the others.

                Speaking for myself, I dislike PICs. The PIC16 arch sucks balls. And two ICs? Hmm, I do not get why you can't just grab a single uC with USB and enough GPIO.
                1. The PIC10/12/16/18 core is utter crap. But Microchip has a very wide chip portfolio, and its I/O capabilities are really good. Look at the PIC16F1527 and try to find anything with that amount of I/O for that kind of money. And it works from 2V to 5V, which is kind of important for a programmer as well as for apps that demand noise tolerance, etc.

                Also, MPLAB X works well on Linux (or well enough), and I already have a PICkit 3 and don't want to pay for all those alternative ISP programmers before I cobble up my own solution.


                2. I am sick and tired of all those crappy programmers that end up killing either themselves or the chip being programmed. Or the board where that chip is mounted.

                I need something:

                - open sourced, so I can service and tweak it when needed.
                - capable of generating as well as acquiring waveforms, with chip programming being just a special case of canned algorithms
                - extensible within reasonable limits
                - cheap
                - modularized: USB part and power generation on one board, customized backend on another, and the final part with the expensive ZIF socket on a separate small board.
                - easily and cheaply serviceable
                - above all - with everything GALVANICALLY INSULATED

                You REALLY need to be insulated from the target machine, and that is another point where internal separation and communication through SPI come in handy, since SPI is easy to isolate with fast optocouplers. The PIC16F1527 is so cheap and has so much I/O that it doesn't make much sense to use a pile of CMOS/TTL chips as a backend for programming old EPROMs, for example. It's much cheaper and easier to just plop a microcontroller on the backend board and be done with it.

                I plan to modularize its FW so I can tweak it easily for some other function or some other backend board, so that I don't need to plan too far ahead for every imaginable E/EPROM.
                If I come across some new chip that needs something new and I can't make the existing backend board do it, I plan to either tweak the FW and/or the existing backend board, and/or make a new version of the board. Since the boards are small and easily DIY-able with my inkjet and photoresist, that shouldn't be too much of a PITA. I work with gEDA and export EPS, tweak it in Inkscape and print it on my trusty old Epson R800 on transparency. The results are very good, and it usually takes me less than an hour from starting the print job to holding a finished 2-sided PCB in my hands.
                Last edited by Brane215; 07 October 2015, 08:44 AM.

                Comment


                • #18
                  Originally posted by Brane215 View Post
                  My point was that RS-232 is nowadays never used for its speed, so benchmarking it makes little sense.
                  That's somewhat true as far as benchmarks go. Yet I sometimes deal with high-baud modes.

                  Who cares if your drivers can do 230 kbps or go to 1M and beyond, and over which piece of wire.
                  I do. In some cases.

                  There are other solutions for that.
                  Sure, but few things can beat a 3-wire UART in terms of simplicity, on both the electrical and software sides. And the same program on the PC side is ok with virtual UARTs too, so it's more flexible in terms of upgrades and the actual hardware implementation.

                  While USB-COM might be useful in some situations, it is far from universal solution as it is basically Platypus-like creature - neither bird nor mammal.
                  Yet it's my favorite solution for boot debugging of, say, little ARM Linux boards, debricking routers, etc. And I'm not alone in that. The boot loader failed to find my kernel? Where should it swear, so I can understand why the system failed to boot? UART, for sure. What else? It's just 3 wires and simple on the software side, so even a minimal boot loader a few KiB long can spam to the UART. I do not care about wire length in such cases: it hardly exceeds 1 meter. OTOH, a low baud rate would make the system boot slowly.

                  RS-232 is good for byte fiddling with tight timing control and few complications.
                  That's what GPIO is for. And trying to turn a UART into GPIO? Down the river drifts an axe... Not to mention RS-232 uses bipolar rather than CMOS/TTL logic levels. That's why I like 3.3V UARTs: 12V bipolar levels are archaic and can't interoperate with the rest of a digital circuit directly.

                  USB packetizes and bundles the traffic, so it kills all that.
                  That's where stuff like FTDI's MPSSE helps. Sure, you can implement something similar or even better with a uC, using USB and GPIO; I guess that's what you're up to. FTDI allows you to do it "quick-n-dirty", and it works, though. And apart from anything else, stuff like http://randomprojects.org/wiki/RushSPI is so trivial that direct-toner-transfer gurus can etch and solder it right in their kitchens, and all the parts are fairly common and can be bought even in small local component shops (yep, FTDIs are popular among DIY people, for good reasons).

                  byte ( shallow buffer) and to record and measure the time between successive bytes. USB totally kills or at least severely hampers that.
                  I'm sorry to inform you, but if your protocol heavily relies on measuring UART timings on the PC side, I would consider it poor engineering practice. The DOS age is over, even if some people refuse to acknowledge it. Now there is multi-tasking, API calls and very different UART kinds, including virtual ones. Say, I have a GPS receiver. It accepts a Bluetooth connection, then it outputs NMEA over a virtual UART. The good thing about it? You can use any NMEA-capable software as if it were a usual COM port. Even if it is actually Bluetooth, that's not up to the GPS-related programs, just like they do not have to care whether I have an SSD or HDD when they issue a read() syscall. I guess in the future there could be some standard USB class for this, and once software understands it, it will be ok to forget about the virtual UART abstraction...
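                  And the nice thing about NMEA is that its integrity check lives in the byte stream itself, not in port timing. A small Python sketch of the NMEA 0183 checksum (XOR of all characters between '$' and '*'; the sample sentence is made up):

```python
def nmea_checksum(sentence: str) -> str:
    """Return the NMEA 0183 checksum as two uppercase hex digits.

    The checksum is the XOR of every character between the leading '$'
    and the '*' separator, so it works the same over a real COM port,
    a USB bridge or a Bluetooth virtual UART.
    """
    body = sentence.strip().lstrip("$").split("*", 1)[0]
    csum = 0
    for ch in body:
        csum ^= ord(ch)
    return f"{csum:02X}"

print(nmea_checksum("$GPTXT,TEST"))  # made-up sentence; prints 75
```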

                  Not to mention all extra points of failure this gymnastics through USB brings.
                  That's correct, to some degree. On the other hand, Linux is usually reasonable at handling USB failures these days. And USB is a differential bus with CRCs and retransmissions, so unlike plain UART you would not just get arbitrary garbage without even knowing it's broken. Sure, you can write your own custom protocol with CRCs on top of UART. And that's where UART stops being standard: only a few programs on the planet would understand this custom protocol. It's not just the end of PnP; it also puts limits on software compatibility.

                  It might be fine for some FW update or kernel console, but full, classic COM port offers more than that and that stuff might come very handy while working with microcontrollers.
                  Sure thing. And most uCs can go waaaay above 230400 on their UARTs :P. I fail to see why I shouldn't use the hardware's capabilities.

                  On micros, UART, SPI, I2C etc are cheap. USB is expensive, both in chip price as well as in development cost.
                  That's why FTDI and similar bridges appeared in the first place. They eliminate this cost while still allowing one to interface with usual PCs, like my notebook, where I have neither COM nor LPT.

                  If you have to muck around with a USB stack, fine. But why go through all that needlessly when you can just shove a byte or two over a MAX232 for debug?
                  Well, let's assume I've got my notebook and a MAX232. Now what? It does not even have a working /dev/ttyS0. OTOH, attaching FTDI-based things works like a charm, and I use them here and there for serial consoles, debugging, debricking and so on. If I needed 12V RS-232, I would hook a MAX232 up to the FTDI thingy, obviously.

                  In fact, I would prefer it on my desktop too: it can go way above 115200, unlike the desktop's port, and it has larger buffers, relaxing the timing requirements for software. Btw, in modern computers COM and LPT aren't real either. There is just the LPC bus and ... yet another multi-function bridge IC, which pretends to be a bunch of legacy peripherals, including COM and LPT. The LPC bus isn't anywhere close to a classic 16550 on its own; dirty hacks are used to conceal this fact from the rest of the system.

                  This shows in the component choice for my programmer. Good, cheap 8-bit micros with plenty of I/O sell for peanuts per metric ton.
                  Not really sure why it matters. I do not plan to make 10 000+ units or try to outrun Chinese factories, who can do it cheaper anyway. They would just go for mask ROM and not solder half of the capacitors, "because it seems to work somehow anyway". Not to mention Chinese workers are going to work almost 24/7 for food.

                  But once you demand even slow USB, prices skyrocket and the number of choices drops.
                  Realistically speaking, I'm OK with paying $5 extra for my debugging tools if it spares my brain the insane DOS-aged crap. And if I do some custom device, I'm not targeting sub-$10 markets anyway, so it hardly matters in most cases.

                  This is why I went with a PIC16F1459 on the USB side of my programmer. It's the cheapest choice with a USB interface.
                  A programmer is inherently not a mass-market thing, and if devs like it, they can surely afford some $5 extra. On the other hand, since it's unlikely to be sold in 10K+ quantities, it's a big question whether the effort spent on learning a lot about USB would be covered by the unit cost reduction. I doubt it.

                  I'll connect it through SPI (optically isolated) to micros like the PIC16F1527 in QFP64, for the backend work.
                  I have no major objections to such a design itself, except that it takes a lot of effort to get there and... what superb benefits does it offer compared to other designs? I can understand if you want to learn USB, though.

                  IOW, I plan to use each protocol where it's good: USB on the PC communication side, since programming something like a modern 64Mbit flash might take far too long over RS-232, and SPI within the programmer.
                  Ahh, sure thing. But people are already flashing such SPI ICs here and there using mere FTDIs, which can do reasonably fast SPI for them thanks to MPSSE.

                  And I wonder about one thing: there are plenty of SPI flashes, I2C EEPROMs and so on, and they can have different programming algorithms. Who maintains the algorithms for all those ICs in your design? The FTDI solution is dumb, but it has one advantage: you can just upgrade or rewrite the PC-side program and rock-n-roll with new ICs. For JTAG it's hard to beat OpenOCD; for flashes, flashrom (http://linux.die.net/man/8/flashrom) can handle quite a lot of SPI parts. Though I guess you could hook them up to your programmer too. But it's not PnP: you have to specify an "adapter" for openocd and flashrom, and since it's fine to have more than one adapter in a system, they have no chance of guessing it properly anyway.

                  Comment


                  • #19
                    Originally posted by SystemCrasher View Post
                    It's somewhat true about benchmarks. Yet I sometimes deal with high-baud modes.


                    I do. In some cases.
                    If that's some significant branch of COM port usage, fine. But even then it would be nice to have a somewhat parametrised test with declared wire length, capabilities etc.
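One parameter such a test could declare is cable capacitance, since the RS-232 spec caps the total load capacitance a driver must handle at 2500 pF. A rough sketch; the 50 and 100 pF/m figures are typical ballpark values for illustration, not measurements:

```python
# Back-of-envelope cable length limit for RS-232: the spec bounds the total
# load capacitance at 2500 pF, so the cable's capacitance per metre bounds
# the usable length (and indirectly the reliable baud rate on long runs).

MAX_LOAD_PF = 2500.0  # RS-232 load capacitance budget

def max_cable_length_m(cable_pf_per_m: float, receiver_input_pf: float = 0.0) -> float:
    """Longest cable that keeps total load within the RS-232 budget."""
    return (MAX_LOAD_PF - receiver_input_pf) / cable_pf_per_m

print(f"{max_cable_length_m(50.0):.0f} m at 50 pF/m")    # lighter cable
print(f"{max_cable_length_m(100.0):.0f} m at 100 pF/m")  # heavier cable
```

A parametrised benchmark could then report results as "baud B over L metres of C pF/m cable" rather than a bare throughput number.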


                    Originally posted by SystemCrasher View Post
                    I'm sorry to inform you, but if your protocol heavily relies on measuring UART timings on the PC side, I would consider it poor engineering practice.
                    It's part of existing, quite expensive equipment that I have to work around. Telling a customer to cough up 20k for new gear just so I can feel modern is out of the question.

                    The DOS age is over, even if some people refuse to acknowledge it. Now there is multi-tasking, API calls and very different kinds of UART, including virtual ones.
                    Or writing a specialised driver within the kernel, which would reserve a particular COM port, create a new "device" and expose access to it through IOCTLs.


                    Say, I have a GPS receiver. It accepts a bluetooth connection, then outputs NMEA over a virtual UART. The good thing about it? You can use any NMEA-capable software as if it were a usual COM port.
                    So how do you use that to synchronise the internal clock to something like 1 µs or better? With an ordinary COM port, it's not _that_ hard. You pick one of the COM port handshake pins that can generate an interrupt (like RTS/CTS) and connect the PPS signal to it. Then you program the HW to generate an interrupt on pin change. In the interrupt handler you simply record the TSC counter (a 64-bit counter with ~1 ns resolution) and make it available to userland through /proc. The same userland can then determine and order an RTC clock change on the next PPS, for example.

                    All the in-kernel work here is simple and low on CPU load, handled in one short timeslice within the interrupt handler.

                    So how do you build an equivalent solution with your combination?
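However the PPS edge timestamps reach userland (handshake-pin interrupt, /proc, whatever), the userland half of the scheme above reduces to fitting local timestamps of consecutive 1 Hz edges against ideal whole seconds. A hypothetical least-squares sketch; the captured timestamps below are fabricated:

```python
# Disciplining a local clock from PPS captures: fit the local timestamp of
# the n-th 1 Hz edge as  local = offset + (1 + drift) * n  by least squares.
# This is a sketch of the math only, not a driver.

def fit_clock(local_ts: list[float]) -> tuple[float, float]:
    """Return (offset_seconds, drift), where drift is the fractional
    frequency error of the local clock relative to the PPS source."""
    n = len(local_ts)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(local_ts) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, local_ts))
             / sum((x - mean_x) ** 2 for x in xs))
    offset = mean_y - slope * mean_x
    return offset, slope - 1.0

# Fabricated capture: local clock starts 0.25 s ahead and runs 20 ppm fast.
ts = [0.25 + k * (1 + 20e-6) for k in range(10)]
offset, drift = fit_clock(ts)
print(f"offset = {offset:.6f} s, drift = {drift * 1e6:.1f} ppm")
```

With real captures the residuals of the fit also tell you the timestamping jitter, i.e. whether the capture path is actually good for 1 µs.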

                    In fact I would prefer it on my desktop too: it can go way above 115200, unlike the desktop's port, and has larger buffers, relaxing timing requirements for software.
                    COM port usage is usually intermittent (debug, update etc.), and system load then is not a problem.

                    Btw, in modern computers COM and LPT aren't real either.

                    On my Phenom, I can see LPT and COM on their standard port addresses. Likewise with an addon PCI card, and I suspect a PCIe version would be the same.
                    I have played with an AM1 board recently and IIRC it's the same there, too. All those functions are within the SuperIO, even though they could be added through the LPC/TPM port.

                    But even if they were done through MMIO, as long as there is direct access to the metal and an open source kernel driver, it would be fine with me.

                    Not really sure why it matters. I do not plan to make 10 000+ units or try to outrun Chinese factories, who can do it cheaper anyway. They would just go for mask ROM and not solder half of the capacitors, "because it seems to work somehow anyway". Not to mention Chinese workers are going to work almost 24/7 for food.
                    It matters to me. With such a solution I can standardise my purchases and use one chip in many roles, buying greater quantities at a better price per part. This means lower manufacturing prices, less hassle with material purchasing, substantial savings, and fewer problems and costs with servicing.

                    Comment


                    • #20
                      Originally posted by Brane215 View Post
                      that metallic aftertaste from the can. ;o)
                      I somewhat agree. But at the end of the day, it's the working device that really counts.

                      If you use everything that way, your added value gravitates toward 0 and so does your $$$.
                      My value comes from the ability to solve my customers' uncommon tasks in a reasonable timeframe, at an acceptable price, with adequate quality. Since it's custom stuff, it's wrong to expect a sub-$10 unit price. I have to cover R&D costs, and that outweighs $2 savings on parts. So it's often a better idea to optimise R&D time instead, making it faster and cheaper for customers and more pleasant for me. Of course there are limits to that too, and I'm not a big fan of blatant overengineering.

                      So I would understand talk about part prices for mass production, where it rewards the increased R&D time adequately, or as an opportunity to learn new tricks. But doing engineering that gets valued below McDonald's staff? Uhm, what about value?! While I'm not all about money and I like digital electronics, it either has to be fun for me or it HAS to be paid reasonably, at least. It works better when both conditions are true, ofc.

                      Even on small series, I want to have things that really work and that can pack nice bang per buck.
                      And speaking for myself, I usually try to keep things balanced. Sure, it won't be the cheapest possible unit price. Instead, I balance unit price against R&D time, cost and quality in a way that makes sense and is good for both me and the customers. I don't target areas where I can't do that, so I'm not going to compete with Chinese factories by dumping prices, etc.

                      Trying this with such solutions would be like building a competitive car from LEGO. Once you jump over some obstacle, it becomes a discriminating factor between your solution and the others.
                      On the other hand, it's possible to make an impressive castle from bricks. And in electronics you HAVE to play Lego, one way or another. You do not diffuse the PIC (or, better, a task-optimised ASIC) yourself; you go buy it and use it as a brick in your circuitry. FTDI is a different kind of brick: larger, so you build faster, but with some limitations. So if you're after a custom castle, it may not be what you want. But keep in mind that a super-custom castle can get way too expensive: you'd have to pay a whole army of workers, or construction would not finish within your lifetime. That could be a problem.

                      1. The PIC10/12/16/18 core is utter crap. But Microchip has a very wide chip portfolio and its I/O capabilities are really good. Look at the PIC16F1527 and try to find anything with that amount of I/O for that kind of money.
                      And why do I have to? I don't try to compete by dumping prices. The few extra bucks saved by that don't cover the extra R&D time, and unless I'm in the mood to do it for fun, it's the wrong idea to my taste. And speaking for myself, I dislike the ecosystem around PICs, the PIC architecture, the tooling and so on.

                      ...and if someone cares about "bang per buck", I can't stay silent about the STM32 uC family. Hell yeah, you get a real 32-bit core, supported by decent toolchains like GCC. A uniform address space, fine for both code and data. Exception handling, catching troublesome conditions fast. Main oscillator failure handling, etc. Peripherals that would beat the dust out of PIC and AVR (DMA included). And all this can go below $1 in the low-end parts. Welcome to the future! Half of my fellows have already thrown their AVRs into oblivion. And as a fancy remark about portfolio and scalability: someone recently ported Linux to the STM32F4xx (the high-end parts of the family: Cortex-M4 at up to 180 MHz with hardware floating point). That's what I'd call portfolio & ability to scale.

                      And it works from 2V to 5V, which is kind of important for a programmer as well as for apps that demand noise tolerance etc.
                      Btw, these days 1.8V logic levels are something to consider. There are already some 1.8V-only systems, and over time it will only get worse.

                      Also, MPLAB X works well on Linux (or well enough), and I already have a PICkit 3 and don't want to pay for all those alternative ISP programmers before I cobble up my own solution.
                      And when it comes to uCs, I prefer to program something like an STM32: at least it comes with a sane CPU core and memory model, and I can use the GCC C toolchain I'm familiar with and the editor of my choice, uploading firmware via one of my FTDI dongles using STM's built-in boot loader to inject my code, at least initially. Even my notebook can handle it, which is convenient. There is one drawback: STM32 is more complicated than PIC or AVR. Though these days I mostly deal with even larger and more complex "application" SoCs running Linux. I like Linux and it gives me a lot of room for fun.

                      2. I am sick and tired of all those crappy programmers that end up killing either themselves or the chip being programmed. Or the board the chip is mounted on.
                      Sounds reasonable. But some things here are not easy to solve. Say, once you limit output current to avoid frying pins on a short circuit or a level clash, you also limit the slew rate, making things slow. What are you going to do about that?
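The current-limit vs speed trade-off can be put in numbers with the usual RC rule of thumb: a series protection resistor R driving a load capacitance C gives edges with a 10-90% rise time of roughly 2.2·R·C. The component values below are arbitrary examples:

```python
# Series protection resistor vs signal speed: RC-limited edges have a 10-90%
# rise time of about 2.2 * R * C. A crude "usable clock" estimate keeps the
# period about 10x the rise time. Example values, not a specific design.

def rise_time_s(r_ohm: float, c_farad: float) -> float:
    """Approximate 10-90% rise time of an RC-limited logic edge."""
    return 2.2 * r_ohm * c_farad

def max_toggle_hz(r_ohm: float, c_farad: float, margin: float = 10.0) -> float:
    """Crude usable clock rate: period kept margin-times the rise time."""
    return 1.0 / (margin * rise_time_s(r_ohm, c_farad))

# A 1 kOhm limiter (short-circuit current ~3 mA at 3.3 V) into 50 pF of load:
print(f"rise time: {rise_time_s(1_000, 50e-12) * 1e9:.0f} ns")
print(f"usable clock: ~{max_toggle_hz(1_000, 50e-12) / 1e6:.2f} MHz")
```

So a limiter gentle enough to survive a dead short on 3.3V logic already pushes the usable clock down toward the sub-MHz range on a capacitive load, which is exactly the "making things slow" problem.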

                      - open sourced, so I can service and tweak it when needed.
                      A good wish. Yet open source makes sense when you can get others on your side, so they do part of the work. I doubt many open source fans would be happy to deal with proprietary MPLAB stuff and the proprietary-minded ecosystem around PICs.

                      - capable of generating as well as acquiring waveforms, with chip programming being just a special case of canned algorithms
                      In fact, even FTDI can do something like this: the FT2232H and such can turn into something like 4 x 8-bit "ports" if a serial bus isn't what you want. Sure, a uC can do "pure bitbang" faster and better, but it also takes more effort.

                      - extensible within reasonable limits
                      Then it should be recognised by tools like OpenOCD and flashrom, I guess. Unless you dream of implementing a load of flash programming algorithms, JTAG/SWD stuff, etc. yourself, for something like 100+ different ICs.

                      - cheap
                      IMHO that's not the top priority for a programmer. Speaking for myself, I don't care about +/-$5 in programmer parts. And those who do can dig an ancient PC out of a pile of garbage and use a Just-a-Bunch-Of-Wires adapter on the LPT port. It costs $0.

                      - modularized. USB part and power generation on one board, customized backend on another, final part with the expensive ZIF socket on a separate small board.
                      Sounds like an interesting approach. Though wouldn't it make the whole design complicated? And what about reliability? A bunch of connectors in the path of MHz-range signals is a potentially troublesome thing.

                      - easily and cheaply serviceable
                      This is good for any devtool, etc.

                      - above all - with everything GALVANICALLY ISOLATED
                      Depends on how you define "everything". Would you separate grounds and power rails? I'm curious how you plan to do it.

                      You REALLY need to be isolated from the target machine, and that is another point where internal separation and communication through SPI come in handy,
                      IMHO it depends. A notebook running on battery is a "floating ground" thing, so only the voltage difference between the connecting wires really matters, and that can be tamed without full galvanic isolation. But if the computer is plugged into mains... OK, that's where it matters, for in-circuit programming of another mains-powered device. It's not a big deal for programming bare ICs, since an IC isn't a power source, so all voltages and currents are inherently limited to whatever the "host" is willing to provide. But still, a good idea to my taste.

                      since it is easy to isolate it with fast optocouplers.
                      I recently stumbled on some fancy "digital isolator" ICs. So if I wanted it quick, dirty, working tomorrow and able to use flashrom or openocd, I would consider attaching one of those digital isolators to an FTDI, powering one domain of the isolator from the target system and the other from the FTDI side. I guess you would dislike such a solution, though.

                      The PIC16F1527 is so cheap and has so much I/O that it doesn't make much sense to use a pile of CMOS/TTL parts to build a backend for programming old EPROMs, for example. It's much cheaper and easier to just plop a microcontroller on the backend board and be done with it.
                      I would agree that a uC beats a pile of logic ICs. Though I dislike PICs, and I actually got a bunch of STM32s; the low-end parts were below the $1 price tag (!!!!). The cheapest parts lack USB, but they're very cool things overall.

                      I plan to modularize its FW so I can tweak it easily for some other function or some other backend board, so that I don't need to plan too far ahead for every imaginable E/EPROM.
                      You can take a look at flashrom and openocd to get some ideas how it could be done (they do this via FTDI-based things etc., as well as via a few smarter programmers). Not sure you'll like what you see, but still...

                      If I come across some new chip that needs something new and I can't make the existing backend board do it, I plan to tweak the FW and/or the existing backend board, and/or make a new version of the board. Since the boards are small and easily DIY-able with my inkjet and photoresist, that shouldn't be too much of a PITA.
                      There is one funny catch with all uC-based programmers. Imagine I'm a noob and don't have any programmer-like circuits yet. I decide I need one, stumble on "that superb programmer" project, and suddenly realize that to program its uC I would need... wait, have you ever heard of "pkunzip.zip"? That's what makes uC-based programmers funny. Btw, FTDIs don't have this chicken-and-egg issue: they just work once connected (and can self-program their EEPROM as a bonus), making bootstrapping easy.

                      I work with gEDA and export EPS, tweak it in Inkscape and print it on my trusty old Epson R800 on transparency. Results are very good, and it usually takes me less than an hour from starting the print job to holding a finished 2-sided PCB in my hands.
                      And I have a LaserJet on my side, so I do "toner transfer" for ad-hoc stuff, simple prototypes and other "simple" things I need here and there. As for CADs... KiCad recently got some high-profile features like a fancy interactive router, differential pair creation, and track length matching (handy for those who deal with, say, DDR memory). And since I don't like drilling, I usually go for "mostly single-sided" SMD, where the bottom layer is either missing or mostly serves as a ground/power plane with a minimal number of vias. SMD is a victory.

                      Comment
