Linux 5.7 To See USB Fast Charge Support For Apple iOS Devices


  • Guest
    Guest replied
    Originally posted by SystemCrasher View Post
    It could also be just software bugs. Furthermore, it seems there may be no "dumb" HW layer left to dodge this bullet anymore. ...
    Well, yes, it's complex... But it's possible to engineer safe USB-PD chargers, even with that complexity involved. You're making this complexity out to be a big deal, but it's not.

    It all comes down to responsible behaviour from each individual: buy USB-PD chargers which are certified and/or produced by established brands (who risk their own reputation in case of a shitty product).

    And if USB-PD is as horrible a thing as you make it sound... then don't worry, it will get replaced by a better standard.



  • SystemCrasher
    replied
    Originally posted by kravemir View Post
    The possibility of new attack vectors is an issue... I perfectly agree on that. However, that's actually the only real issue I see with the USB-PD standard.
    It could also be just software bugs. Furthermore, it seems there may be no "dumb" HW layer left to dodge this bullet anymore.
    Then again, one would have to be very paranoid to think that somebody would manufacture chargers/devices intending to fry his device(s).
    Sure, but what about updating firmware, or (mis)using "vendor" commands? Vendors like that kind of thing, and they aren't terribly great about the security of all that most of the time, either.

    Common (10A to 16A) house wiring requires 1.5mm² to 2.5mm² cross-section wires (each)... not comparable; who would carry such thick cables with them on the road?
    Hmm, look, nobody can cheat physics. A thinner wire implies more voltage drop and more heat. At most you can make a >=5 amp cable more flexible, etc., but it would still weigh something, and the required copper has to be somewhere anyway. The point is that nobody puts chips into a power cord passing 16 amps - despite that being more than 3x of 5A. Should I mention that 16A is far more potent when it comes to doing damage and starting fires?
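    To put rough numbers on the copper argument above, here's a back-of-the-envelope sketch (the wire cross-sections are my own assumed values, not from any spec):

    ```python
    # Rough I*R loss in a USB cable: illustrates why current, not voltage,
    # dictates copper cross-section. Assumptions: 1 m cable (2 m round trip
    # through the power pair), copper resistivity ~1.72e-8 ohm*m.
    RHO_CU = 1.72e-8  # ohm*m

    def cable_loss(current_a, area_mm2, length_m=1.0):
        r = RHO_CU * (2 * length_m) / (area_mm2 * 1e-6)  # out + return conductor
        drop_v = current_a * r
        heat_w = current_a ** 2 * r
        return r, drop_v, heat_w

    for amps in (3, 5):
        for area in (0.08, 0.2, 0.5):  # assumed thin/medium/thick power wires
            r, drop, heat = cable_loss(amps, area)
            print(f"{amps}A over {area}mm^2: R={r*1000:.0f} mohm, "
                  f"drop={drop:.2f} V, heat={heat:.2f} W")
    ```

    The I²R term is the point here: doubling the current quadruples the heat, and the only cures are more copper or more volts.
    
    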

    There are step-down circuits which can easily protect these weak integrated circuits up to 30V... These circuits can very well act as the HW overvoltage protection you mentioned... it doesn't need to be incorporated in the charger.
    1) There are a ton of "legacy" and non-PD devices, and they aren't designed to handle 20V by any means.
    2) Only a few step-down ICs are rated for 30V max; "mobile" parts tend to have 6..16V ratings. "Normal" USB specs treat 9V as an "emergency failure" level. Plenty of devices would die well before that.
    3) When it comes to FETs, there is a trade-off between Qg, Rds(on) and Vds, if you know what all this is about. HV FETs have a worse set of parameters, both discrete and built-in, at least for classic MOS designs.
    4) A buck's OVP only lasts as long as its high-side FET does. Once it breaks, guess what happens next. And nobody would use an HV high-side FET; it jeopardizes efficiency and size.
    5) A buck converter doesn't perform very well at high step-down ratios. There are ways around that (e.g. multi-phase), but they're best suited for things not constrained in terms of space.

    TL;DR: while I like the idea, it results in a challenging, expensive design pursuing mutually contradicting goals. At which point I doubt it is going to be widespread, and it may sacrifice some other points to get there, like being larger and less efficient - so it's only OK for e.g. laptops/large tablets that have enough headroom to deal with all of that.
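    A quick sketch of point 5 above, with my own illustrative numbers: an ideal buck's duty cycle is D = Vout/Vin, so high step-down ratios leave very little switch on-time at typical switching frequencies:

    ```python
    # Ideal buck converter duty cycle D = Vout/Vin (losses ignored).
    # At high step-down ratios the on-time shrinks toward a controller's
    # minimum on-time - one reason single-phase bucks struggle there.
    def buck_on_time_ns(vin, vout, f_sw_hz):
        duty = vout / vin
        return duty, duty / f_sw_hz * 1e9  # on-time in nanoseconds

    for vin in (5, 9, 20):
        d, t_on = buck_on_time_ns(vin, 1.0, 2e6)  # assumed 1V rail, 2 MHz switcher
        print(f"Vin={vin}V -> duty={d:.1%}, on-time={t_on:.0f} ns")
    ```

    At 20V in and 1V out that's a 5% duty cycle - 25 ns of on-time at 2 MHz, which is close to the floor for many controllers.
    
    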

    Since not every charger is protected like that, and the charger could be faulty too, it's better to protect the device itself.
    Protection is quite straightforward - a zener + NPN BJT crowbar on the feedback loop's "aux" voltage (used to power the control IC). Even cheap supplies often have this protection. The aux voltage strongly correlates with the output, so it's not terribly accurate, but it saves the output from bizarre voltages. A flyback on its own could spit an "arbitrary" voltage to the output, so control and protection are mandatory, and that property makes it far more scary in this regard.
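    A toy model of that crowbar, with illustrative part values (not taken from any particular supply):

    ```python
    # Toy model of a zener + NPN crowbar on the aux winding: once the aux
    # voltage exceeds the zener knee plus the transistor's Vbe, the BJT
    # conducts and (in a real supply) kills switching or blows the fuse.
    V_ZENER = 6.2   # assumed zener voltage
    V_BE    = 0.65  # assumed BJT base-emitter threshold

    def crowbar_fires(v_aux):
        return v_aux > V_ZENER + V_BE

    for v in (5.0, 6.5, 7.2):
        print(v, "->", "CROWBAR" if crowbar_fires(v) else "ok")
    ```

    The threshold is fixed by the zener, which is exactly why this trick stops working once 20V is a *legal* output.
    
    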
    Also, there are various voltage regulators... and since a mobile device contains a power supply itself, it must have some power-related circuitry in itself, including voltage regulators.
    Sure, yet these come with their own trade-offs...
    Also, there are adjustable ("programmable") HW OVP circuits,
    It inherently means "you can misprogram that". Overall it's more expensive, complicated and fragile.

    Sorry for the wrong term, my bad. I meant that I haven't seen anybody complaining about it until now. Actually, I have seen lots of enthusiasm about improvements, even at the cost of complexity, as most of the people I've seen care mostly about performance
    Humm... as a rather unpleasant story unrolls across the globe, we'll see if this dumb consumerism antipattern survives it.



  • fox8091
    replied
    SystemCrasher Adding the chip to make a cable an active cable costs more money than just a "dumb" cable. Any company willing to add it is also willing to spend more on a thick enough gauge of wire. Any company which adds the chip without also increasing wire gauge is actively malicious.



  • Guest
    Guest replied
    Originally posted by SystemCrasher View Post
    4) It adds plenty of privacy woes and even funny attack vectors. Like this: spot John Smith - make sure it's him - and fry his phone. Two months later, of course, so he gets not the slightest clue as to why his device is toast. Software programming could make that doable.
    The possibility of new attack vectors is an issue... I perfectly agree on that. However, that's actually the only real issue I see with the USB-PD standard. Then again, one would have to be very paranoid to think that somebody would manufacture chargers/devices intending to fry his device(s).

    Hum? Standard house wiring goes well beyond 3A, without any ICs. So this sounds like a pretty much artificial problem, to put it mildly. ...
    That comparison is wrong from an electrical point of view...

    Common (10A to 16A) house wiring requires 1.5mm² to 2.5mm² cross-section wires (each)... not comparable; who would carry such thick cables with them on the road?

    There are step-down circuits which can easily protect these weak integrated circuits up to 30V... These circuits can very well act as the HW overvoltage protection you mentioned... it doesn't need to be incorporated in the charger. Since not every charger is protected like that, and the charger could be faulty too, it's better to protect the device itself. Also, there are various voltage regulators... and since a mobile device contains a power supply itself, it must have some power-related circuitry in itself, including voltage regulators.

    Also, there are adjustable ("programmable") HW OVP circuits. More likely, most of them are based on an adjustable base circuit (i.e. using an op-amp), just internally wired as integrated circuits so they achieve some specific output voltage without any extra wiring effort.
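    For illustration, the usual comparator-style version of this (the reference and resistor values here are my own picks, not from a specific part): a divider scales the monitored rail down to a fixed reference, and choosing the divider "programs" the trip point:

    ```python
    # Comparator-style OVP: trips when Vrail * Rbot/(Rtop+Rbot) > Vref.
    # Choosing the divider sets the threshold - and is also where a
    # wrong value can silently move the protection point.
    V_REF = 1.24  # assumed bandgap reference voltage

    def trip_voltage(r_top, r_bot, v_ref=V_REF):
        return v_ref * (r_top + r_bot) / r_bot

    # Example: aim the trip point just above a 5V rail
    print(f"trip at {trip_voltage(39_000, 10_000):.2f} V")
    ```

    A 39k/10k divider against a 1.24V reference trips just above 6V - fine for a 5V-only design, useless once 20V must be allowed through.
    
    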

    Well, I've never authorized you to make statements on my behalf. That's about "nobody complains", of course. I dislike overengineered systems; it makes them buggy, fragile and often even malicious. And I do complain about this. Therefore your statement doesn't hold true. Next time please think twice before claiming "nobody" or "everybody" does (not) X.
    Sorry for the wrong term, my bad. I meant that I haven't seen anybody complaining about it until now. Actually, I have seen lots of enthusiasm about improvements, even at the cost of complexity, as most of the people I've seen care mostly about performance (assuming no compromises are knowingly made regarding security... i.e. the recent CPU security issues)



  • SystemCrasher
    replied
    Originally posted by kravemir View Post
    Well, of course both sides must be redesigned to support programmable higher voltage. Active cables are a good idea, though, as they disallow the use of shitty cables for high power.
    Seems some people are good at failing to evaluate all the consequences of their views...
    1) I don't get how the fact that an IC is embedded in the plugs actually proves anything about cable quality.
    2) On the other hand, it adds more failure points.
    3) It makes the cable more expensive, and only a few vendors would supply that.
    4) It adds plenty of privacy woes and even funny attack vectors. Like this: spot John Smith - make sure it's him - and fry his phone. Two months later, of course, so he gets not the slightest clue as to why his device is toast. Software programming could make that doable.
    5) Furthermore, you see, a thing that withstands 5V is one design, but a thing that withstands 20V is an entirely different story... tons of "USB-related" ICs and so on these days have something like 6...6.5V as absolute maximum ratings. So if the supply gives them 20 volts, guess what's going to happen next. The reasons could be malicious or just a software error. Either way, the thing sounds more fragile and complicated than it could be.
    6) The mentioned problems guarantee some issues: it has to be expensive, so poor sales, so the "magic of numbers" doesn't kick in -> prices don't go down. You see, if you buy 100 pcs of an IC it's one price, and if you buy 1M ICs it's a different price level any day. And it's quite hard to sell millions of expensive power supplies and cables.

    Maybe that explains why Apple (and a few other companies) rather prefer to reinvent their (incompatible) wheels. Say, I've seen a 5V 4.5A charger. That clearly informs "their" phone in some simpler and cheaper way, and the whole thing doesn't have to withstand 20V anywhere.

    Going over 3A is not easy on cables anyway, and programmable higher voltage is a safer way to go...
    Hum? Standard house wiring goes well beyond 3A, without any ICs. So this sounds like a pretty much artificial problem, to put it mildly. Furthermore, if you try to save a few pennies here you can set your house on fire, which sounds like a far worse problem. Then, sure, erroneously spitting 20V into a 5V-only device would safely toast it with good probability. Look, 5V chargers usually have "hardware" overvoltage protection: should things turn really bad, it thwarts an uncontrolled output voltage rise. It might spit out something like 6V or so, but not much beyond that. But if the design has to be able to output 20V in the normal course of action, you can't just put in "unconditional" HW OVP protection anymore. Even if there is such a circuit, it has to be software-configurable to be able to output 20V at all. At this point, all it takes is software getting a few things wrong. Then there is no real "HW"-level protection to handle a scenario like this.

    I can understand e.g. a "software supply" where SW commands a 5V charger to go to e.g. 3.7V instead. It's both safe for 5V devices and allows "direct" Li-ion charging: just command the charger to the desired voltage, connect it to the battery, watch the current and adjust the charger voltage to keep the current stable. At best it allows getting rid of the battery-charge step-down (in the real world it has to stay to deal with "dumb" chargers, since not all 5V chargers are "software") and improves overall system efficiency (one less power conversion). Yet it can still enjoy a "HW OVP" that stops switching if the voltage exceeds something like 6V or so.
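    That charge loop can be sketched with a mocked charger and battery (all names and values here are hypothetical, just to show the shape of the control logic):

    ```python
    # Sketch of the "direct" Li-ion charge loop described above: command the
    # charger's output voltage, watch the current, nudge the voltage to hold
    # the current steady, and cap at the CV limit. Charger/battery are mocked.
    TARGET_A, V_MAX, STEP_V = 1.0, 4.2, 0.01  # assumed 1C target, 4.2V limit

    class MockPack:
        def __init__(self):
            self.v_batt, self.r_int = 3.6, 0.1  # assumed cell voltage / ESR
        def current(self, v_charger):           # I = (Vchg - Vbatt) / Rint
            return max(0.0, (v_charger - self.v_batt) / self.r_int)

    pack, v_set = MockPack(), 3.6
    for _ in range(100):
        i = pack.current(v_set)
        if i < TARGET_A and v_set < V_MAX:
            v_set = min(v_set + STEP_V, V_MAX)  # constant-current phase
        elif i > TARGET_A:
            v_set -= STEP_V
    print(f"settled at {v_set:.2f} V, {pack.current(v_set):.2f} A")
    ```

    The whole regulator is a dozen lines of firmware - the point being that a dumb 6V-clamped HW OVP still backstops it if the software goes wrong.
    
    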

    Everything gets more complicated with USB-PD, but it's a much more versatile standard, going up to 60W without going over 3A. And intelligent, powerful things are complicated... Nobody complains about how complex current x86 and ARM CPUs are; they are black-boxed integrated circuits which work... Maybe USB-C isn't the best-designed mechanical plug; we'll see over time.
    Well, I've never authorized you to make statements on my behalf. That's about "nobody complains", of course. I dislike overengineered systems; it makes them buggy, fragile and often even malicious. And I do complain about this. Therefore your statement doesn't hold true. Next time please think twice before claiming "nobody" or "everybody" does (not) X.
    Last edited by SystemCrasher; 27 February 2020, 07:22 AM.



  • Guest
    Guest replied
    Originally posted by SystemCrasher View Post
    The problem is two-fold. On one side there is the charger, on the other there is the device. Both have to be changed to support PD. ...
    Well, of course both sides must be redesigned to support programmable higher voltage. Active cables are a good idea, though, as they disallow the use of shitty cables for high power. Going over 3A is not easy on cables anyway, and programmable higher voltage is a safer way to go... Everything gets more complicated with USB-PD, but it's a much more versatile standard, going up to 60W without going over 3A. And intelligent, powerful things are complicated... Nobody complains about how complex current x86 and ARM CPUs are; they are black-boxed integrated circuits which work... Maybe USB-C isn't the best-designed mechanical plug; we'll see over time.
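    For reference, the arithmetic behind "60W without going over 3A": PD's fixed-voltage tiers raise the voltage rather than the current (going to 5A additionally requires an e-marked cable):

    ```python
    # USB-PD "fixed supply" voltage tiers: higher wattage comes from voltage,
    # keeping cable current at or below 3A (5A needs an e-marked cable).
    PD_FIXED_V = (5, 9, 15, 20)

    for v in PD_FIXED_V:
        print(f"{v:>2} V x 3 A = {v * 3:>2} W")
    ```
    
    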



  • SystemCrasher
    replied
    Originally posted by kravemir View Post
    Aren't there already pre-made integrated circuits handling all the "logic", which just need to be wired to a power supply to control the output voltage?
    The problem is two-fold. On one side there is the charger, on the other there is the device. Both have to be changed to support PD. There are some pre-made ICs - but they may also need a microcontroller to handle the system's "overall behavior". They cost extra $ and one has to seriously change their designs to support that, on both sides of the link. To make it more fun, PD seems to require "active cables" (that boast ICs in the connectors, talking over the same secondary link, e.g. to report the cable capable of 5 amps, etc). Overall this combo looks quite discouraging, as it's intrusive, expensive and overengineered, not to mention the funny consequences, like e.g. attempts to track devices using "public" facilities and other "untrusted" chargers. Sure, if you absolutely want all that, no matter the price, you'll find solutions. However, I don't think it's going to become the default state of things everywhere.

    I guess the idea of a "programmable power supply" is cool and many other uses can benefit from it. Yet the choice of a rather arcane protocol that virtually mandates custom HW and intrusive changes to existing designs is discouraging enough. Say, Type-C on its own offers a far easier "mandatory" option to negotiate charging at 5V, 1.5 or 3 amps, by merely using a few resistors on the host and device sides and sensing voltages over the same "secondary" wires. That part is far easier/cheaper/non-intrusive to implement on both the charger and device sides - but it's limited to 5V 3A as the absolute maximum. Yet it's likely to be "good enough" for smartphones, their chargers and such (so far "standard" chargers rarely exceed 5V 2A, though some custom things go as far as 5V 4A or so).

    And well, while upgrading a 5V 1A design to 5V 2A could happen in a relatively straightforward manner, spitting out MUCH more power or changing voltage implies, basically, a complete redesign of the power supply. And if you exceed certain power limits (depending on regulations, about 48W or so IIRC) you also MUST provide PFC. On a very basic level, PFC is pretty much yet another power supply - at which point the design is doomed to be far more expensive and complicated compared to a mere "smartphone charger". It has to use something like 2 sets of magnetics, MOSFETs and so on. Not to mention it's going to be bulkier and heavier. At which point many users would become rather unhappy about the prospect of taking that brick with them. TL;DR: I do think it had chances to be a bit better than that. Also, look at e.g. Apple plugs and sockets vs USB Type-C sockets. Apple clearly had far superior mechanical engineers compared to the USB-IF.
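    The resistor-only Type-C negotiation mentioned above can be sketched like this. The Rp/Rd values are the commonly used 5V pull-up tier from the Type-C spec (the spec also defines other pull-up variants):

    ```python
    # Type-C's resistor-only current advertisement: the source pulls CC up
    # through Rp, the sink loads it with Rd = 5.1k, and the sink reads the
    # resulting CC voltage to learn the current limit. No protocol needed.
    RD = 5.1e3
    RP_ADVERT = {56e3: "default USB", 22e3: "1.5 A", 10e3: "3.0 A"}

    def cc_voltage(rp, vpull=5.0):
        return vpull * RD / (rp + RD)

    for rp, label in RP_ADVERT.items():
        print(f"Rp={rp/1e3:.0f}k -> CC = {cc_voltage(rp):.2f} V  ({label})")
    ```

    Three resistor values and one ADC reading cover the whole 5V/3A scheme, which is the contrast being drawn with PD's 300kbps protocol.
    
    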



  • willmore
    replied
    Originally posted by kravemir View Post

    Aren't there already pre-made integrated circuits handling all the "logic", which just need to be wired to a power supply to control the output voltage?
    That's what I was thinking. Doing the signaling for these non-standard protocols requires sensing non-standard voltages on the D+/D- lines. I'm curious how that's handled by normal chipsets; I'd be surprised if there were a standard way for a driver to sense that across all chipsets. I need to poke through this patch.
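    For what it's worth, the commonly reported Apple D+/D- levels look roughly like this; the exact voltages come from teardown write-ups, not any published spec, so treat them as assumptions:

    ```python
    # Commonly reported (unofficial) Apple charger "brick ID": the charger
    # biases D+/D- to fixed voltages and the device infers its current limit.
    # Voltage pairs below are teardown-derived assumptions, not a spec.
    APPLE_BRICK_ID = {
        (2.0, 2.7): "1.0 A",
        (2.7, 2.0): "2.1 A",
        (2.7, 2.7): "2.4 A",
    }

    def advertised_current(d_plus, d_minus, tol=0.2):
        for (dp, dm), amps in APPLE_BRICK_ID.items():
            if abs(d_plus - dp) <= tol and abs(d_minus - dm) <= tol:
                return amps
        return "unknown"

    print(advertised_current(2.7, 2.7))
    ```

    Note the kernel patch in the article works from the host side over USB, so it doesn't need to sense these charger-side levels itself.
    
    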



  • Guest
    Guest replied
    Originally posted by SystemCrasher View Post
    PowerDelivery is fairly complicated to implement and generally requires either considerably changing circuits to use a new controller, or adding an extra chip, which adds cost. And the "new" Type-C protocol is complicated/fast enough to put "software" (firmware) solutions, which cost nothing, more or less out of the equation. I'm not exactly sure what the USB-IF was thinking, or why they need a whopping 300kbps on the secondary channel merely to negotiate power and roles, but I wouldn't count on PD uniting them all - for complexity, pricing and serious HW redesign reasons.
    Aren't there already pre-made integrated circuits handling all the "logic", which just need to be wired to a power supply to control the output voltage?



  • SystemCrasher
    replied
    PowerDelivery is fairly complicated to implement and generally requires either considerably changing circuits to use a new controller, or adding an extra chip, which adds cost. And the "new" Type-C protocol is complicated/fast enough to put "software" (firmware) solutions, which cost nothing, more or less out of the equation. I'm not exactly sure what the USB-IF was thinking, or why they need a whopping 300kbps on the secondary channel merely to negotiate power and roles, but I wouldn't count on PD uniting them all - for complexity, pricing and serious HW redesign reasons.
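    That 300kbps channel uses biphase mark coding (BMC) on the CC wire; a minimal encoder sketch shows the line coding itself is simple - the cost is in the protocol state machines above it:

    ```python
    # Biphase Mark Coding, as used by USB-PD's 300 kbps CC-wire channel:
    # the line toggles at every bit boundary, plus once mid-bit for a '1'.
    # Each input bit becomes two half-bit line levels.
    def bmc_encode(bits, level=0):
        out = []
        for b in bits:
            level ^= 1            # transition at the bit boundary
            out.append(level)
            if b:
                level ^= 1        # extra mid-bit transition encodes a 1
            out.append(level)
        return out

    print(bmc_encode([0, 1, 0, 1, 1]))
    ```

    Because there's at least one transition per bit, the receiver recovers the clock from the data itself, with no separate clock wire.
    
    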

